UBC Theses and Dissertations
Development and testing of an infrared target tracking system Smith, Richard Lloyd 1990

Full Text
Development and Testing of an Infrared Target Tracking System

Richard Lloyd Smith
B.Sc., University of Calgary, Calgary, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 1990
© Richard Lloyd Smith, 1990

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

The University of British Columbia
Vancouver, Canada

Abstract

A novel design for an active point ranging and tracking instrument intended for large work volume applications is presented in this thesis. The proposed instrument employs two tracking stations at a known baseline distance in order to triangulate on, and dynamically track, a pulsed infrared target. A prototype instrument, consisting of a single target tracking station and modulated target, has been built and tested. Each target tracking station is composed of a gimbaled mirror optic deflection system and an infrared sensitive camera. The angular resolving capability of the target tracking station is approximately 0.1 degrees when locked on a static target. The target tracking station is able to follow a target moving at a maximum speed of 4.9 meters per second at a distance of 1 meter. Results of static and dynamic testing performed on separate components of the prototype instrument, and on the complete target tracking station, are presented.
Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgments
1 Introduction
  1.1 Problem Formulation
  1.2 Overview of Current Ranging Methods
    1.2.1 Triangulation
      Passive Triangulation Systems
      Active Triangulation Systems
    1.2.2 Interferometry
    1.2.3 Radars
    1.2.4 Focusing
    1.2.5 Scene Constraint
  1.3 System Design and Theory of Operation
2 Camera System
  2.1 Receiver Subsystem
    2.1.1 Lens System
    2.1.2 Position Sensitive Detector
    2.1.3 Instrumentation
      Low Noise Preamplification and Biasing Stages
      Phase Sensitive Detector and Timing Circuitry
      Variable Gain Stages and Gain Control
      Sum and Difference Stages
    2.1.4 Processor
  2.2 Target Subsystem
  2.3 Camera Static Testing
    2.3.1 Fine Camera Resolution
    2.3.2 Central Area Grid Mapping
    2.3.3 Wide Area Grid Mapping
  2.4 Camera Dynamic Testing
3 Mirror Optic Deflection System
  3.1 Functional Description
  3.2 Control Motor Model
  3.3 Beta Motor Spring Compensation
  3.4 Experimentally Determined Motor Parameters
  3.5 Motor PID Controllers and Motor Step Response
4 Tracking System Modelling and Control
  4.1 Camera Gain Controller
  4.2 Camera Model
    4.2.1 Mathematical Model of the Tracking System Optics
    4.2.2 Target Image Movement About a Set Point
  4.3 Target Tracking System Static Testing
  4.4 Target Tracking System Dynamic Testing
5 Conclusions
  5.1 Contributions
  5.2 Suggestions for Further Work
References

List of Tables

1.1 Basic Ranging Techniques
2.2 Infrared LED Characteristics
3.3 Motor Performance Parameters
4.4 Static Testing: Approximate Theta and Beta Angles
4.5 Static Testing: PID Controller Values

List of Figures

1.1 Geometry for Triangulation Ranging
1.2 Synchronized Double Sided Mirror Scanner Geometry
1.3 Laser Interferometer Block Diagram
1.4 Basic Principle of Biris Sensor
1.5 Schematic Representation: Single Target Tracking Station and Target
1.6 Cylindrical Triangulation Geometry
1.7 Planar Ranging to Target
2.8 Receiver Subsystem Block Diagram
2.9 One Dimensional Position Sensitive Detector
2.10 Two Dimensional Position Sensitive Detector
2.11 PSD Junction Capacitance vs. Bias Voltage
2.12 Simplified Front End
2.13 First Stage of Optical Subsystem: Idealized Waveforms
2.14 Phase Sensitive Detector
2.15 Timing Signals Derivation Waveform
2.16 Phase Sensitive Detector Timing Signals
2.17 Acceptance Strengths of the Phase Sensitive Detector as a Function of the Input Frequency
2.18 Block Diagram of Target Subsystem Hardware
2.19 Static Testing Geometry
2.20 X-Axis Fine Resolution
2.21 Y-Axis Fine Resolution
2.22 Central Area Grid Map
2.23 Wide Area Grid Map
2.24 Camera Step Response, X Direction
2.25 Camera Step Response, Y Direction
2.26 Step Response: Path Followed on Detector Surface
3.27 Schematic of Mirror Scanner Gimbal
3.28 Angular Reference Axes
3.29 Mirror Optic Deflection System Components
3.30 Scanner Sweep Range
3.31 Beta Motor Spring Compensation Mapping
3.32 Theta Drive Axis: Plant Step Response
3.33 Beta Drive Axis: Plant Step Response
3.34 Theta Axis Step Response
3.35 Beta Axis Step Response
4.36 Target Tracking System Control Strategy
4.37 Optical System Model: Graphic Representation
4.38 Mirror Rotation Angle Definition
4.39 Determination of Mirror and Z-Axis Intersection Point
4.40 PSD Coordinate Definitions
4.41 Imaged Target Position About the Lock Centre, β = 45 Degrees and θ = 0 Degrees: β Increment = 0.5 degrees, θ Increment = 0.25 degrees
4.42 Imaged Target Position Maps for Various Lock Centres: β Increment = 0.1 degrees, θ Increment = 0.5 degrees
4.43 Comparison of Imaged Target Positions for Lock Centres β = 20 to 70 Degrees and θ = 0 Degrees: β Increment = 5.0 degrees, θ Increment = 0.5 degrees
4.44 Histogram of Centered Target Test
4.45 Histogram of Right Target Test
4.46 Histogram of Left Target Test
4.47 Dynamic Testing Configuration
4.48 Tracking Error Geometry
4.49 40 RPM Test: Theta and Beta Angle vs. Time
4.50 40 RPM Test: X and Y Detector Positions
4.51 150 RPM Test: Theta and Beta Angle vs. Time
4.52 150 RPM Test: X and Y Detector Positions
5.53 System Working Volume

Acknowledgments

I would like to thank Peter Lawrence for the suggestions and support he has given during each stage of development of this project. Thank you, to my friends and family for being so supportive.

Chapter 1
Introduction

1.1 Problem Formulation

The impetus for the topic of this thesis, Development and Testing of an Infrared Target Tracking System, came from the desire to develop an instrument capable of tracking the endpoint of a large work volume manipulator. The positional information made available by such an instrument can be used:

1. to perform kinematic calibration of the manipulator;
2. to study the real dynamics of the arm; or
3. to aid in the endpoint control of a flexible manipulator.

The desired instrument must have point tracking capability, a wide angle of view, the ability to range to widely variable depths, and moderate range resolving ability. Alternative applications of the designed instrument include:

1. the positioning of autonomous vehicles on an assembly line floor;
2. the study of mechanical or human motions;
3. the capture of surface contours in order to create computer 3-D models of real objects; and
4. the alignment of spacecraft for docking preparation purposes.

This abbreviated list contains only a fraction of all possible applications which demand instrumentation capable of both ranging and tracking.
A single tracking station of the proposed tracking and ranging instrument, designed to meet these general requirements, has been developed. The proposed instrument is very accessible in terms of its modularity, cost, and appropriateness to a wide variety of applications. A prototype system has been built and tested in the laboratory so that theoretical design concepts could be tested in practice.

The development of supporting technology such as range sensors naturally follows general system development. Consequently, the majority of ranging and tracking systems are developed with a specific application in mind. For example, consider the calibration of individual robotic arms as they are manufactured in order to account for particular machining tolerances. In this case, a knowledge of manipulator dynamics and kinematics must be acquired before considering arm calibration utilizing ranging and tracking instrumentation. The aforementioned emphasis on application specific development has resulted in the evolution of a wide variety of ranging instruments employing a vast array of methods. However, the accessibility of developed technology has been called into question. Experts in the field of range sensing have lamented the general lack of a supporting technology base [1][2]. A supporting technology base is an industrial information network which collects and provides up-to-date knowledge of a specific technology. Designers may find that this technological information is proprietary; therefore, new designs do not benefit from data collected about previously tried techniques. The preceding discussion exemplifies both the wide range of applications and the real need for the developed ranging and tracking instrument.
The following sections contain a literature review of current methods of approaching the ranging problem, followed by a more detailed description of the target tracking system that was constructed, as well as the theory by which it operates. The succeeding chapters describe in further detail specific elements of the system, the tests performed, and the results obtained when the system underwent testing.

1.2 Overview of Current Ranging Methods

The design of any ranging instrument must satisfy certain design specifications. Of prime consideration is whether the field of view and the range of the device are commensurate with the search volume of interest. The sensor must also provide data of sufficient accuracy and resolution to be compatible with a particular application. Collected data must be updated at a rate appropriate to the speed of the observed system. The interpretation of the collected range data should be simple; conversion of the instrumentation output data (angular or otherwise) into points in the reference coordinate frame should require a minimum of computation. Ideally the resulting system will be small and modular, and could be easily manufactured and modified. The designer must weigh the merits of each of these criteria when selecting a particular range finding scheme, essentially performing a cost-benefit analysis of the available methods and designs.

Current range sensors employ a large variety of techniques. Each of the techniques will be presented, followed by a brief description of the basic principle or principles involved as well as an example of a system which employs the technique. Many of the examples given will be in the field of range imaging systems and therefore will address only the ranging function. A summary of the considered techniques is given in Table 1.1. This survey is meant to provide only an introduction to the topic of range sensing.
Comprehensive surveys of range imaging techniques have already been compiled by several authors [3][1][4][5]. The infrared target tracking system developed in this thesis falls into the point active triangulation category.

Basic Ranging Principle   Divisions                Specific Methods
Triangulation             Passive Triangulation    Binocular Stereo [3][1]
                                                   Trinocular Stereo
                                                   Epipolar Motion Stereo
                                                   Axial Motion Stereo
                          Active Triangulation     Point Source [6][7][8]
                                                   Structured Light: Line,
                                                   Texture [9], Multiple
                                                   Points [10], Grid,
                                                   and others
Imaging Radars            Time of Flight Methods   Pulse Detection [11][12]
                                                   Amplitude Modulation
                                                   Frequency Modulation
                                                   Streak Cameras
Interferometry            Moire Interferometry     Projection Interferometry [3]
                                                   Shadow Interferometry
                          Laser Interferometry     Conventional and Modified
                                                   Methods [1]
Focusing                  Passive Focusing         Passive Focusing [1]
                          Active Focusing          Active Focusing [9]
Scene Constraint          Scene Constraint         Defined Geometry
                                                   Range From Occlusion
                                                   Diffuse Reflection

Table 1.1: Basic Ranging Techniques

1.2.1 Triangulation

Triangulation provides a simple trigonometric method of determining the location of a point. Triangulation methods rely on the law of sines; this law is used to determine the length of the unknown sides of a triangle given that the length of one side and two of the angles of the triangle are known. The geometry of the system is as shown in Figure 1.1.

Figure 1.1: Geometry for Triangulation Ranging

Given B, a, c, m, and n, the position of the point P is given by:

    B / sin(b) = A / sin(a) = C / sin(c)        (1.1)

    x_p = C tan(n) = A tan(m)
    y_p = A cos(c) = B - C cos(a)               (1.2)
    z_p = A sin(c) = C sin(a)

Triangulation systems must then determine the angles c, m, a, and n and the baseline distance B so that the position of the point P can be calculated.
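The triangulation relations above can be sketched numerically. The following Python fragment is an illustrative sketch only; the function name, argument order, and the convention that station 1 sits at the origin with the baseline along the y-axis are our assumptions, not the thesis's.

```python
import math

def triangulate(B, a, c, m):
    """Planar triangulation via the law of sines.

    B : baseline length between the two observation stations
    a : angle at station 2 between the baseline and its ray to P (radians)
    c : angle at station 1 between the baseline and its ray to P (radians)
    m : elevation angle of P as seen from station 1 (radians)
    Returns (x, y, z) of P, with station 1 at the origin and the
    baseline along the y-axis (an assumed convention).
    """
    b = math.pi - a - c                  # interior angles of a triangle sum to pi
    A = B * math.sin(a) / math.sin(b)    # side from station 1 to P (law of sines)
    x = A * math.tan(m)                  # out-of-plane offset from elevation angle
    y = A * math.cos(c)                  # along-baseline component
    z = A * math.sin(c)                  # in-plane depth component
    return x, y, z
```

With a 2 m baseline and 45 degree angles at both stations, the point resolves to one metre of depth, illustrating how the two measured bearings and the known baseline fix the point.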
Any triangulation system is subject to the missing parts problem, where the point P is obstructed from view when one of the vectors v1 or v2 is optically obstructed due to the offset distance of the viewing positions [3]. Under these circumstances the algorithm fails and the range to the object cannot be determined. Spatial resolution of triangulation systems decreases as the distance to the target increases. Triangulation ranging systems may be divided broadly into either passive or active categories. Passive systems rely only upon the ambient light available within the scene to illuminate the target, whereas active systems use a controlled light source to select the salient feature of interest.

Passive Triangulation Systems

Passive triangulation techniques include binocular stereo, trinocular stereo, epipolar motion stereo and axial motion stereo [3]. Each of these techniques uses two or more captured images of the same scene from different predefined viewpoints. The displacement of the imaged positions of the same point, often called the disparity, is indicative of the range of the viewed point. The epipolar motion and axial motion techniques use the same camera, which is translated a known distance after each measurement. The epipolar motion technique translates the camera laterally in small increments after each image capture, while the axial motion technique translates the camera in small increments along its optical axis. The binocular stereo method uses two cameras separated laterally from each other by a known baseline and with parallel optical axes. The binocular stereo method of ranging is closely related, on a very simplified level, to the basic principle by which human vision determines depth.
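For the binocular stereo case just described, the disparity-to-depth relation can be illustrated with the standard pinhole model. This sketch is not from the thesis; the function name and units are our assumptions, and the formula assumes parallel optical axes and a matched point pair.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Range of a point from its binocular stereo disparity.

    focal_px     : camera focal length expressed in pixels
    baseline_m   : lateral separation of the two cameras in metres
    disparity_px : horizontal shift of the point between the two images
    Larger disparity means a closer point; zero disparity means the
    point is effectively at infinity (or the match failed).
    """
    if disparity_px <= 0:
        raise ValueError("point at infinity or correspondence not found")
    return focal_px * baseline_m / disparity_px
```

Note that the correspondence problem discussed in the text is hidden inside `disparity_px`: the hard part is producing that matched pair at all, not the arithmetic that follows.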
The displacement of imaged positions of the same point as seen by each of the cameras provides the information to determine range to the point. The trinocular stereo technique is an extension of the binocular stereo case which uses a third camera to form an orthogonal baseline with one of the original two cameras. Trinocular techniques provide an alternate viewing axis which allows the system to range to either horizontal or vertical lines [3]. The binocular and trinocular stereo techniques require a single image capture to determine object range. Cameras in stereo vision systems are normally aimed with parallel optical axes and therefore are not capable of converging their centre of vision on an observed point as in human vision. This constraint limits the working area of the device to the space where the fields of view of the cameras overlap [1]. The working volume of these devices is limited to the space within view of each camera in the system. All of the passive triangulation ranging methods rely on various techniques to determine the imaged position of a specific 3-D point seen by each of the cameras in the case of the stereo techniques, or between subsequent images in the case of the motion techniques. Pairing of identical image points between different views is fundamental to the triangulation algorithm and is often referred to as the correspondence problem [1].

Active Triangulation Systems

Active triangulation ranging techniques overcome the correspondence problem of passive systems by introducing a structured light source. Identical image points may then be easily found using simple techniques such as thresholding. A simple active triangulation system can be configured if one of the cameras in a passive binocular stereo system is replaced by a laser light source.
If this light can be directed into the scene with a known geometry, and if the remaining camera can detect the illuminated point and thereby confirm a commonly viewed point, then all the information necessary to compute the range to the point is provided, namely the directional vectors of the camera and laser and a predefined baseline distance. By simultaneously scanning both the camera and the laser, the operating workspace of the system can be increased. A clever geometry for simultaneous scanning of the laser and camera has been used by Rioux et al., as shown in Figure 1.2 [6][11]. This modification of the basic active triangulation geometry allows both camera and source to use the same scanning mechanics.

Figure 1.2: Synchronized Double Sided Mirror Scanner Geometry

This basic geometry was used in a prototype system with a working volume defined by a range from 10 cm to 1 m and a scan angle of 40 degrees. This system had a resolution of 256 angular increments and a scan rate of 25 profiles per second. The double sided mirror scans the same line across the scene on each rotation. The scanner may sweep over a surface by moving the entire apparatus in position between scans, or by adding a nodding mirror which scans different slices of the object of interest. There are many developed ranging systems which use active triangulation techniques. Researchers have used Selspot photodiode cameras in a binocular stereo setup to detect the position of an infrared emitting LED [8]. Others have used a sonic source and multiple sonic detectors in an active triangulation geometry [13]. A tracking and ranging system has been developed using gimbaled mirrors to reflect laser light off a cat's eye reflector [7][14][15]. This system also uses triangulation methods in order to determine target position.
Another ranging system sweeps planes of light across a scene in a known time frame and uses camera pixel intensity to triangulate to points in the camera's field of view [16]. Variations of the basic active triangulation scheme introduce more complex lighting to the scene in order to range a larger number of points simultaneously. These methods project light patterns onto the scene of interest, including multiple points, lines, stripes, grids [10], or some other texture. For example, the effect of projecting a line onto a scene from a light source displaced from the camera is to produce displacements in the imaged line which are proportional to depth. Other textures produce similarly exploitable effects.

1.2.2 Interferometry

Interferometry takes advantage of the fringe effects created when waves of equal frequency interfere. Interferometry ranging techniques presently in use include Moire interferometry and laser interferometry [3]. In Moire interferometry, a light projector illuminates the object of interest through a finely spaced grating, thereby amplitude modulating the light source in the scene space. Spatial amplitude modulation refers to the property where close objects will be illuminated with closely spaced lines while far objects will be illuminated with more widely spaced lines. A camera offset from the projector along a known baseline records the scene through another identical grating. The resulting image contains an interference pattern of low spatial frequency. The spatial frequency of this Moire pattern, along with the relative geometry of the source and camera, can be used to calculate the range of the object being illuminated. Moire techniques are effective when ranging to surfaces which are relatively smooth and which have surface normals more or less directed towards the light source and camera. These constraints make the technique suitable only for highly controlled measurements.
With laser interferometry, an interference pattern is created when a coherent light beam is split and forced to travel two different paths, then recombined and lastly detected by a specialized fringe counter sensor [1]. The fringe counter sensor is able to determine phase differences between each of the light "signals". A typical configuration for a system employing laser interferometry is shown in Figure 1.3.

Figure 1.3: Laser Interferometer Block Diagram

The detected phase differences are directly related to the difference in the length of the path taken. The fringe detector is only able to measure differences in phase between the two signals, and therefore absolute range is not measured. However, the relative range of an object from its initial starting position can be measured extremely accurately knowing the present phase difference and the number of wave counts, or equivalently fringe counts, from the starting position. This technique requires a continuous line of sight between the measurement system and the target. The relative distance traveled by the target from its starting position is a cumulative quantity and is unknown if loss of sight of the target occurs at any time once a measurement reference is established. Also, laser interferometry normally requires a retroreflector to be fixed to the object of interest. Range resolutions in the order of 1/1000 of a wavelength, or equivalently in the sub-nanometer range, are possible under ideal conditions with this technique [3].

1.2.3 Radars

Imaging radar systems typically:
1. emit a well defined, directed signal;
2. detect this signal's spatially transformed echo; and
3. process these signals to derive ranges to viewed objects.

If the speed of propagation and transit time of a signal are known, then the distance to the object point may be determined.
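The time-of-flight arithmetic behind this statement is simple; what is hard is the timing precision it demands at optical propagation speeds. The sketch below (our own helper names, not from the thesis) computes range from a round-trip time and, conversely, the round-trip timing precision a given range resolution requires.

```python
C = 3.0e8  # approximate speed of light in air, metres per second

def range_from_tof(round_trip_s):
    """One-way range from a measured round-trip transit time.
    The factor of two accounts for the out-and-back path."""
    return C * round_trip_s / 2.0

def timing_for_resolution(delta_range_m):
    """Round-trip timing precision needed to resolve delta_range_m."""
    return 2.0 * delta_range_m / C
```

Evaluating `timing_for_resolution(1e-3)` gives roughly 7 picoseconds, which makes concrete the text's point that millimetre-level optical radar needs very fast, and therefore expensive, timing circuitry.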
Imaging radar methods measure the time difference between the transmitted and received signals, commonly called the time of flight, to determine object range. At optical frequencies the approximate speed of propagation of the signal is 3 × 10^8 meters/second, or 0.3 meters/nanosecond. This fact demonstrates the need for highly precise time of flight measurement circuitry in order to produce optically based instruments which can finely resolve the range to the target. The transmitted signal may be a series of evenly spaced pulses, an amplitude modulated continuous wave signal, or a frequency modulated signal. Systems which use pulsed signals must directly detect time differences between transmitted and received signals. Systems using this technique have achieved range resolution of approximately 1 mm with a depth of field from one to three meters. It has been speculated that the introduction of streak camera technology [3], capable of measuring time of flight in the sub-picosecond range, would improve the range resolution of this technique. For amplitude modulated continuous wave techniques, phase differences between the received and transmitted signals are proportional to the time of flight of the signal. The disadvantage of this technique is that absolute depth measurement is resolvable to only one half of a wavelength of the frequency of modulation, which is called the ambiguity interval. Relative resolutions within the ambiguity interval are dependent on the accuracy of the phase difference measurement between the transmitted and received signals. Frequency modulated radar systems use coherent mixing of the received signal with a reference signal in order to produce a beat frequency which is representative of the time of flight of the signal. Signal averaging has been used with both amplitude and frequency modulation methods in an attempt to improve ranging accuracy [11].
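The ambiguity interval of the amplitude modulated continuous wave scheme can be made concrete with a short sketch. The helpers below are illustrative (our own naming); they assume a sinusoidal modulation at frequency `f_mod_hz` and a measured transmit-to-receive phase shift in radians.

```python
import math

C = 3.0e8  # approximate speed of light, metres per second

def ambiguity_interval(f_mod_hz):
    """Unambiguous range span: half the modulation wavelength,
    because the signal travels out and back."""
    return C / (2.0 * f_mod_hz)

def range_in_interval(phase_rad, f_mod_hz):
    """Range within one ambiguity interval from the measured
    phase shift; absolute range is known only modulo the interval."""
    frac = (phase_rad % (2.0 * math.pi)) / (2.0 * math.pi)
    return frac * ambiguity_interval(f_mod_hz)
```

At 10 MHz modulation the ambiguity interval is 15 m: a measured phase of pi radians could mean 7.5 m, 22.5 m, or 37.5 m, which is exactly the absolute-depth limitation the text describes.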
All of the preceding methods of detection for optical radar systems require highly precise time of flight measurement techniques, which tend to be expensive. Costs can be reduced by employing a sonic energy source. Reduced expense occurs due to the decreased complexity of the instrumentation required to make time of flight measurements for the relatively slower sound signal (approximately 1 foot/msec). However, sonar signals are difficult to direct accurately and may have little returned energy from angled specular surfaces. Furthermore, sonar is sensitive to air pressure perturbations, temperature fluctuations, and secondary object reflections; therefore this technique is best suited to static environment measurements or to rough range measurement. A survey specific to ultrasonic ranging techniques has been compiled by researchers [12]. A system intended for autonomous vehicle ranging has been developed using a sonar ranging technique combined with a small beam angle infrared "cane". This system uses a narrow beamwidth infrared range instrument to detect object edges in an attempt to compensate for the sonar system's inability to range to angled specular surfaces, and to give accurate directional information [17].

1.2.4 Focusing

Derivation of depth using focusing techniques may be either active, where the scene is illuminated with a known light pattern, or passive, in which no special lighting is used on the scene. Passive focusing techniques exploit the focal gradients within a small depth of field image. Only those objects within the depth of field of the lens are in focus, producing high frequency intensity gradients about those edges or textures which are in focus. If the focal length of a camera is mechanically varied over a series of focuses and the individual image edge gradients are detected, then the depth of edges and textures recognized in each image can be estimated. This method is referred to as the swept focus technique [1].
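The depth estimate underlying swept focus comes from the thin-lens relation 1/f = 1/u + 1/v: once the sweep finds the lens-to-image distance v at which an edge is sharpest, the object distance u follows directly. This helper is our own illustrative sketch, not a formula stated in the thesis.

```python
def object_distance(f_m, image_dist_m):
    """Thin-lens relation 1/f = 1/u + 1/v solved for the object
    distance u, given the focal length f and the lens-to-image
    distance v at which the edge comes into best focus."""
    if image_dist_m <= f_m:
        raise ValueError("image plane must lie beyond the focal length")
    return f_m * image_dist_m / (image_dist_m - f_m)
```

For a 50 mm lens focused with the sensor 55 mm behind it, the in-focus edge lies about 0.55 m away; edges sharp at other sweep positions map to other depths, which is the essence of the technique.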
Depth discrimination of edges using a fixed camera focus has also been investigated. If the camera is focused to the closest or farthest point of interest in the scene, then the focal gradient along an imaged line is indicative of the range of an individual point on the edge. Other passive focusing techniques exploit the change in depth of focus with changes in aperture to determine depth. Passive focusing techniques tend to require intensive data processing, and present systems have relatively low range resolving ability and small fields of view. Active focusing techniques project a known light pattern on the scene. A currently available active focusing range sensor developed by Rioux and Blais uses a lens with an annular mask to produce a pair of lines on the camera for each line projected onto the scene [9][18]. The distance between associated line pairs is indicative of the range to this part of the line on the scene. The basic geometry used in this instrument, called the Biris sensor, is shown in Figure 1.4. The point B represents the cross-section of the projected line, while the points b are the displaced images of the point B on the detector surface. A single line projected on the object at the point B will appear as two lines at the points b on the detector plane. Resolutions of 1 mm over a 250 mm depth of field are quoted, but range resolutions will be best for points closest to the lens, as for triangulation techniques [3]. The field of view of this device is limited by the focal length of the lens, and range resolutions will also decrease with increasing field of view. The Biris sensor produces output data which are easily transformed to range information.

Figure 1.4: Basic Principle of Biris Sensor

Other active focusing techniques try to adjust the focus of a swept conical light beam which is projected onto the target [19].
The light spot appears as a high intensity point when focused on the object of interest, or a less intense ring if out of focus. Range is determined by adjusting the projector focus for maximum reflected light intensity as detected by a photodiode camera. The prototype system was capable of resolving to 0.3 mm over a depth of field of 150 mm. Active focusing systems, being camera based, are subject to inaccuracies due to lens aberrations and also have relatively limited fields of view.

1.2.5 Scene Constraint

Scene constraint ranging methods tend to be based on some global or a priori knowledge of the viewed scene. Scene constraints include known geometry constraints, general geometry constraints, relative range from occlusion, regular texture constraints, diffuse reflection constraints, length of shadows, and other features. These techniques attempt to simulate one or more aspects of the human visual system's ability to infer range based on previous learning or experience. For example, objects recognized as cars will be of a prescribed size in the human visual field, depending on the car's position and orientation; this information is used by the person viewing the object to derive an estimated range to the car. In the human visual system, other scene constraints as well as ranging based on physical principles, such as binocular stereo, are used to infer the position of the car. Scene constraint methods often rely on prior segmentation of the image into regular objects before the given constraint may be effectively applied. Segmentation of the image into defined objects is one of the more significant difficulties to be overcome before scene constraint methods can be practically applied to the range imaging problem [3].

1.3 System Design and Theory of Operation

The design of a point ranging and tracking instrument initially proceeded with endpoint tracking of large work volume manipulators as the intended application. The desired instrument must be inexpensive to produce, have a wide angle of view, range to widely variable depths, and possess moderate range resolving ability. The instrumentation also requires point tracking capability in order to perform the proposed application. The ability of currently available instruments to perform point tracking and ranging functions in a large work volume is limited.
The desired instrument must be inexpensive to produce, and requires a wide angle of view, and must have the ability to range to widely variable depths, and requires moderate range resolving ability. The instrumentation also requires point tracking capability in order to perform the proposed application. The ability of currently available instruments to perform point tracking and ranging functions in a large work volume is limited. 13 Chapter 1: Introduction • Pulsed Infrared Target f Light Ray / From Target Mirror / OBOOkMi Mirror Motor Drive And Angle WAMM P 2-D Position Light Spot Position Sensing Sensing On Theta Axis Sensitive Defector Mirror Motor Drive and Angle Sensinig Figure 1.5: Schematic Representation: Single Target Tracking Station and Target For example, the following techniques were rejected because of the indicated insufficiencies: 1. laser interferometry is capable of making extremely precise measurements but is expensive, requires recalibration whenever the target is lost and requires a very cooperative target; 2. sonar techniques, while inexpensive, are difficult to direct and susceptible to environmental factors; 3. passive stereo camera techniques require extensive image processing to resolve the correspondence problem; and 4. active stereo camera techniques have limited working volumes. The following discussion contains a functional description of the target tracking and ranging system which was developed for the proposed application along with an account of how this design meets the general system requirements. The proposed target tracking and ranging system uses triangulation techniques to follow a target point contained within the workspace of the system. The basic system requires a minimum of two target tracking stations to perform point tracking and ranging functions. 
The target tracking stations are separated by a known baseline, and each station reports the inclination of a vector directed from that particular station to the target, thereby providing the information required to determine the coordinates of the target using standard triangulation methods. In the point tracking configuration an active target is affixed to the point of interest, and each of the target tracking stations acts as an energy receiver and locks onto the target. Individual mirrors in each of the target tracking stations are oriented so that the image of the target is centered on the camera. The optical ray from the target to the camera defines a directional vector for each target tracking station. A schematic representation of a single target tracking station and target is shown in Figure 1.5.

The prototype system, developed during the course of the research, consists of an active target and a single target tracking station. The target is a pulsed infrared LED with a wide beam angle, and the target tracking station consists of a unique gimbaled mirror drive system, an infrared sensitive camera, and associated signal processing and control electronics. A novel gimbaled mirror drive mechanism is used to achieve an extremely wide field of view for each of the target tracking stations [20]. The system uses a single mirror rotated on orthogonal axes in order to direct the target image onto the centre of the camera. The use of a single mirror avoids the constrained field of view and multiple reflection anomalies of dual mirror systems. The geometry of the gimbaled mirror system results in cylindrical angular output data which is conveniently interpreted. This cylindrical geometry is advantageous since aligning the axes of the two target tracking stations reduces the computational complexity of the target position calculations (Figure 1.6).
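The two-station triangulation described above can be sketched in its simplest planar form. The station placement and function name below are illustrative assumptions (station 1 at the origin, station 2 at (B, 0) on the baseline), not the thesis's cylindrical-coordinate formulation:

```python
import math

def triangulate_planar(baseline_b, alpha, phi):
    """Locate a target in the plane from two station bearing angles.

    Station 1 sits at the origin, station 2 at (baseline_b, 0); alpha
    and phi are the angles (radians) each station measures between the
    baseline and its line of sight to the target.  A planar sketch of
    the triangulation principle, not the system's exact geometry.
    """
    ta, tp = math.tan(alpha), math.tan(phi)
    # Intersect y = x*tan(alpha) with y = (baseline_b - x)*tan(phi).
    x = baseline_b * tp / (ta + tp)
    return x, x * ta
```

For a 2 m baseline and a target seen at 45 degrees from both stations, this returns the point (1, 1), as expected by symmetry.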
The ranging ability of the instrument is increased by including a dynamic gain control in the camera electronics to compensate for changes in received signal strength. These two design characteristics, namely the gimbaled mirror system with its wide viewing angle and the camera with its automatic gain control, significantly increase the working volume of the instrument. The prototype system has moving parts, which is a disadvantage compared to currently available endpoint tracking instruments [21] which use cameras alone. These competing systems employ a photodiode camera which images a multiple target array with known geometry between the targets, and determine the range to the target array using triangulation. A similar scheme uses multiple cameras in a binocular triangulation geometry to range to the active target. Although these competing techniques require no moving parts, the resolving ability of the system is inversely proportional to the angle of view of the camera lens. Also, nonlinearities due to lens aberrations generally increase with the field of view of the lens. These competing systems therefore work best for small work volumes. The system developed in this thesis uses a lens with a narrow angle of view and therefore maintains relatively high camera resolving ability as compared to competing techniques. The gimbaled mirror is used in the prototype system to increase the working volume of the instrument while maintaining high camera resolving ability. The prototype system uses a linear 2-D photodiode detector. Other researchers have developed tracking systems with less expensive detector devices, such as the quadrant detector; however, these devices give lower camera resolutions and therefore provide poorer estimates of the target position when the target is not imaged on the detector's centre [14].
The geometry of the proposed system requires the use of two target tracking stations which lock onto the target independently. This loose coupling between individual stations allows for greater target depth variability than techniques where the stations are scanned simultaneously, such as the double-sided mirror scanner technique (Figure 1.2). Simultaneous scanning of both target tracking stations implies that the range of motion of one target tracking station is constrained with respect to the other, and therefore the working volume is decreased. The developed system is relatively insensitive to lens aberrations because the light ray is always directed towards the centre of the detector. Consequently, light rays travel along the optical axis of the lens where there is the least distortion. The capability of the instrument to resolve range is limited primarily by the resolution of the angular position sensors. For example, consider the planar system illustrated in Figure 1.7. If the observation stations can determine the angles α and φ to within ±Δα = ±Δφ degrees, then the resulting variation in range to the target is given by

z = y tan (α ± Δα) = y tan (φ ± Δφ).

Variations in range resolving capability can be addressed with an appropriate selection of mirror angular position sensor, making the technique adaptable to the required application.

Figure 1.7: Planar Ranging To Target

The camera uses a position sensitive photodiode detector which produces signals that can be readily interpreted to obtain the position of the imaged target. This inherent processing of control system inputs simplifies the control algorithm as compared to traditional camera techniques. Each target tracking station can be a stand-alone unit containing its own distributed control unit.
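The range-resolution relation z = y tan(α ± Δα) given above is easy to evaluate numerically. The sketch below uses assumed example numbers, not measured data, to show how a fixed angular uncertainty maps into a range spread:

```python
import math

def range_spread(y, alpha_deg, dalpha_deg):
    """Spread in z = y * tan(alpha) caused by an angular sensor
    uncertainty of +/- dalpha_deg, following the planar geometry of
    Figure 1.7.  The units of y carry through to the result."""
    lo = y * math.tan(math.radians(alpha_deg - dalpha_deg))
    hi = y * math.tan(math.radians(alpha_deg + dalpha_deg))
    return hi - lo
```

At y = 1 m, α = 45 degrees, and a 0.1 degree angular resolution, the spread is about 7 mm, which illustrates why the choice of angular position sensor dominates the instrument's range resolving capability.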
Unit uniformity and modularity, along with the use of standard technology, mean that units can be produced at relatively low cost. The camera instrumentation was constructed for a cost of approximately $600, not including the cost of the detector. The aforementioned design features of the developed instrument make it applicable to a wide variety of applications which have needs similar to those of the intended application. Limitations of the proposed method are:
1. the principle of operation is active triangulation, therefore the technique is susceptible to the missing parts problem;
2. an active target must be fixed to the manipulator endpoint; and
3. the beamwidth of the target source limits the working volume of the device.

Modular units allow for the use of multiple stations to reduce the probability of encountering the missing parts problem. The use of multiple target tracking stations will also expand the working volume of the system and can be used to improve the accuracy of the system [22]. The use of an active target avoids the correspondence problem of passive triangulation systems. The system design assumes that the target is a modulated point source; in reality, however, the beamwidth of the source device used in the prototype is limited. This limitation can be minimized by using a target device which emits light over a wide beamwidth. The system configuration is flexible depending on the required application. Point ranging and tracking applications require a configuration where an active target is affixed to the point of interest and where each of the target tracking stations acts as an energy receiver which locks onto the target. Range imaging of scenes requires a modified configuration where a directed active source replaces the active target and one of the target tracking station receivers. The range imaging configuration does not provide for tracking of specific points in the scene over time.
Most surfaces are diffuse reflectors at infrared frequencies, and therefore the use of an active LED target in the prototype provides a suitable model for the extension of the concept to range imaging applications. Models have been developed for the intensity distribution of the bright spot created by shining a light beam onto a rough surface [5]. The use of the instrument in the range imaging configuration has the following limitations:
1. the reflected light spot is assumed to be a point source, while in fact the reflection of a narrow light beam off a rough angled surface may be amorphous and therefore has an arbitrary centre;
2. the received light intensity is inversely proportional to the fourth power of the distance to the object, as opposed to the square of the distance for an unreflected target; and
3. the received light intensity is dependent on the reflection coefficient of the reflecting surface.

The range imaging configuration will provide more robust sensing of angled walls than sonic triangulation or radar methods, because infrared light is diffusely reflected while sound waves are specularly reflected. The remainder of this thesis is organized into chapters detailing:
1. the camera system, including the design philosophy and the results of static and dynamic tests (Chapter 2);
2. the mirror optic deflection system, including mirror driver control modelling and system step responses (Chapter 3);
3. the tracking system, which combines the camera and mirror optic deflection systems into a single instrument (Chapter 4); this chapter includes information on the tracking instrument's optical model, control philosophy, and the results of dynamic and static tests performed on the instrument; and
4. conclusions related to the development of the system (Chapter 5).
Chapter 2: Camera System

The optical subsystem provides information on the position of an infrared point source imaged on an infrared sensitive two-dimensional detector. The optical subsystem is comprised of a modulated infrared source, which is defined here as the target, a lens system, a two-dimensional infrared detector, signal processing instrumentation, and a processor system with A/D and D/A capability. The positional information obtained from the optical subsystem is used to servo a gimbaled mirror such that the image of the infrared target remains centered on the two-dimensional infrared detector, which is also known as the position sensitive detector (PSD). The PSD and related hardware will be referred to here as the receiver subsystem. This chapter outlines: 1. information on the hardware design; and 2. the results of tests performed on the optical subsystem. First, an explanation of the design strategy applied to the receiver and target subsystems is presented. This is followed by a discussion of the results obtained when static and dynamic tests were performed on the instrument.

2.1 Receiver Subsystem

The receiver subsystem is made up of the functional blocks illustrated in Figure 2.8.

Figure 2.8: Receiver Subsystem Block Diagram

The blocks perform the following functions:
1. Lens System — focuses the image of the target on the PSD.
2. Position Sensitive Detector (PSD) — produces a set of analog signals which may be processed to obtain the position of the target on the PSD surface.
3.
Preamplification and Biasing Stage — provides amplification of the PSD signals and the necessary biasing of the PSD device.
4. Phase Sensitive Detectors — each demodulates and filters one of the four output signals from the PSD. Timing signals for this stage are derived from the carrier recovery timing block.
5. Carrier Recovery Timing — synchronizes timing signals to the incoming signal.
6. Variable Gain Stages And Gain Control — provides dynamically selectable gain on each channel.
7. Sum and Difference Stages — generate the sum and difference of each axial pair of signals (y-axis and x-axis).
8. Processor — performs multichannel A/D conversion of the signals derived from the PSD and associated instrumentation into digital format for further processing to determine the target position. The D/A board produces signals to command the gain control section of the instrument.

2.1.1 Lens System

The lens used to focus the target image onto the position sensitive detector is a Kern-Paillard C-mount 75 mm lens with relative aperture 1:2.8. This lens can be focused from 5 feet to infinity and has a manually adjustable aperture which can be set from 2.8 to 22. This lens has an angular field of 4.95 degrees measured from the optical axis, or a total angle of 9.9 degrees. The total angle, or angle of view, is normally defined as the angle subtended by the diagonal of the standard format picture for which the lens was produced. In this case, the format area of the detector is 13 mm x 13 mm, therefore the total angle is approximately 14 degrees. This analysis assumes the lens is focused at infinity; however, the angle of view for focused distances other than infinity may be found by performing the calculations using the angle of view for a lens of focal length equal to the lens-image distance.

2.1.2 Position Sensitive Detector

The position sensitive detector (PSD) is an optical transducer which can be used to obtain the position of a point of light on its surface.
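Returning briefly to the lens parameters of Section 2.1.1, the approximately 14 degree total angle quoted for the 13 mm x 13 mm detector format follows directly from the diagonal-based definition of angle of view. A quick check, assuming focus at infinity (image plane one focal length behind the lens):

```python
import math

def total_angle_of_view(format_w_mm, format_h_mm, focal_mm):
    """Total angle of view: the angle subtended at the lens by the
    format diagonal, with the image plane at one focal length
    (focus at infinity)."""
    diag = math.hypot(format_w_mm, format_h_mm)
    return math.degrees(2.0 * math.atan(diag / (2.0 * focal_mm)))
```

total_angle_of_view(13, 13, 75) evaluates to roughly 14.0 degrees, matching the figure given in the text for the 75 mm lens.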
The detector works on the same principle as a photodiode; however, positional data can be derived due to the geometry of the device. In the reverse-biased pn junction photodiode the following processes occur when light is absorbed by the device: 1. as light is absorbed there is a certain probability of exciting an electron into the conduction band; and 2. if these additional charge carriers are produced near the junction in the depletion region, they are drawn across the field and contribute to the reverse current. The result is that the device produces a linear change in reverse current in response to a proportional change in light intensity [23]. The geometry of a PSD capable of measuring the position of a light spot in one dimension is shown in Figure 2.9.

Figure 2.9: One Dimensional Position Sensitive Detector [24]

The semiconductor is reverse biased so that only dark current flows through the PSD when no incident light is present. When a beam of light contacts the PSD, a charge proportional to the energy of the light is produced at the point of incidence. The charge is carried across the depletion region and is collected by the electrodes through the resistive P layer. The P layer acts as a current divider; if the resistivity of the P layer is uniform, then the current collected by each electrode is inversely proportional to the distance between the point of incidence and the electrode [24]. If the axis of the PSD is taken as the x-axis, and the zero position is taken as the centre point between the electrodes on the P layer, then for an electrode separation of 2L and total photocurrent Io the currents produced at each electrode are given by:

I1 = Io (L - x) / (2L)    (2.1)

and

I2 = Io (L + x) / (2L)    (2.2)

Combining 2.1 and 2.2 yields the position of the point of light as:

x = L (I2 - I1) / (I1 + I2)    (2.3)

The above analysis is based on the assumption that light contacts the PSD at a point.
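The current-divider relations of equations 2.1 through 2.3 can be checked numerically. The sketch below assumes electrodes at ±L about the centre and a total photocurrent Io, matching the notation above:

```python
def electrode_currents(x, half_length, i_total):
    """Equations 2.1 and 2.2: each electrode collects current in
    inverse proportion to its distance from the point of incidence."""
    i1 = i_total * (half_length - x) / (2.0 * half_length)
    i2 = i_total * (half_length + x) / (2.0 * half_length)
    return i1, i2

def spot_position(i1, i2, half_length):
    """Equation 2.3: recover x from the two electrode currents."""
    return half_length * (i2 - i1) / (i1 + i2)
```

A spot at x = 0 splits the photocurrent equally (I1 = I2 = Io/2), and spot_position inverts the divider exactly for any x within the active length.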
In practice, the light spot is diffused over an area, and the current produced in each electrode will be the summation of the point source currents generated over the active region. This results in a positional reading representative of the averaged position, or centroid, of the diffused light spot over the surface. Even if the light spot is not sharply focused on the PSD, the centre of the light spot will be accurately transduced as long as the entire spot falls on the PSD surface and has a symmetric energy distribution about the centre of incidence. Another factor which complicates the derivation of the positional information is the existence of infrared sources in the environment other than the target. For example, both the sun and fluorescent lights emit radiation in the infrared band which could interfere with light emitted from the active target. To overcome this problem, a detection scheme that removes the contribution of background light has been selected. This scheme requires that the infrared target be pulsed on and off so that the background reading may be subtracted from the reading made when the target is active or modulated "on". The exact process is explained further in the section describing the phase sensitive detector and timing block. Widening the photodiode semiconductor to a square geometry and stretching the electrode connections into electrode plates results in a PSD geometry which is able to determine the position, in two dimensions, of an incident light spot. The duolateral type PSD used in the optical subsystem (Hamamatsu model 1300) has a second pair of electrode plates perpendicular to the P layer electrodes on the opposite side of the junction (Figure 2.10).

Figure 2.10: Two Dimensional Position Sensitive Detector

The two-dimensional PSD is reverse biased by connecting equal positive voltage sources to the underside pair of electrodes.
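The centroid behaviour noted above, where a defocused but symmetric spot still reads as its true centre, can be illustrated with a toy intensity-weighted average. The discrete sampling of the spot is an assumption for illustration only; the real device integrates continuously over its surface:

```python
def reported_position(positions, intensities):
    """The position a PSD reports for a diffused spot: the
    intensity-weighted centroid of the light distribution falling
    on the detector surface."""
    total = float(sum(intensities))
    return sum(p * w for p, w in zip(positions, intensities)) / total
```

A symmetric spot centred at 2.0 mm, for example samples at 1.5, 2.0, and 2.5 mm with weights 1, 4, 1, reports exactly 2.0 mm despite being out of focus.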
The second set of electrode plates not only provides bias to the device but also carries the positional information related to the orthogonal, or y, direction. The revised positional equations, based on a point light source and uniform resistivity of the semiconductor layers, are:

x = L (Ix2 - Ix1) / (Ix1 + Ix2)    (2.4)

and

y = L (Iy2 - Iy1) / (Iy1 + Iy2)    (2.5)

where Ix1, Ix2 and Iy1, Iy2 are the currents collected by the x and y electrode pairs respectively. Biasing of the PSD has a large effect on the junction capacitance of the device. By reducing the junction capacitance, the transfer characteristics of the detector may be improved so that the device may operate at a higher frequency. This is an important consideration because the detector must respond to a pulsed source. In addition, higher frequencies of source modulation result in greater attenuation of noise sources such as 1/f noise and background fluorescent lights, which flicker at 60 Hz. The PSD junction capacitance versus reverse voltage characteristic is shown in Figure 2.11.

Figure 2.11: PSD Junction Capacitance vs. Bias Voltage [24]

Increasing the bias voltage also limits the dynamic range of the channel preamplifiers, which amplify the signals from the PSD. In considering the relative importance of each of these factors, a bias voltage of 2.0 volts was selected.

2.1.3 Instrumentation

The instrumentation of the optical subsystem is required to amplify the PSD signals, filter out noise signals (i.e. background light, 1/f noise, power system hum, etc.), and further condition the signal so that it is suitable for analog to digital conversion by the processor system. The major subdivisions of the optical subsystem instrumentation and their functions are presented in the following subsections.

Low Noise Preamplification And Biasing Stages

The first stage following the detector biases the PSD and individually amplifies the four PSD channel signals. The circuitry selected was based on the DC coupled design suggested by the PSD's manufacturer [24] (Figure 2.12).
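Equations 2.4 and 2.5 treat the two electrode pairs of the duolateral PSD as independent current dividers, one per axis. A minimal sketch, with the current symbol names assumed for illustration:

```python
def duolateral_position(ix1, ix2, iy1, iy2, half_length):
    """Equations 2.4 and 2.5: x from the P-layer electrode pair and
    y from the rear-face pair, each an independent current divider
    over a detector of half-length half_length."""
    x = half_length * (ix2 - ix1) / (ix1 + ix2)
    y = half_length * (iy2 - iy1) / (iy1 + iy2)
    return x, y
```

Equal currents on a pair put that coordinate at zero; any imbalance moves the reading toward the electrode collecting the larger current.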
Figure 2.12: Simplified Front End

The duolateral PSD does not have individual bias and signal connections, and therefore the circuitry responsible for these functions was combined in the front end stages. An equal bias voltage is applied to the PSD rear-face electrodes through op amps three and four. The currents generated by the PSD, when the device is activated by a point of incident light, are in the direction shown in Figure 2.12. The combination of the applied bias voltages and active device currents results in the indicated outputs. The idealized response waveforms of the preamp and biasing stage are shown in Figure 2.13. The relative magnitudes of the outputs (Y+, Y-, X+ and X-) are indicative of a target imaged in the third quadrant of the position sensitive detector (Figure 2.12).

Figure 2.13: First Stage of Optical Subsystem: Idealized Waveforms

These waveforms are based on the following assumptions: 1. a steady background light level; 2. a pulsed source frequency well below the system bandwidth; and 3. a stationary source. It should be noted that these idealized waveforms are displayed free of noise. In the actual system the waveforms have noise components (white noise, PSD 1/f noise, noise due to changing background, power hum, and noise from other sources) superimposed on the underlying fundamental squarewave pattern. The edges of the squarewaves are also rounded off due to the bandwidth limitations of the system. The front end amplifiers were selected based on their low equivalent input noise, high input impedance, and appropriate gain bandwidth product. Components not shown in the simplified stage attenuate high frequency noise and compensate for op amp input bias current.
Phase Sensitive Detector And Timing Circuitry

The problem of detection is one of magnitude determination. The difference between individual position sensitive detector output levels, as the target is modulated on and off, provides the necessary information to determine the detector signal due to the target alone. The position sensitive detector relates information on the centroid of the light hitting its surface: due to the target and the background light when the target is on, or due to only the background light when the target is off. The target subsystem is pulsed by a squarewave of constant frequency, and the PSD and front end stages have a specified bandwidth. Therefore, the input signal is of well known shape and timing. Detection of signals of this class has been vigorously studied; such signals can be detected by matched filtering using phase sensitive detector techniques. The structure of the phase sensitive detector has advantages over other matched filtering realizations because of its inherent simplicity of design. While standard matched filtering techniques optimally detect signals buried in white noise, the phase sensitive detector realization also has the inherent ability to reduce 1/f noise error, offset error, and drift error [25]. The four preamplifier signals representing the x and y positional data must be recovered by four individual channel phase sensitive detectors. Each phase sensitive detector is made up of two boxcar sampling gates [25] followed by an instrumentation amplifier (Figure 2.14).
Figure 2.14: Phase Sensitive Detector

The signal processing and noise reduction capabilities of the phase sensitive detector circuitry can be better understood by examining the frequency response of the circuit illustrated in Figure 2.14. The boxcar sampling gate of the phase sensitive detector can be represented by a multiplier, with one input chopped at levels of zero and unity, followed by a low pass filter. The following derivation gives the frequency response characteristics of the phase sensitive detector using the multiplier model for each boxcar gate [25]. Consider the derivation of the timing signals k(t) and kφ(t) from the squarewave p(t). The Fourier series expansion of the squarewave p(t), which is shown in Figure 2.15, is given by:

p(t) = (4/π) ( sin (wo t) + (1/3) sin (3 wo t) + (1/5) sin (5 wo t) + ... )    (2.6)

Figure 2.15: Timing Signals Derivation Waveform

The timing or chopping signals k(t) and kφ(t) of Figure 2.16 are fed into separate multipliers of the phase sensitive detector model and are given by k(t) = (1/2)(1 + p(t)) and kφ(t) = (1/2)(1 - p(t)).

Figure 2.16: Phase Sensitive Detector Timing Signals

The Fourier series expansions of the timing signals, k(t) and kφ(t), are then given by:

k(t) = 1/2 + (2/π) ( sin (wo t) + (1/3) sin (3 wo t) + (1/5) sin (5 wo t) + ... )    (2.7)

and

kφ(t) = 1/2 - (2/π) ( sin (wo t) + (1/3) sin (3 wo t) + (1/5) sin (5 wo t) + ... )    (2.8)

Now consider a sinewave input, v(t) = K sin (wa t), of arbitrary magnitude K and frequency wa, applied to the phase sensitive detector; take K = 1 to avoid carrying a multiplying factor through the equations.
The effect of applying a unit magnitude sinewave v(t) = sin (wa t) to the phase sensitive detector is to produce the following outputs from the multipliers:

v1(t) = v(t) k(t)
      = (1/2) sin (wa t) + (2/π) sin (wo t) sin (wa t) + (2/3π) sin (3 wo t) sin (wa t) + ...
      = (1/2) sin (wa t)
        + (1/π) ( cos ((wo - wa) t) - cos ((wo + wa) t) )
        + (1/3π) ( cos ((3 wo - wa) t) - cos ((3 wo + wa) t) )
        + (1/5π) ( cos ((5 wo - wa) t) - cos ((5 wo + wa) t) ) + ...    (2.9)

and

v2(t) = v(t) kφ(t)
      = (1/2) sin (wa t) - (2/π) sin (wo t) sin (wa t) - (2/3π) sin (3 wo t) sin (wa t) - ...
      = (1/2) sin (wa t)
        - (1/π) ( cos ((wo - wa) t) - cos ((wo + wa) t) )
        - (1/3π) ( cos ((3 wo - wa) t) - cos ((3 wo + wa) t) )
        - (1/5π) ( cos ((5 wo - wa) t) - cos ((5 wo + wa) t) ) - ...    (2.10)

Ignoring the effect of the lowpass filters for the present, the output of the phase sensitive detector, vo(t), is given by:

vo(t) = v1(t) - v2(t)
      = (2/π) ( cos ((wo - wa) t) - cos ((wo + wa) t) )
        + (2/3π) ( cos ((3 wo - wa) t) - cos ((3 wo + wa) t) )
        + (2/5π) ( cos ((5 wo - wa) t) - cos ((5 wo + wa) t) ) + ...    (2.11)

The preceding analysis indicates that if a sinewave of frequency wa is input into the phase sensitive detector, then the output will be composed of sinusoidal signals of frequencies wo ± wa, 3 wo ± wa, 5 wo ± wa, etc., with relative strengths of 1, 1/3, 1/5, etc. For example, if an input sinusoid of unit magnitude and frequency 1 kHz is applied to the phase sensitive detector operating at a frequency of 3 kHz, then the output signal from the phase sensitive detector, due to chopper modulation and the instrumentation amplifier alone, will be:

vo(t) = (2/π) ( cos (2π 2000 t) - cos (2π 4000 t) )
        + (2/3π) ( cos (2π 8000 t) - cos (2π 10000 t) )
        + (2/5π) ( cos (2π 14000 t) - cos (2π 16000 t) ) + ...    (2.12)

Now consider the effect of a low pass filter section with a cutoff frequency, wf, which is much smaller than the timing or chopper frequency, wo.
Only the output components from the phase sensitive detector which correspond to frequencies less than the lowpass cutoff frequency are permitted to pass unattenuated. The input components which meet this criterion are evident from the expression for vo(t), and are within the frequency bands wo ± wf, 3 wo ± wf, 5 wo ± wf, etc. Consider again the preceding example, where a sinusoidal signal of frequency 1 kHz is input into the phase sensitive detector with a chopper frequency of 3 kHz. If a lowpass filter of cutoff frequency 1.2 kHz is used in the phase sensitive detector, then the output signal will be attenuated, as it contains no frequency components within the lowpass filter passband, even though the input signal is of a frequency less than the lowpass filter cutoff frequency. If the input sinusoid frequency is increased until it is within ±wf of the chopper frequency or one of its odd harmonics, then the signal created as a result of the choppers and instrumentation amplifier will contain components which are of frequency less than wf (Equation 2.12). This signal is within the passband of the low pass filter and is passed or "accepted" by the phase sensitive detector. The input acceptance pattern of the phase sensitive detector relates the relative magnitude of the output signal to the frequency of the input signal (Figure 2.17) [25]. From the expression for vo(t), the relative acceptance strengths for input frequencies of wo, 3 wo, 5 wo, etc. are 1, 1/3, 1/5, etc., which correspond to the weights of the spectral components of a squarewave of frequency wa = wo.

Figure 2.17: Acceptance Strengths of the Phase Sensitive Detector as a Function of the Input Frequency

Input frequencies outside ±wf from the fundamental frequencies wo, 3 wo, 5 wo, etc. are attenuated due to the low pass filtering effects.
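Both the frequency translation of equation 2.11 and the acceptance strengths of Figure 2.17 can be reproduced numerically by chopping a sinewave with a square wave (the two boxcar gates plus the difference amplifier together multiply the input by p(t)) and probing single DFT bins. The sampling rate and probe frequencies below are arbitrary illustrative choices, not system parameters:

```python
import cmath
import math

def chopped_amplitude(f_probe, f_in, f_chop=3000.0,
                      fs=480000.0, t_end=0.1):
    """Amplitude at f_probe of v0(t) = sin(2*pi*f_in*t) * p(t), where
    p(t) is a unit square wave at f_chop, measured with a single DFT
    bin over t_end seconds.  f_probe = 0 gives the DC level that an
    ideal narrow lowpass filter would retain."""
    n = int(fs * t_end)
    acc = 0j
    for k in range(n):
        t = (k + 0.5) / fs  # half-sample offset avoids sampling sign(0)
        p = 1.0 if math.sin(2.0 * math.pi * f_chop * t) > 0.0 else -1.0
        v0 = math.sin(2.0 * math.pi * f_in * t) * p
        acc += v0 * cmath.exp(-2j * math.pi * f_probe * t)
    scale = 1.0 if f_probe == 0.0 else 2.0
    return scale * abs(acc) / n
```

With a 1 kHz input and 3 kHz chopper, the energy appears at 2 kHz and 4 kHz with amplitude close to 2/π, and essentially nothing remains at 1 kHz, matching equation 2.12. An in-phase input at the chopper frequency itself leaves a DC level of about 2/π, and one at the third harmonic about 2/(3π), reproducing the 1, 1/3, ... acceptance strengths.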
The ability of the phase sensitive detector to greatly attenuate 1/f noise is apparent from the preceding analysis. The term 1/f noise refers to a noise spectral distribution which decreases with increasing frequency; 1/f noise therefore dominates white noise at low frequencies. Consider the consequences of inputting any low frequency signal (at frequency wn) into the phase sensitive detector. First, the multipliers modulate the applied signal up to wo ± wn. This signal is then filtered by the lowpass filter with cutoff wf. The phase sensitive detector thus allows filtering of low frequency noise signals with a multiplier followed by a low pass filter, by modulating all low frequency inputs so that a frequency translation to a frequency above the lowpass filter cutoff occurs. By the same analysis, it is apparent that errors due to offset and drift are also reduced using this method of detection. The preceding system of detection was modified for this application because the position sensitive detector response is of limited bandwidth and is incapable of precisely reproducing a rectangular signal. Prefiltering prior to the phase sensitive detector limits the white noise spectral bandwidth but also increases the rise and fall times of the pulsed input signal. For this reason the sampling time is limited to approximately 1/2 the pulsed-on time, or equivalently 1/2 the pulsed-off time, of the input signal to avoid the detection of transition states. The timing pulses for the boxcar sampling gates are generated by the carrier recovery timing circuitry. The carrier recovery and timing circuitry phase locks to a preamplified channel input, thereby producing a squarewave phase shifted 90 degrees from the input signal. This signal in turn is used to trigger two one-shot circuits on the falling and rising edges of the phase locked loop output.
The phase of the timing pulses relative to the input signal is adjusted by detuning the input frequency slightly off the phase locked loop centre frequency. This allows the user to align the phase of the timing pulses relative to the input to be detected, so that transitions do not occur during the sampling period.

Variable Gain Stages And Gain Control

Once the signal is demodulated it is amplified in order to better match the dynamic range of the A/D converter on the processor board (±5 volts). Each channel amplifier has a processor-selectable gain of approximately 2, 10, 85, or 375. This wide range of gains is provided to compensate for changing input levels, which is required since light intensity varies inversely with the square of the distance to the target.

Sum And Difference Stages

These stages perform the sum and difference of each pair of channel signals in order to obtain the numerators and divisors of positional equations 2.4 and 2.5. The position of the light spot is given by:

y = K (Δy / Σy) = K (y+ - y-) / (y+ + y-)    (2.13)

and

x = K (Δx / Σx) = K (x+ - x-) / (x+ + x-)    (2.14)

where K is a constant.

2.1.4 Processor

The processor digitizes the eight optical subsystem outputs, y+, y-, x+, x-, Δy, Σy, Δx, and Σx, with the multichannel A/D converter and controls channel gain selection with the multichannel D/A board. The A/D board provides 12 bit, 100 kHz sampling on up to 16 multiplexed differential input channels. The D/A board uses two channels, known as the command and latch outputs, to control the receiver subsystem's gain selection. The latch output is used to latch the command signal into the receiver subsystem. The command output can have one of three appropriate levels, which are: 1. load default gain (highest level), 2. decrease gain a single level, and 3. increase gain a single level. The A/D and D/A boards also provide sensing and control for other components of the target tracking system.
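The three command levels above drive a simple up/down gain policy. The sketch below is a hypothetical model of that policy, with assumed thresholds (the actual control loop lives in the processor software): step the gain down one level when the demodulated signal would clip the ±5 V A/D range, and up one level when it uses too little of it.

```python
GAINS = (2, 10, 85, 375)   # processor-selectable channel gains

def step_gain(level, peak_volts, adc_limit=5.0, low_fraction=0.1):
    """One iteration of a hypothetical auto-gain policy: return the
    new gain index given the current index and the observed peak of
    the amplified signal.  Decrease gain on clipping; increase it
    when the signal uses under low_fraction of the A/D range."""
    if peak_volts >= adc_limit and level > 0:
        return level - 1        # command: decrease gain a single level
    if peak_volts < low_fraction * adc_limit and level < len(GAINS) - 1:
        return level + 1        # command: increase gain a single level
    return level                # leave gain unchanged
```

Starting at the highest gain index, a clipped 5 V reading steps the index down one level; a 0.2 V reading steps it back up, keeping the signal within the converter's dynamic range as the target distance changes.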
Two A/D and two D/A channels are used for motor control of the two mirror scanner motors; control signals are sent via the D/A and resolver positions are read via the A/D.

2.2 Target Subsystem

The target subsystem consists of an infrared pulsed light source which is modulated at approximately 4 kHz. The target subsystem is completely independent of the receiving subsystem; the receiver must derive system timing by locking to the incoming signal, thereby recovering the carrier frequency. The duty cycle, frequency, and power output of the target subsystem are independently adjustable using the target subsystem circuitry (Fig. 2.18), which comprises an oscillator with duty cycle and frequency adjustments, a variable power MOSFET driver, the infrared LED, and a power supply.

Figure 2.18: Block Diagram Of Target Subsystem Hardware

The light source is a single GaAs infrared emitting diode, TRW type OP140SLA. This product has the following characteristics [26]:

Table 2.2 Infrared LED Characteristics
- Emission Angle @ 1/2 Pwr. Pts.: 40 degrees (IF = 20 mA)
- Apertured Radiant Incidence: 0.4 mW/cm^2 (IF = 20 mA)
- Wavelength @ Peak Emission: 930 nm (IF = 20 mA)
- Spectral Bandwidth @ 1/2 Pwr. Pts.: 50 nm (IF = 20 mA)
- Output Rise Time: 1550 ns (IF(peak) = 20 mA, PW = 10.0 us)
- Output Fall Time: 550 ns (IF(peak) = 20 mA, PW = 10.0 us)

Apertured radiant incidence is the average light power density incident upon a sensing area 4.57 mm in diameter located orthogonal to the lens axis at a distance of 16.6 mm from the lens tip. The OP140SLA device is of relatively low power output compared to competitive emitters; however, it is readily available, has a wide emission angle, and its wavelength at peak emission matches well with the detector. The receiver subsystem is capable of receiving signals from this target over the full emission angle at distances of 3 meters.
The designer's intent is to use this target for the prototype development, with the option of using a higher power source or an array of sources in order to track a target over larger distances or wider emission angles than are possible with the prototype system. The target hardware is designed to be completely independent of the receiver instrumentation so that no hardwired connections from the target to the receiver are necessary.

2.3 Camera Static Testing

Camera static testing was performed in order to determine the imaging properties of the prototype camera system. The designed ranging and tracking instrument always attempts to position the imaged target on the centre of the position sensitive detector. For this reason, the characteristics of the camera about the optical axis were emphasized in the static and dynamic tests.

All of the camera static tests were performed using an optical testing stage produced by the Newport Corporation. This test bench consists of a micromanipulator, which can be moved in increments of 0.0002 of a millimeter over 25 millimeters, and a milled work surface, which has evenly spaced mounting holes for the micromanipulator. The test setup involved positioning the camera lens and micromanipulator so that their x-y planes were in alignment (Figure 2.19). The testing procedure involved moving the target, which was attached to the micromanipulator, in prescribed increments in the x-y plane. Camera output signals, indicating the position of the imaged target on the detector, were recorded for each target location. The target distance was selected so that target movements in the x or y direction of +-12.5 mm would result in the desired range of movement on the detector surface, so that the micromanipulator base would not have to be moved for the duration of the test. For all tests the focal plane was set to 5 feet and the aperture was set to 2.8. Three different resolutions of measurements were recorded:
1.
Fine Camera Resolution — camera outputs were recorded as the target was moved in fine increments along either the x or y axis from the detector centre;
2. Central Area Grid Mapping — a record of the detector mapping as the target is moved in a grid pattern close to the optical centre; and
3. Wide Area Grid Mapping — a record of the wide area detector mapping as the target is moved in a grid pattern so that image points further from the optical centre can be recorded.

2.3.1 Fine Camera Resolution

The target was moved in increments of 0.01 millimeters, over a total distance of 0.4 millimeters, along first the x-axis and then the y-axis. The target distance was set to 60.4 cm and 59.0 cm for the x-axis and y-axis tests respectively. These target movements correspond to detector image movements of approximately 0.0012 millimeters on the position sensitive detector surface, or approximately 1 part in 10000 of the detector's width. The results of the x-axis and y-axis fine resolution tests are shown in Figures 2.20 and 2.21 respectively.

Figure 2.20: X-Axis Fine Resolution (horizontal axis: expected PSD x position)

Figure 2.21: Y-Axis Fine Resolution (horizontal axis: expected PSD y position)

Although the fine camera resolution is very high (0.0012 mm), it should be stressed that the measurements were made under ideal laboratory conditions. For example, the background light level was low and of constant level, the measurements were performed sequentially over a small range close to the detector's centre, and the meters used to measure the camera outputs effectively performed multiple time averaging of the output signals.

2.3.2 Central Area Grid Mapping

Central area grid mapping refers to a test where the target is moved in a grid pattern close to the optical axis and the corresponding image points are recorded.
The target distance for this test was set to 60.4 cm and the target was moved in 5 mm increments over a range of +-10 mm, for a total of 25 separate locations (Figure 2.19). With this geometry, the expected movement of the target image point on the detector is approximately 0.62 mm for each target increment of 5 mm. The results of the central area grid mapping experiment are shown in Figure 2.22.

Figure 2.22: Central Area Grid Map (axes: PSD x and y coordinates, millimeters)

The linearity of the camera system close to the detector centre is apparent from the consistency of spacing between points on the central area grid map. The maximum excursion from a linear grid is 0.026 mm for x values and 0.007 mm for y values.

2.3.3 Wide Area Grid Mapping

As its name implies, wide area grid mapping performs the same tests as central area grid mapping over a larger area of the detector's surface. In this case the target distance is set at 14.9 cm and the target is moved in increments of 2 mm over a range of +-8 mm, for a total of 81 separate locations. The results of the wide area grid mapping test are shown in Figure 2.23.

Figure 2.23: Wide Area Grid Map (axes: PSD x and y coordinates, millimeters)

Nonlinear effects introduced by the lens, such as barrel distortion, are apparent in the wide area grid map. These results exemplify the utility of a controller which drives the image to the centre of the detector, where there is the least optical distortion. An estimate of the barrel distortion of the lens was calculated by performing a general linear least squares fit of the x and y axis data.
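A least squares fit of a linear-plus-radial model of this kind can be sketched with numpy, whose lstsq solver is SVD-based. The synthetic data and the recovered parameter values below are illustrative only, not the thesis measurements:

```python
import numpy as np

def fit_radial_model(meas, r, linear):
    """Least squares fit of the radial camera model
    linear = a*meas + b*meas*r.  numpy's lstsq is SVD-based, matching the
    singular value decomposition formulation used in the thesis; the rest
    of this sketch is an assumption-labelled illustration."""
    A = np.column_stack([meas, meas * r])
    coeffs, *_ = np.linalg.lstsq(A, linear, rcond=None)
    return coeffs[0], coeffs[1]

# Synthetic check: generate noiseless data from known parameters (using the
# thesis's fitted x-axis values a_x = 0.98, b_x = 0.030) and confirm the
# fit recovers them.
rng = np.random.default_rng(0)
x_meas = rng.uniform(-2.0, 2.0, 81)
y_meas = rng.uniform(-2.0, 2.0, 81)
r = np.hypot(x_meas, y_meas)
x_lin = 0.98 * x_meas + 0.030 * x_meas * r
a_hat, b_hat = fit_radial_model(x_meas, r, x_lin)
```

The design matrix has one column per model parameter, so the distortion estimate reduces to an ordinary linear least squares problem even though the model is nonlinear in the image coordinates.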
The least squares fit was performed on the camera optics model defined by:

x_l = a_x x_meas + b_x x_meas r   (2.15)

for x axis model fitting, and

y_l = a_y y_meas + b_y y_meas r   (2.16)

for y axis model fitting, where x_l and y_l are the x and y axis linear camera models; a_x, b_x, a_y, and b_y are the model parameters for the x and y axes; r is the length of the radius to the point; and x_meas and y_meas are the x and y axis data recorded during the wide area map experiment.

The x and y axis linear model data, x_l and y_l, were calculated using measured points from the wide area grid map which were closest to the optical axis and also on the x and y axes. These data provided the expected linear step in the imaged target position for an incremental step movement of the target along either the x or y axis. The linear model data were calculated by multiplying this increment by the number of steps that the target had moved from the optical centre. The least squares fit of the data to the defined model was performed using a singular value decomposition formulation to avoid the round-off errors of other least squares fitting formulations [27]. The fitted parameters are: a_x = 0.98, b_x = 0.030, a_y = 0.99, and b_y = 0.035. The chi-squared values for the least squares fit of the x-axis and y-axis data are 0.012 and 0.015 respectively.

2.4 Camera Dynamic Testing

Camera dynamic testing was performed using a specially built dual target assembly. This assembly had two target LEDs, separated by 5.0 cm, which could be alternately driven by a common oscillator. Selection of the active target LED was made via an electronic switch which was activated by a signal from the processor system. The step response of the camera was determined by placing the target apparatus in the camera's field of view so that both target LEDs were within view.
The active target was then electronically switched and the resulting step response of the position sensitive detector was recorded. The results of this experiment are shown in Figures 2.24, 2.25 and 2.26.

Figure 2.24: Camera Step Response X Direction

Figure 2.25: Camera Step Response Y Direction

Figure 2.26: Step Response: Path Followed On Detector Surface

The camera dynamic response is limited by the cutoff frequency of the filter following the trace and hold circuit. This RC filter has a time constant of 1.5 milliseconds. The step response of the detector has a rise time of approximately 5 time constants, or 7.5 milliseconds, which is consistent with the time constant of the low pass filter used in the phase sensitive detector. A linear path is traced from the starting to the end target position.

Chapter 3
Mirror Optic Deflection System

3.1 Functional Description

The target tracking system uses a mirror scanner with a unique drive geometry, which allows the user to scan a very large field of view. This scanner's drive configuration, known as the axially controlled mirror support or A.C.M.S., was recently developed and patented by researchers at the University of British Columbia [20]. A prototype scanner system developed by the inventors was obtained for this project. This system consisted of the gimbaled mirror, drive motors and angular resolvers; the drive electronics and controllers were not included with this system.

The scanner consists of a single gimbaled mirror which is rotated about orthogonal axes by a pair of drive motors. The angular position of the mirror is determined from the position of angle sensors located on the motors. The basic mechanics of the device are as shown in Figure 3.27.
Figure 3.27: Schematic of Mirror Scanner Gimbal (cable affixed to rear of mirror)

The mirror gimbal rotates on the mutually orthogonal theta and beta axes. Theta rotation is produced by rotating the shaft connected to the gimbal forks; this shaft is directly coupled to the theta drive motor. Beta rotation, about the mirror shaft axis, is achieved by pulling or releasing a cable which is attached to the pre-sprung mirror mount. This cable passes through the theta motor and theta resolver axis and is attached to the oppositely pre-sprung beta motor drive shaft. The beta motor shaft and mirror shaft axis are sprung so that the cable is held taut throughout the beta angle range of rotation. Theta and beta angles are measured relative to the axes outlined in Figure 3.28.

Figure 3.28: Angular Reference Axes (side and front views, shown for theta = 0 degrees and beta = 0 degrees)

Theta rotation is about the zp axis and is measured relative to the yp axis. Beta rotation is about the axis created when the xp axis is rotated about the zp axis through theta degrees. Consequently, the beta reference axis is the yp axis rotated theta degrees about the zp axis. When theta rotation equals zero degrees, the beta reference axis is the yp axis; in this case beta rotation is measured from the yp axis and is about the xp axis, as shown in Figure 3.28.

The components that make up the mirror optic deflection system are shown in Figure 3.29: the processor's D/A channels feed the theta and beta motor drivers, which drive the theta and beta axis motors of the mirror scanner, while motor angle sensing is read back through the A/D channels.

Figure 3.29: Mirror Optic Deflection System Components

The drive motors are Clifton Precision TD-2150-A-1 D.C. toroidal motors with performance characteristics as outlined in Table 3.3.
Table 3.3 Motor Performance Parameters
- Excursion Angle: +-60 degrees maximum
- Peak Torque: 13.0 oz-in minimum
- Continuous Torque: 8.8 oz-in maximum
- Torque Sensitivity: 5.5 oz-in/Amp, +-10%
- Back EMF: 0.0388 volts/rad/sec, +-10%
- Resistance: 3.7 ohms, +-10%
- Motor Constant: 2.86 oz-in/watt^(1/2) nominal

Excursion angles extending beyond +-60 degrees, up to +-80 degrees, are attainable; however, motor sensitivity decreases rapidly beyond the +-60 degree positions. This angular constraint is due to the motors used and is not a function of the scanner geometry itself. In order to remain below the continuous torque specification, motor continuous currents must be limited to less than 1.6 amps, which corresponds to a drive voltage of less than 5.9 volts for the given motor resistance. The motor drivers are National LM675 power drivers connected in a low-gain non-inverting configuration. Each motor driver is driven from a separate channel on the D/A board within the Ironics processing system.

The scanner was able to achieve a field of view of 100 degrees in the beta scan space and of 140 degrees in the theta scan space. The scanning range of each of the axes is shown in Figure 3.30. The mirror must rotate 140 degrees (theta range +-70 degrees) on the zp axis to achieve 140 degrees of ray scanning, but only needs to rotate 50 degrees (beta angles from 20 to 70 degrees) on the xp axis to achieve 100 degrees of ray scanning. This is a consequence of the specular reflection properties of the mirror's surface.

The angle sensors used on the A.C.M.S. consist of two conductive plastic film potentiometers. In order to facilitate position sensing, each potentiometer is connected to +-12 volts and has a center tap which is connected to an A/D channel of the Ironics processor.
The 12 bit A/D card resolves the full theta range of 140 degrees into 4096 bits, or approximately 0.034 degrees per bit, and the full beta range of 50 degrees into 2754 bits, or approximately 0.018 degrees per bit. The angular sensors are therefore capable of discriminating optical ray resolutions of 0.034 degrees and 0.036 degrees in the theta and beta sweep directions respectively (the beta figure is doubled because mirror rotation deflects the reflected ray through twice the mirror angle).

Figure 3.30: Scanner Sweep Range

3.2 Control Motor Model

The plant model for a D.C. motor driving a low inertia load is given by:

Theta(s) / V(s) = K / (s (s + a))   (3.1)

The derivation of this model follows from the motor model block diagram, where:

V = input voltage
Theta = motor shaft angle
Kpa = power amp gain constant
Ko = motor constant
Ro = winding resistance
J = armature moment of inertia
B = armature viscous damping
Kv = velocity constant (back e.m.f. for plant alone)

therefore

K = Kpa Ko / (Ro J)   (3.2)

and

a = (B + Ko Kv / Ro) / J   (3.3)

in the original model equation. This model is appropriate for each motor and associated drive amplifier of the mirror optic deflection system if the following assumptions hold:
1. the mirror's contribution to the total moment of inertia for either drive axis is small,
2. the beta motor spring may be ignored by applying compensation, and
3. the cross-coupling between the theta and beta drive axes is insignificant.

3.3 Beta Motor Spring Compensation

The springs in the beta motor drive contribute an extra non-linear torque component to the motor model. If the torque contributed by the springs can be eliminated, by applying an equal but opposite counter torque, then the previously derived model is valid subject to the stated assumptions. The non-linear spring constant of the beta motor was experimentally determined by driving the beta motor to discrete angular positions and then measuring the torque (voltage input) required to keep the mirror in each position.
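The measured angle-voltage pairs lend themselves to a simple table-lookup compensator. A sketch, with hypothetical table values rather than the measured mapping:

```python
import numpy as np

def make_spring_compensator(beta_deg, volts):
    """Table-lookup compensation for the beta-axis spring torque: given
    measured (beta angle, holding voltage) pairs, return a function that
    yields the voltage to add to the control input so the spring torque
    is cancelled.  The table below is hypothetical; the real mapping is
    the measured, averaged and smoothed curve."""
    def compensate(beta):
        return np.interp(beta, beta_deg, volts)   # linear interpolation
    return compensate

# Hypothetical smoothed measurements over the 20-70 degree beta range:
betas = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
holding = np.array([0.5, 0.9, 1.4, 2.0, 2.7, 3.5])
compensate = make_spring_compensator(betas, holding)
```

Interpolating between table entries keeps the compensator smooth between the discrete angles at which the holding voltage was measured.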
The mirror was rotated sequentially over the full beta scan range. These data were collected travelling in both the clockwise and counterclockwise directions, then averaged and smoothed. The resulting mapping, which relates the input voltage required to cancel the spring torque to the beta angle, was used to compensate for the springs in the system. The non-linear compensation scheme works on a table lookup basis: given the present beta angle, the appropriate compensation voltage is added to the control input signal in order to cancel out the torque produced by the springs. A graph showing this mapping is displayed in Figure 3.31.

3.4 Experimentally Determined Motor Parameters

The model parameters, K and a, of the previously defined motor model can be determined for each of the drive axes using a simple input-output technique [28]. The response of either motor drive system to a step of size Em is given by:

Theta(s) = Em K / (s^2 (s + a))   (3.4)

If theta(t) = 0 at t = 0, then the inverse Laplace transform of the response of the plant to a step is given by:

theta(t) = (Em K / a^2) {a t - (1 - e^(-a t))}   (3.5)

The contribution of the exponential component rapidly becomes insignificant and the motor angular velocity is constant after a transient period. Therefore, after the transient period:

theta(t) ~= (Em K / a^2) {a t - 1}   (3.6)

If this equation is subtracted from the line defined by

theta_line(t) = (Em K / a) t   (3.7)

then the resulting change in angular position is given by:

Delta theta = Em K / a^2   (3.8)

The angular velocity of the motor after the transient period is given by:

d theta / dt = Em K / a   (3.9)

Equations 3.8 and 3.9 provide enough information to determine the model parameters K and a if the output response of the plant is known for a given step input. The step responses of the theta and beta drive systems are shown in Figures 3.32 and 3.33 respectively.
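The slope-and-offset identification just derived can be sketched as follows. The helper name and the assumption that the transient has died out over the second half of the record are mine; the synthetic check reuses the thesis's theta-axis constants:

```python
import numpy as np

def identify_motor(t, theta, E_m):
    """Estimate K and a of the plant Theta(s)/V(s) = K/(s(s+a)) from a
    voltage step response, using the ramp asymptote of equations 3.8 and
    3.9: after the transient the response is a line of slope Em*K/a whose
    extrapolated intercept lies Em*K/a^2 below the origin."""
    half = len(t) // 2
    slope, intercept = np.polyfit(t[half:], theta[half:], 1)
    a = -slope / intercept          # (Em K / a) / (Em K / a^2)
    K = slope * a / E_m
    return K, a

# Synthetic check using the thesis's theta-axis constants (K = 561.1, a = 50):
K_true, a_true, E_m = 561.1, 50.0, 1.0
t = np.linspace(0.0, 0.5, 2000)
theta = (E_m * K_true / a_true**2) * (a_true * t - (1.0 - np.exp(-a_true * t)))
K_est, a_est = identify_motor(t, theta, E_m)
```

Fitting a straight line to the post-transient portion of the response recovers both the asymptotic slope (eq. 3.9) and the intercept offset (eq. 3.8) in one step.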
Figure 3.32: Theta Drive Axis: Plant Step Response (horizontal axis: time, milliseconds)

Figure 3.33: Beta Drive Axis: Plant Step Response (horizontal axis: time, milliseconds)

The described method was used to determine the model parameters for each of the drive axes. The resulting constants were found to be:

Theta Axis: K_theta = 561.1 rad/(V s^2), a_theta = 50
Beta Axis: K_beta = 1388 rad/(V s^2), a_beta = 200

3.5 Motor PID Controllers and Motor Step Response

PID controllers were selected to drive the mirror scanner motors in the prototype system because of their appropriateness to the control of single input, single output linear systems. Specifically, this class of controllers permits tuning of the natural frequency and damping factor of linear second order systems through selection of the PID parameter values. The crucial features of a discrete PID controller will be presented in the following discussion; a more thorough development may be found in the given references [28], [29]. The PID controller output, e2(nT), for a given error signal input, e1(nT), is given by:

e2(nT) = Kp e1(nT) + Ki T SUM(k=1 to n) e1(kT) + (Kd / T) (e1(nT) - e1((n-1)T))   (3.10)

where Kp, Ki and Kd are the proportional, integral and derivative controller parameters, and T is the sampling time. The integral component was included in the implemented controller in order to eliminate steady state errors from the target tracking station. In practice, limits were set on the active range of the integral component and on the maximum accumulated error in order to avoid system windup problems. Stable controller parameters were selected using the plant parameters determined in Section 3.4 and Aylor's stability curves [28]. The control parameters were then experimentally tuned to give as fast a response as possible with little overshoot.
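The discrete PID of equation 3.10, together with the windup guards mentioned above, can be sketched as a small class. The parameter values and the specific guard thresholds below are illustrative, not the tuned values of Table 4.5:

```python
class DiscretePID:
    """Discrete PID of equation 3.10 with two windup guards: integration
    is active only inside an error band, and the accumulated error is
    clamped.  A sketch, not the implemented controller."""

    def __init__(self, Kp, Ki, Kd, T, int_band=1.0, int_limit=10.0):
        self.Kp, self.Ki, self.Kd, self.T = Kp, Ki, Kd, T
        self.int_band = int_band      # integrate only while |error| < band
        self.int_limit = int_limit    # clamp on the accumulated error sum
        self.acc = 0.0                # running sum of error samples
        self.prev = 0.0               # previous error sample

    def step(self, error):
        if abs(error) < self.int_band:
            self.acc = max(-self.int_limit,
                           min(self.int_limit, self.acc + error))
        out = (self.Kp * error
               + self.Ki * self.T * self.acc
               + (self.Kd / self.T) * (error - self.prev))
        self.prev = error
        return out
```

With a persistent error inside the band, the integral term ramps until the clamp engages, which is exactly the behaviour the accumulated-error limit is meant to bound.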
The step responses of the motors, for the controller parameters defined in Table 4.5, are shown in Figures 3.34 and 3.35. The theta step response has a 10%-90% rise time of 30 milliseconds and a maximum overshoot of approximately 2%. The beta step response has a 10%-90% rise time of 14 milliseconds and a maximum overshoot of approximately 2%.

Figure 3.34: Theta Axis Step Response (horizontal axis: time, milliseconds)

Figure 3.35: Beta Axis Step Response (horizontal axis: time, milliseconds)

Chapter 4
Tracking System Modelling and Control

The control strategy applied to the target tracking system is displayed in Figure 4.36. Production of the signals Sy, Dy, Sx and Dx by the camera detector and associated signal processing instrumentation is presented in Chapter 2. These signals are input into the processor and are used to determine the position ((x_psd, y_psd) = (x_det, y_det)) of the imaged target on the position sensitive detector's surface (Equations 2.13 and 2.14). The camera gain controller switches the camera gain based on the present Sy and Sx input levels. The mirror optic deflection system and associated drive axis PID controllers are described in Chapter 3. The remaining blocks of the control strategy diagram describe an algorithm which determines the desired theta and beta angles, based on the present theta and beta angles and the imaged target position, in order to move the imaged target towards the centre of the camera detector. This algorithm was created after observing the mapping of the imaged target on the detector surface for incremental changes in mirror theta and beta angles. Mapping was performed by modelling the optics of the target tracking system.
Convergence of the algorithm was confirmed in model simulation before being implemented on the developed system. The following sections describe:
1. the camera gain controller;
2. the camera model (mathematical model and examples of the mapping of the imaged target onto the detector surface);
3. the results of static testing performed on the target tracking station; and
4. the results of dynamic testing performed on the target tracking station.

Figure 4.36: Target Tracking System Control Strategy

4.1 Camera Gain Controller

The camera gain controller switches the camera gain based on the present Sy and Sx input levels. If the absolute value of both of these signals is lower than the upswitch threshold, the gain of the camera is stepped up. If the absolute value of both of these signals is above the downswitch threshold, the gain of the camera is stepped down. The threshold levels are software selectable and were set to 1 volt (upswitch threshold) and 4.8 volts (downswitch threshold) in the prototype system. The camera gains are dynamically switched to compensate for variations in detected light intensity due to movement of the target relative to the target tracking station or due to a change in orientation of the target. Target orientation changes alter the detected light intensity because target sources are directional and are not perfect point light sources.
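The switching rule just described amounts to a hysteresis band on the sum signals. A sketch of the decision logic (the function and its gain-index representation are mine, not the instrument's software; the default thresholds are the prototype's 1 V and 4.8 V):

```python
def update_gain(gain_index, sum_x, sum_y,
                up_threshold=1.0, down_threshold=4.8, n_levels=4):
    """Step the gain up when the absolute value of both sum signals is
    below the upswitch threshold, down when both are above the downswitch
    threshold.  gain_index selects one of the four channel gains
    (approximately 2, 10, 85, 375)."""
    both_low = max(abs(sum_x), abs(sum_y)) < up_threshold
    both_high = min(abs(sum_x), abs(sum_y)) > down_threshold
    if both_low and gain_index < n_levels - 1:
        return gain_index + 1          # weak signal: more gain
    if both_high and gain_index > 0:
        return gain_index - 1          # strong signal: less gain
    return gain_index
```

Because the two thresholds are well separated, a signal sitting between them leaves the gain unchanged, preventing the controller from chattering between adjacent gain levels.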
4.2 Camera Model

The strategy applied to the modelling of the system optics is to determine the position and orientation of the virtual camera which is created by the mirror in the optical system. The position of the imaged target on the virtual camera's position sensitive detector is identical to the imaged target position on the real camera detector. The camera model must be determined for simulation of control models and for estimation of control parameters in the working system. The camera model section is divided into two parts:
1. development of the mathematical model of the tracking system optics, and
2. target image movement about a set point.
Target image movement about a set point refers to the mapping created when a target is imaged onto the detector surface for incremental changes in theta and beta. This mapping was used in the development of the algorithm that drives the imaged target to the centre of the detector.

4.2.1 Mathematical Model Of The Tracking System Optics

The optical system can be modelled using common vector analysis techniques. The model parameters are as shown in Figure 4.37.

Figure 4.37: Optical System Model: Graphic Representation

Comprehension of the indicated geometry may be aided by referring to the schematic of the mirror gimbal (Figure 3.27) and the angular reference axes (Figure 3.28). The end result of this analysis is to determine the light spot position on the position sensitive detector surface, given the target coordinates and the mirror rotation angles beta and theta, for fixed parameter values dos, dp, F, and b. The fixed parameter values are defined as follows:
1. dos is the offset distance between the mirror and a parallel plane which contains the origin of the (xp, yp, zp) coordinate system;
2.
dp is the distance, in the real camera z direction, from the reference coordinate system (xo, yo, zo) to the mirror rotation coordinate system (xp, yp, zp);
3. F is the focal length of the lens; and
4. b is the width of the position sensitive detector.

The analytical method used determines the position (VirOrigin) and orientation (xv, yv, zv) of the coordinate system of a virtual camera which is created by the mirror (Figure 4.37). The light spot position on the virtual camera corresponds directly to the light spot position on the real camera. The mirror system rotates:
1. beta degrees about an axis created when the xp axis is rotated through theta degrees about the zp axis, and
2. theta degrees about the zp axis (Figure 4.38).
If further clarification is needed regarding the definition of the rotation axes, see Figure 3.28. The mirror is offset by a distance dos from the yp axis when beta is zero radians. The mirror rotation coordinate frame (xp, yp, zp) is translated a distance dp along the zo axis from the reference coordinate system (xo, yo, zo). The reference coordinate system (xo, yo, zo) is located at the rear nodal point of the lens, with the zo axis on the lens system optical axis pointing in the direction of light exiting the lens. The position sensitive detector is positioned on the focal plane of the lens, centred at the point (0, 0, -F), and is of dimension b x b. The target light is located at the point Posr = [xr, yr, zr].

Figure 4.38: Mirror Rotation Angle Definition (rotation shown for theta = 0 degrees)

The first portion of the analysis concentrates on determining the primary reflection vector r0, which is the reflection of the incident vector i0 = [0, 0, 1]. The primary reflection vector, r0, is in the direction of the optical axis, zv, of the virtual camera system.
The mirror normal vector, nm, is the same as the normal to the plane of rotation. The plane of rotation is the plane parallel to the mirror plane which passes through the origin of the (xp, yp, zp) coordinate frame; the plane of rotation coincides with the mirror plane when dos = 0. The mirror normal vector can be derived by considering successive rotations of the vector [0, 0, -1], which is the mirror normal when theta and beta are both zero. Mirror rotation of beta degrees about an axis corresponding to the xp axis rotated through theta degrees results in a normal vector described by:

[0, sin(beta), -cos(beta)]^T = [[1, 0, 0], [0, cos(beta), -sin(beta)], [0, sin(beta), cos(beta)]] [0, 0, -1]^T   (4.1)

in terms of the (xp, yp, zp) coordinate frame rotated theta degrees about the zp axis. In terms of the (xp, yp, zp) coordinate frame, the normal vector is therefore given by:

[-sin(theta) sin(beta), cos(theta) sin(beta), -cos(beta)]^T = [[cos(theta), -sin(theta), 0], [sin(theta), cos(theta), 0], [0, 0, 1]] [0, sin(beta), -cos(beta)]^T   (4.2)

If the length of the vector is scaled so that its z component is equal to -1, then the mirror normal vector varies with theta and beta as:

nm = [-sin(theta) tan(beta), cos(theta) tan(beta), -1]   (4.3)

The zo axis intersects the mirror at the point Pmir = [0, 0, K1], where K1 = dp + dos / cos(beta). The geometry of this calculation can be seen more easily in Figure 4.39.

Figure 4.39: Determination of Mirror and Z-Axis Intersection Point (a = dos / cos(beta))

A new orthogonal basis S = [xm, ym, zm] is established, where zm = nm and xm and ym are in the mirror plane. The vector xm can be constructed along the line which is the intersection of the mirror plane and the plane xo = 0.
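The two-rotation construction of the mirror normal can be checked numerically. A small sketch (function name mine; angles in radians):

```python
import numpy as np

def mirror_normal(theta, beta):
    """Mirror normal of equation 4.3 built from the two rotations of
    equations 4.1 and 4.2, scaled so the z component equals -1.  A
    numerical check of the derivation, not instrument code."""
    rot_x = np.array([[1.0, 0.0, 0.0],
                      [0.0, np.cos(beta), -np.sin(beta)],
                      [0.0, np.sin(beta),  np.cos(beta)]])
    rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0, 0.0, 1.0]])
    n = rot_z @ rot_x @ np.array([0.0, 0.0, -1.0])   # eq. 4.1 then eq. 4.2
    return n / -n[2]                                 # scale z component to -1

# Agrees with the closed form of eq. 4.3:
theta, beta = 0.3, 0.4
closed_form = np.array([-np.sin(theta) * np.tan(beta),
                         np.cos(theta) * np.tan(beta),
                        -1.0])
```

Composing the beta rotation first and the theta rotation second reproduces the tan(beta) scaling of the closed form, since dividing by cos(beta) is exactly the normalization that sets the z component to -1.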
For the described system, xm is given by:

xm = [0, 1, cos(theta) tan(beta)]   (4.4)

The vector ym can then be found from the cross-product:

ym = zm x xm = [cos^2(theta) tan^2(beta) + 1, sin(theta) cos(theta) tan^2(beta), -sin(theta) tan(beta)]   (4.5)

The reflection vector, rvector, of any incident vector, ivector, described in terms of the reference system, is then given by the transformation:

rvector^T = S [[1, 0, 0], [0, 1, 0], [0, 0, -1]] S^(-1) ivector^T = H ivector^T   (4.6)

where

H = [[h11, sin^2(beta) sin(2 theta), -sin(theta) sin(2 beta)],
     [sin^2(beta) sin(2 theta), -2 cos^2(theta) sin^2(beta) + 1, cos(theta) sin(2 beta)],
     [-sin(theta) sin(2 beta), cos(theta) sin(2 beta), -cos(2 beta)]]

with

h11 = 2 cos^2(theta) - 2 cos^2(theta) cos^2(beta) + 2 cos^2(beta) - 1   (4.7)

The primary reflection vector r0, which is the reflection of the primary incident vector i0 = [0, 0, 1], can now be determined:

r0^T = H [0, 0, 1]^T
r0 = [-sin(theta) sin(2 beta), cos(theta) sin(2 beta), -cos(2 beta)]   (4.8)

The primary reflection vector, r0, is in the direction of the optical axis zv of the virtual camera system and is always of unit magnitude. There is now enough information to determine the virtual origin of the virtual camera system, which is the geometrical equivalent of the real origin of the real camera system. The virtual origin is located at a distance of K1 along -r0 from Pmir = [0, 0, K1], where K1 is the distance from the real camera coordinate frame to the mirror along the optical axis, as illustrated in Figure 4.39. In equation form, the virtual origin is given by:

VirOrigin = Pmir - K1 r0   (4.9)

The centre of the virtual camera system, VirOrigin, and one of the virtual axes, zv, have now been determined. It remains to determine xv and yv to complete the virtual camera coordinate system.
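The reflection transformation can also be written as a Householder matrix, H = I - 2 n n^T with n the unit mirror normal, which is an equivalent alternative to the change-of-basis form of equation 4.6 and reproduces the closed-form entries above:

```python
import numpy as np

def reflection_matrix(theta, beta):
    """Reflection about the mirror plane as a Householder matrix
    H = I - 2 n n^T, with n the unit-length mirror normal.  An equivalent
    construction to eq. 4.6, used here as a numerical check."""
    n = np.array([-np.sin(theta) * np.sin(beta),
                   np.cos(theta) * np.sin(beta),
                  -np.cos(beta)])                 # unit mirror normal
    return np.eye(3) - 2.0 * np.outer(n, n)

theta, beta = 0.5, 0.3
H = reflection_matrix(theta, beta)
r0 = H @ np.array([0.0, 0.0, 1.0])                # primary reflection, eq. 4.8
closed_form = np.array([-np.sin(theta) * np.sin(2 * beta),
                         np.cos(theta) * np.sin(2 * beta),
                        -np.cos(2 * beta)])
```

A reflection applied twice returns every vector to itself, so H multiplied by itself is the identity, a convenient sanity check on the matrix entries.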
The intersection point of the mirror plane and the ray that emanates from the position sensitive detector at the point x_psd = b/2, y_psd = 0 and passes through the reference coordinate point (0, 0, 0) will be referred to as P_xb/2. The intersection point of the mirror plane and the ray that emanates from the position sensitive detector at the point x_psd = 0, y_psd = b/2 and passes through the point (0, 0, 0) will be referred to as P_yb/2. These intersection points are given by:

    P_xb/2 = [ −(b/2F)·K1 / (1 − (b/2F)sin(θ)tan(β)),  0,  K1 / (1 − (b/2F)sin(θ)tan(β)) ]   (4.10)

    P_yb/2 = [ 0,  −(b/2F)·K1 / (1 + (b/2F)cos(θ)tan(β)),  K1 / (1 + (b/2F)cos(θ)tan(β)) ]   (4.11)

The perpendicular distance from these intersection points back to the reference coordinate system plane z_0 = 0 is given by the third elements of P_xb/2 and P_yb/2. The vector obtained by moving from the point P_xb/2, along a ray parallel to the optical axis towards the lens, a distance corresponding to the third element of P_xb/2, must be in the −x_0 direction:

    x_dir = P_xb/2 − P_xb/2(3)·[0, 0, 1] = [ −(b/2F)·K1 / (1 − (b/2F)sin(θ)tan(β)),  0,  0 ]   (4.12)

The vector obtained by moving from the point P_yb/2, along a ray parallel to the optical axis towards the lens, a distance corresponding to the third element of P_yb/2, must be in the −y_0 direction:

    y_dir = P_yb/2 − P_yb/2(3)·[0, 0, 1] = [ 0,  −(b/2F)·K1 / (1 + (b/2F)cos(θ)tan(β)),  0 ]   (4.13)

The same geometry applies to the virtual camera. The vectors obtained by moving from the points P_xb/2 and P_yb/2, along rays parallel to the virtual optical axis towards the lens, distances corresponding to the third elements of P_xb/2 and P_yb/2 respectively, must be the endpoints of vectors in the −x̂_v and −ŷ_v directions.
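The intersection points (4.10) and (4.11) follow from a generic ray-plane intersection with the mirror plane through (0, 0, K1). A sketch under assumed values of b, F and K1 (all illustrative, not from the thesis; ray direction signs follow the −x_0/−y_0 convention above):

```python
import numpy as np

def mirror_hit(theta_deg, beta_deg, K1, direction):
    """Intersection of the ray s*direction (from the lens centre at the
    origin) with the mirror plane through (0, 0, K1) whose normal is the
    n_m of eq. (4.3). Solve n . (s d - [0, 0, K1]) = 0 for s."""
    t, b = np.radians(theta_deg), np.radians(beta_deg)
    n = np.array([-np.sin(t) * np.tan(b), np.cos(t) * np.tan(b), -1.0])
    d = np.asarray(direction, float)
    s = -K1 / np.dot(n, d)
    return s * d

b_det, F, K1 = 0.013, 0.05, 0.15   # assumed detector size, focal length, standoff (m)
theta, beta = 10.0, 35.0
P_xb2 = mirror_hit(theta, beta, K1, [-b_det / (2 * F), 0.0, 1.0])   # eq. (4.10)
P_yb2 = mirror_hit(theta, beta, K1, [0.0, -b_det / (2 * F), 1.0])   # eq. (4.11)
```

The z component of `P_xb2` reproduces the K1/(1 − (b/2F)sinθ tanβ) denominator of eq. (4.10) directly from the plane equation.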
If these vectors are designated x_endpoint and y_endpoint respectively, then applying the same analysis to the virtual camera gives:

    x_endpoint = P_xb/2 − P_xb/2(3)·r̂_0   (4.14)

    y_endpoint = P_yb/2 − P_yb/2(3)·r̂_0   (4.15)

The origin of the virtual camera system is at VirOrigin, therefore vectors in the direction of the virtual coordinate axes x̂_v and ŷ_v respectively may be found using:

    x_vdir = VirOrigin − x_endpoint   (4.16)

    y_vdir = VirOrigin − y_endpoint   (4.17)

The vectors x̂_v and ŷ_v are the unit vector equivalents of x_vdir and y_vdir:

    x̂_v = x_vdir / ||x_vdir||,   ŷ_v = y_vdir / ||y_vdir||   (4.18)

The virtual camera coordinate system is now complete, with coordinate axes [x̂_v, ŷ_v, ẑ_v] and origin VirOrigin relative to the reference coordinate system. To determine the position of the target relative to the virtual camera system the following transformations are necessary:

1. Translation of the real target position to the virtual origin:

    xyz_translation = Pos_r[x_r, y_r, z_r] − VirOrigin[x_vo, y_vo, z_vo]   (4.19)

2. Rotation of the translated target position to correspond to the new virtual camera coordinate axes:

    Pos_v = [x̂_v, ŷ_v, ẑ_v]ᵀ xyz_translationᵀ   (4.20)

The new coordinates of the target relative to the virtual camera coordinate system are Pos_v = [x_pv, y_pv, z_pv]. To determine the position of the image of the target on the position sensitive detector, a simple mapping from the virtual target coordinates to position sensitive detector coordinates is performed:

    x_psd = x_pv·F / z_pv,   y_psd = y_pv·F / z_pv   (4.21)

4.2.2 Target Image Movement About A Set Point

The preceding analysis can be used to determine the target image point location on the position sensitive detector when the mirror angles are perturbed about a known lock centre. The lock centre is defined as the angular position of beta and theta required so that the target image point is centred on the position sensitive detector for given geometric parameters.
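The whole chain (4.3)-(4.21) can be composed into a single forward model that maps a target point in the reference frame to its image on the PSD. In the sketch below (Python with NumPy) the function name and the numeric K1, F, b values are illustrative assumptions; the check confirms that a target placed on the virtual optical axis images at the detector centre:

```python
import numpy as np

def image_on_psd(theta_deg, beta_deg, K1, F, b, target):
    """Forward model of eqs. (4.3)-(4.21): reference-frame target point
    to position-sensitive-detector image coordinates."""
    t, be = np.radians(theta_deg), np.radians(beta_deg)
    n = np.array([-np.sin(t) * np.tan(be), np.cos(t) * np.tan(be), -1.0])
    r0 = np.array([-np.sin(t) * np.sin(2 * be),
                    np.cos(t) * np.sin(2 * be),
                   -np.cos(2 * be)])                       # eq. (4.8)
    p_mir = np.array([0.0, 0.0, K1])
    vir_origin = p_mir - K1 * r0                            # eq. (4.9)

    def hit(d):                                             # ray-mirror intersection
        d = np.asarray(d, float)
        return (-K1 / np.dot(n, d)) * d

    p_x = hit([-b / (2 * F), 0.0, 1.0])                     # eq. (4.10)
    p_y = hit([0.0, -b / (2 * F), 1.0])                     # eq. (4.11)
    x_end = p_x - p_x[2] * r0                               # eq. (4.14)
    y_end = p_y - p_y[2] * r0                               # eq. (4.15)
    x_v = (vir_origin - x_end) / np.linalg.norm(vir_origin - x_end)  # (4.16), (4.18)
    y_v = (vir_origin - y_end) / np.linalg.norm(vir_origin - y_end)  # (4.17), (4.18)

    pos_v = np.array([x_v, y_v, r0]) @ (np.asarray(target, float) - vir_origin)  # (4.19)-(4.20)
    return F * pos_v[0] / pos_v[2], F * pos_v[1] / pos_v[2]                      # (4.21)

# With theta = 0, beta = 45 deg: r0 = [0, 1, 0] and VirOrigin = [0, -K1, K1],
# so [0, 0.85, 0.15] lies on the virtual optical axis for K1 = 0.15.
x_im, y_im = image_on_psd(0.0, 45.0, 0.15, 0.05, 0.013, [0.0, 0.85, 0.15])
```

Perturbing θ and β about a lock centre while holding the target fixed in this model is exactly how the image maps of the next section are generated.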
The movement of theta and beta in known increments about this centre produces a mapping showing how the light spot will be imaged for specific changes in theta and beta about a known lock centre. This mapping provides estimates of the inverse Jacobian of the system, relating changes in mirror angles to changes in target image position, in a localized region. The analysis can be simplified by performing a transformation of the imaged target coordinates from the PSD axes to the rotated axes system (Fig. 4.40).

[Figure 4.40: PSD Coordinate Definitions]

If another set of axes is located on the PSD surface and rotated through an angle θ (the present θ angle of the gimbaled mirror), then the transformed imaged target coordinates are given by:

    [ x_rot ]   [  cos(θ)  sin(θ) ] [ x_psd ]
    [ y_rot ] = [ −sin(θ)  cos(θ) ] [ y_psd ]   (4.22)

If the target is positioned so that the lock centre is at β = 45 degrees and θ = 0 degrees, the following contours are obtained for 0.25 degree increments in β and 0.5 degree increments in θ (Fig. 4.41). The mappings presented in this section (Figures 4.41, 4.42, and 4.43) are displayed after the rotation has taken place; the displayed axes are therefore the x_rot and y_rot axes.

[Figure 4.41: Imaged Target Position About The Lock Centre β = 45 Degrees And θ = 0 Degrees: β Increment = 0.25 degrees & θ Increment = 0.5 degrees. Axes: PSD X and Y Rotated Position, −7.00 to 7.00 mm.]

Similar image maps can be constructed for the entire workspace of the target tracking system. The results obtained from the described mapping procedure indicate that maps along constant beta arcs are nearly identical. Maps along a constant beta arc consist of the set of maps obtained for a given set of lock centres, all having a common beta angle, β.
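The axis rotation of eq. (4.22), applied before every map in this section, is a plain two-dimensional rotation; a minimal sketch:

```python
import numpy as np

def to_rotated(x_psd, y_psd, theta_deg):
    """Eq. (4.22): imaged target coordinates expressed in PSD axes rotated
    through the present theta angle of the gimbaled mirror."""
    t = np.radians(theta_deg)
    rot = np.array([[np.cos(t),  np.sin(t)],
                    [-np.sin(t), np.cos(t)]])
    return rot @ np.array([x_psd, y_psd], dtype=float)

xy = to_rotated(1.0, 0.0, 90.0)   # a point on the +x_psd axis, theta = 90 deg
```

At θ = 0 the rotated axes coincide with the PSD axes, so the transform is the identity there.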
Changes in image map contours are much more responsive to changes in beta angle along constant theta arcs. Maps along a constant theta arc consist of the set of maps obtained for a given set of lock centres, all having a common theta angle, θ. Map transformations when moving between adjacent lock centres are sensitive to changes in beta angle and relatively insensitive to changes in theta. The following image maps for various lock centres illustrate this property (Fig. 4.42).

[Figure 4.42: Imaged Target Position Maps For Various Lock Centres (β = 45 degrees, θ = 45 degrees; and β = 45 degrees, θ = −45 degrees): β Increment = 0.1 degrees & θ Increment = 0.5 degrees]

These modelled results suggest that the estimates of the inverse Jacobian of the system may be realized as a function of beta angle alone. This is significant, as the set of imaged target position maps obtained along a constant theta arc may then be expanded to represent the entire workspace of the system. The maps of the target light position obtained for lock centres along the constant theta arc θ = 0 degrees were selected to illustrate map variations for a set of lock centres which vary from β = 20 degrees to β = 70 degrees in 5 degree increments (Fig. 4.43).

[Figure 4.43: Comparison of Imaged Target Positions For Lock Centres β = 20 to 70 Degrees And θ = 0 Degrees: β Increment = 5.0 degrees & θ Increment = 0.5 degrees]

The preceding analysis was used in the development of the algorithm which determines estimates of the required changes in the theta (Δθ) and beta (Δβ) angles needed to drive the imaged target to the detector's centre (Figure 4.36). It was noted that: 1. once the rotation is performed, the mapping is similar throughout the workspace; and 2. the linearity of the mapping increases towards the detector's centre. Also, the desired image target position is the centre of the detector; therefore, selection of constant values of ∂θ/∂x and ∂β/∂y was attempted in the control strategy in order to simplify system control. This scheme models the nonlinear mapping as a linear grid and produces estimates of the required changes in theta (Δθ) and beta (Δβ) needed to drive the imaged target to the detector's centre. The required changes in θ and β are given by:

    Δθ = (∂θ/∂x)·Δx   (4.23)

and

    Δβ = (∂β/∂y)·Δy   (4.24)

where (Δx, Δy) = (x_rot, y_rot) is the present imaged target position after rotation through θ degrees (Figure 4.40). The results of simulations performed on the model using this control algorithm indicated convergence of the imaged target to the detector's centre for constant values of ∂θ/∂x and ∂β/∂y. In the developed system, constant values of ∂θ/∂x = 0.3117 degrees/mm and ∂β/∂y = 0.1673 degrees/mm were used in the prototype target tracking station. These values were found to give good performance over the working volume of the system (see Section 4.3). Updates to the theta and beta angles occurred on every third sampling interval (3 msec for sampling intervals of 1 msec).

4.3 Target Tracking System Static Testing

Static testing of the target tracking system was performed in three areas of the instrument's workspace. The three tests are distinguished by the approximate beta and theta angles which were obtained as a result of target locking. These angles, along with the associated test names, are shown in Table 4.4. Stated angles are approximate, as a calibration of the system has not been performed.
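The linearized control law of eqs. (4.23)-(4.24), with the constant prototype gains quoted above, can be sketched as follows. The function name is ours, and pairing 0.3117 deg/mm with ∂θ/∂x and 0.1673 deg/mm with ∂β/∂y follows the order in which the text lists the two gains:

```python
import math

# Constant inverse-Jacobian gains quoted for the prototype (Section 4.2.2).
DTHETA_DX = 0.3117   # degrees of theta per mm of rotated-x image offset
DBETA_DY  = 0.1673   # degrees of beta per mm of rotated-y image offset

def angle_updates(x_psd_mm, y_psd_mm, theta_deg):
    """Rotate the imaged target position through the present theta angle
    (eq. 4.22), then apply the constant-gain linear map of eqs. (4.23)-(4.24)
    to obtain the angle corrections (delta_theta, delta_beta) in degrees."""
    t = math.radians(theta_deg)
    x_rot =  math.cos(t) * x_psd_mm + math.sin(t) * y_psd_mm
    y_rot = -math.sin(t) * x_psd_mm + math.cos(t) * y_psd_mm
    return DTHETA_DX * x_rot, DBETA_DY * y_rot

d_theta, d_beta = angle_updates(1.0, 0.0, 0.0)   # 1 mm x error at theta = 0
```

In the prototype this correction was applied on every third 1 ms sampling interval, i.e. every 3 ms.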
    Test Name             Approximate Theta Angle   Approximate Beta Angle
    Centred Target Test      2 degrees                41 degrees
    Right Target Test       34 degrees                53 degrees
    Left Target Test       −38 degrees                28.22 degrees

    Table 4.4: Static Testing: Approximate Theta and Beta Angles

The motor parameters used for each of the static locking tests are displayed in Table 4.5.

    Drive Motor   Kp    Kd     Ki
    Theta Axis    2.5   0.03   0.1
    Beta Axis     3.0   0.02   1.0

    Table 4.5: Static Testing: PID Controller Values

The controller's sampling time was set to 1 millisecond. Data collection for each of the static tests proceeded after the stationary target had been captured and after waiting a short period to allow any transients to be damped. The motor angles, beta and theta, and the detector camera signals, Δy, Σy, Δx, and Σx, were recorded over 10 seconds at intervals of 1 millisecond, resulting in the collection of 10,000 data points for each variable. The frequency of occurrence of each of the measured beta and theta angles for each of the tests is displayed in Figures 4.44, 4.45, and 4.46. The relative change in angle is displayed, as opposed to specific angle values, because the system has not been calibrated and therefore absolute angles are not known.

[Figure 4.44: Histogram of Centred Target Test]

[Figure 4.45: Histogram of Right Target Test: θ angle and β angle (bar spacing = one bit)]

[Figure 4.46: Histogram of Left Target Test: θ angle and β angle (bar spacing = one bit)]

Adjacent bars in each of the preceding histograms represent a single-bit change in the recorded angular data, which is equal to 0.034 degrees for the theta histograms and 0.018 degrees for the beta histograms. The maximum excursion of the imaged target position from the detector's centre was ±0.07 millimeters.
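The histograms of Figures 4.44-4.46 are frequency counts of encoder readings, one bar per encoder bit, plotted relative to the lock centre. A sketch of that reduction (Python; the helper name and the sample counts are ours, while the degrees-per-bit constants are the values quoted above):

```python
from collections import Counter

THETA_DEG_PER_BIT = 0.034   # one-bit change in the theta angle data, degrees
BETA_DEG_PER_BIT  = 0.018   # one-bit change in the beta angle data, degrees

def angle_histogram(raw_counts, deg_per_bit):
    """Histogram of relative angular excursions, as in Figs. 4.44-4.46:
    readings are binned per encoder bit and reported in degrees about the
    most frequently occurring (lock-centre) value."""
    freq = Counter(raw_counts)
    centre = freq.most_common(1)[0][0]
    return {round((k - centre) * deg_per_bit, 3): v
            for k, v in sorted(freq.items())}

# Hypothetical raw encoder readings hovering one bit either side of lock.
hist = angle_histogram([100, 100, 101, 99, 100, 101, 100], THETA_DEG_PER_BIT)
```

Each key is an excursion in degrees (multiples of the one-bit resolution) and each value is its occurrence count over the 10,000 recorded samples.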
4.4 Target Tracking System Dynamic Testing

Dynamic testing of the target tracking system was performed using a motor to spin the target in a circular trajectory at a constant speed. The target and its spinning apparatus were positioned relative to the target tracking system to achieve the following geometry: the plane of rotation of the target was oriented parallel to the x_p–z_p plane of the target tracking system at an offset distance of approximately 70 cm, and the centre of the traced circle was positioned on the y_p axis of the mirror coordinate frame (Figure 4.47). The target traces a circle of radius 23.5 cm at an offset distance of 70 cm from the x_p–z_p plane.

[Figure 4.47: Dynamic Testing Configuration]

This geometry was selected because the target remains at a constant radial distance (74 cm) from the mirror for all positions of the target. Consequently, the excursion of the imaged target from the detector's centre is indicative of the tracking error of the motors. In this test configuration a radial error of 1 mm on the photodiode detector translates to a mirror tracking error of 10.6 mm (target distance 80 cm). Mirror tracking error refers to the perpendicular distance from the position of an imaginary target to the actual target. The imaginary target is located in the same focal plane as the actual target for the given theta and beta angles, and would be imaged at the centre of the camera detector given the present mirror angles (Figure 4.48). If the imaged target position falls off the centre of the detector, a closer estimate of the vector to the target can be obtained by using the detector information as well as the angles of the mirror rotation axes. The camera is able to track target image signals which are of lower frequency than the cutoff frequency of the camera (≈ 133 Hz).
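The quoted scale factor (a 1 mm radial image error corresponds to a 10.6 mm mirror tracking error at a target distance of 80 cm) can be packaged as a helper. Treating the error as growing linearly with target distance is our assumption, taken from the similar-triangles geometry of Figure 4.48:

```python
import math

# From the text: 1 mm of radial image error corresponds to 10.6 mm of
# mirror tracking error at a target distance of 80 cm.
MM_LAG_PER_MM_IMAGE_AT_80CM = 10.6

def mirror_tracking_error_mm(x_psd_mm, y_psd_mm, target_dist_cm=80.0):
    """Mirror tracking error (mm) from the imaged target's PSD position,
    assuming the error scales linearly with target distance."""
    radial = math.hypot(x_psd_mm, y_psd_mm)   # radial image error, mm
    return radial * MM_LAG_PER_MM_IMAGE_AT_80CM * (target_dist_cm / 80.0)

lag_40rpm  = mirror_tracking_error_mm(2.0, 0.0)   # max image radius, 40 rpm test
lag_150rpm = mirror_tracking_error_mm(5.5, 0.0)   # max image radius, 150 rpm test
```

Applied to the maximum image radii reported for the two dynamic tests (2 mm and 5.5 mm), this reproduces the quoted tracking lags of 2.12 cm and 5.83 cm.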
The position and orientation of the virtual camera can be determined for given mirror axis angles (see Section 4.2 on the camera model). The ray to the target can then be determined using this model, provided the virtual camera position and orientation, the target image position on the camera, and a camera calibration model to compensate for nonlinearities in the system optics are known.

[Figure 4.48: Tracking Error Geometry]

Two tests were performed using the described experimental setup. In these tests the target was rotated at 40 rpm initially and then sped up to 150 rpm. The motor angles and the imaged target position on the camera detector were recorded at each sample time. The motor parameters for the dynamic testing were set to the same values as for the static tests (Table 4.5). The sampling rate was set to 1 msec. The results of these two dynamic tests are illustrated in Figures 4.49, 4.50, 4.51, and 4.52.

[Figures 4.49 and 4.50: 40 RPM test results. Figure 4.51: 150 RPM Test Theta and Beta Angle vs Time. Figure 4.52: 150 RPM Test X And Y Detector Positions.]

As shown previously, the tracking error can be derived from the position of the image on the detector (Figures 4.50 and 4.52). The radial distance of the imaged target from the detector's centre is the square root of the sum of the squares of the position sensitive detector x and y positions. Consider first the low frequency components of the imaged target detector positions. The phase of the x and y detector position waveforms is indicative of a circular target image trajectory.
For example, an imaged target which traces a circular path of radius r at a constant angular velocity about the detector's centre would have x and y imaged target locations of x = r·cos(γ) and y = r·sin(γ), where γ is the instantaneous angular position. This result is consistent with the low frequency components of the waveforms in Figures 4.50 and 4.52. The high frequency components of the imaged target detector position are manifested as a variation of the radius of the traced circular path as the imaged target moves about the detector's centre. These high frequency components are due to the linearization of the detector mapping when determining values of Δθ and Δβ in the control algorithm (Section 4.2.2). They occur when the target image crosses a switching line, causing the controller output to switch sign. The switching lines (the x_rot and y_rot axes) of the controller vary with the θ drive angle (Figure 4.40). Also, the linearized target mapping results in poor estimates of Δβ for imaged targets which are relatively distant from the detector's centre. These poor estimates cause the target to cross the x_rot axis, which results in a change in sign of the beta drive signal. For example, consider the development of the estimates of the angles Δθ and Δβ for imaged target points which require only a change in the θ drive angle in order to move the target to the detector's centre: due to the control strategy linearization, the estimated value of Δβ is not zero, and the image will cross a switching line as the target is driven towards the detector's centre. The frequency of these high frequency components is well below the cutoff frequency of the camera, and therefore the camera imaged position is representative of the target position (little phase lag). The sequential positions of the imaged target trace a circle of varying radius over the detector surface.
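An ideal circular image trace has constant radius; the high-frequency components described above appear as ripple on the recovered radius √(x² + y²). A minimal sketch (Python; the sampling values are illustrative):

```python
import math

def trace_radius(x_mm, y_mm):
    """Radial distance of the imaged target from the detector centre:
    the square root of the sum of squares of the PSD x and y positions."""
    return math.hypot(x_mm, y_mm)

# Ideal circular trace x = r*cos(g), y = r*sin(g) sampled at 1 ms for 1 s.
r = 2.0                                   # mm, max image radius in the 40 rpm test
omega = 2.0 * math.pi * 40.0 / 60.0       # rad/s at 40 rpm
samples = [(r * math.cos(omega * k * 1e-3), r * math.sin(omega * k * 1e-3))
           for k in range(1000)]
radii = [trace_radius(x, y) for x, y in samples]
```

For the ideal trace the recovered radius is constant; in the recorded data the controller's sign switching superimposes a radius variation on this baseline.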
The maximum radius to the target image during the 40 RPM test is less than 2 millimeters, which corresponds to a tracking lag, due to the mirrors alone, of 2.12 cm. The maximum radius during the 150 RPM test is close to 5.5 millimeters, which corresponds to a tracking lag of 5.83 cm. Note that the image of the target comes dangerously close to falling off the edge of the position sensitive detector during the 150 RPM test (PSD size 13 mm × 13 mm). The system was not able to track the target for angular velocities much above 150 rpm. Angular velocities of 150 rpm, or equivalently 15.7 radians per second, correspond in the given experimental setup to maximum target velocities of 3.68 meters per second at an offset distance of 74 cm. Consequently, the system specification for maximal tracking velocity is:

    v_tracking = 4.98 · d_radial  meters/second   (4.25)

where d_radial is the radial distance, in meters, from the target to the mirror coordinate frame.

Chapter 5 Conclusions

A novel point ranging and tracking instrument has been proposed, and a single tracking station and associated target have been assembled and tested. The results of static and dynamic testing performed on separate components of the system (camera and gimbaled mirror system) and on the integrated system as a whole have been presented. These results confirm the suitability of the proposed instrument for the intended application of endpoint tracking of large work volume manipulators. Limitations of the proposed instrument, and suggestions for the minimization of the associated problems, have been presented; for example, redundant target tracking stations can be used to reduce the missing parts problem. Factors which make the designed instrument particularly suited to the intended application are:

1. The developed target tracking instrument is capable of tracking the target in a large working volume (Figure 5.53); therefore few tracking stations are required in order to track the target over a wide area. The bounds of this large working volume can be further expanded by increasing target intensity and by increasing motor excursion angles.

[Figure 5.53: System Working Volume]

2. Manipulator static endpoint positioning resolutions of approximately 0.1 degree in each of the theta and beta axes are achievable (Figures 4.44, 4.45 and 4.46).

3. The instrument is able to track targets at velocities up to 4.9 meters per second at a standoff distance of 1 meter from the mirror coordinate frame. This feature allows the instrument to track rapidly moving manipulators. The fast response of the system makes the endpoint tracking instrument suitable for varied applications, including endpoint control of flexible manipulators and manipulator dynamic studies.

4. Each target tracking station is modular and uses commonly available electronic parts; therefore the cost of production is expected to be relatively low (camera hardware cost approximately $600, not including the detector).

5.1 Contributions

I developed and constructed the camera and target instrumentation. The camera instrumentation has the following special features:

1. the carrier is recovered from the incoming signal, so that no hardwire connection is required from the target to the target tracking station;

2. the ability to dynamically change the instrument's channel gains allows the camera to track the target over a wide range, because the system can determine imaged position for a wide range of received intensities; and

3. the phase sensitive detector filters the contribution of the background light from the image.

I derived and tested a simple control strategy for driving the image of the target to the detector's centre. This control strategy allowed integration of the individual system components into a complete working instrument.
5.2 Suggestions For Further Work

Further work could include the development of a dual scanner system to complete the triangle of the triangulation system. Suggested improvements to the system include:

1. the use of wider emission angle multi-element emitters in the target circuitry, such as the Hamamatsu L2168, which would increase the allowable orientation of the target relative to the tracking station and would increase the working range of the camera;

2. the use of scanner motors with larger excursion angles, to increase the system's working volume;

3. lowering the spring constant and friction coefficient of the springs on the beta drive axis, to improve system response;

4. the use of high precision angular encoders for motor angle sensing; the present encoders have nonlinearities and are susceptible to noise because of their analog nature;

5. the use of a pin-cushion type detector, which would lower camera resolution for imaged target points away from the detector's centre but can be obtained at substantially lower cost;

6. lowering the preamplifier gain and using filters with a sharper cutoff within the phase sensitive detector, so that the camera can be used with higher background levels of light; this would help prevent system washout, where output levels drift to the power supply rails when background light levels are very high; and

7. replacing the Ironics processor with an independent controller integrated circuit, since target tracking control is simple and requires little outside information.

The developed system could be used with a fixed plane mirror in a known location to perform static triangulation. Alternatively, an array of targets with a known geometry could be used to perform triangulation on a stationary target.