HIGH RESOLUTION WIDE ANGLE OPTICAL POSITION DETECTOR

by

CHANG KIAN TAN

B. Eng. (Electrical Engineering), Technical University of Nova Scotia, 1991

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (DEPARTMENT OF ELECTRICAL ENGINEERING)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April, 1994
© Chang Kian Tan, 1994

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada
Date: 21st April, 1994

Abstract

A wide angle, high resolution position sensing system for telerobotic system endpoint tracking and position calibration has been developed and tested. Compared to existing commercial products, the system developed has the following important features: (a) wide angle detection without any moving parts; (b) high resolution; (c) fast response; (d) low cost; and (e) robustness.

The system consists of a light source, a simple optical mask, a CCD camera and a basic signal processing system. The optical mask contains specially arranged pinholes, each of which is dedicated to sensing the position of the light source when it is within a small designated solid angle. The solid angles associated with adjacent pinholes overlap sufficiently to ensure continuity of the position sensing. The position is sensed by processing the pattern of the few light spots cast through the pinholes on the mask onto the CCD sensor. Depending on the pinhole arrangement, the signal processing needed to determine the light source angular position can be cut down significantly. In general, the optical mask can be of any convex shape with the pinholes arranged in any known and easily processed pattern.
Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgments
1 Introduction
1.1 Background and Motivation
1.2 Overview of Existing Sensing Methods
1.2.1 Interference Method
1.2.2 Tracking Method
1.2.3 Lens Focusing Method
1.3 Objective of This Research
1.4 Applications
2 Preliminary Development
2.1 Multi-Sensor Multi-Lens System (MSMLS)
2.1.1 Potential Problems
2.2 Single Sensor with Multi-Lens System (SSMLS)
2.2.1 Potential Problems
2.3 Multi-Pinhole Single Sensor System (MPSSS)
3 Multi-pinhole Mask Design Theory
3.1 Coded-Aperture — History
3.1.1 Multi-Pinhole Mask — Rings
3.1.1.1 Circular Ring Projection
3.1.2 Multi-pinhole Mask — Pinholes
3.1.2.1 Pinhole Camera — Theory of Operation
4 System Design
4.1 Target Subsystem
4.2 Receiver Subsystem
4.2.1 Multi-Pinhole Mask
4.2.2 Charge-Coupled Device (CCD) Camera
4.2.3 Frame Grabber — DIGICOLOR
5 System Implementation
5.1 Image Processing
5.1.1 Thresholding and Median Filtering
5.1.2 Centroid Computation
5.1.2.1 Weighted Centroid Estimator
5.2 Sensing Process
5.2.1 Curve Fitting
5.2.1.1 Existence of a Solution
5.3 Calibrating Process
6 Experimental Results
6.1 Comparison of Camera Lens and MPSSS
6.2 Processing Time
6.3 Resolution Testing
6.4 System Limitations and Suggestions for Improvement
7 Conclusion
7.1 Summary
7.2 Contributions
7.3 Suggestions for Future Work
Bibliography
Appendix A Plane Mask
Appendix B Cylindrical Mask
Appendix C How To Make The Multi-Pinhole Mask
Appendix D Multi-pinhole Diagram
Appendix E Samples of Captured Image and Curve Fitting

List of Tables

4.1 Javelin JE 3362 Chromachip Solid-State Color Camera Characteristics
5.2 The Computed Light Spot Center Using Different Threshold Values — Centroid Estimator
5.3 The Computed Light Spot Center Using Different Threshold Values — Weighted Centroid Estimator
6.4 Results of Sub-pixel Resolution of Individual Rings

List of Figures

1.1 Beams in Amplitude Comparison Monopulse Radar
1.2 Smith Tracking Station and Target
1.3 Wide Angle Pinhole Camera
2.4 The Relationship Between Pinhole-to-Sensor Distance and Sensor System Resolution
2.5 Multi-Sensor Multi-Lens System
2.6 Single Sensor with Multi-Lens System
2.7 Example Barrel Distortion
2.8 The Multi-Pinhole Single Sensor System
3.9 Coded-Aperture Imaging System
3.10 Multi-Pinhole Mask — Rings
3.11 Projection of a Circular Ring
3.12 Image Captured Using a Pinhole With a Radius of 0.5 mm
3.13 Image Captured Using a Pinhole With a Radius of 0.25 mm
3.14 Image Captured Using a Pinhole With a Radius of 0.20 mm
4.15 Target Subsystem Block Diagram
4.16 JE 3362 Spectrum Sensitivity
4.17 Receiver Subsystem Block Diagram
5.18 Implementation Block Diagram of MPSSS
5.19 Block Diagram of the Sensing Process
5.20 Plot for |A| with x1 = x2 and y1 ≠ y2
5.21 Plot for |A| with x1 ≠ x2 and y1 ≠ y2
5.22 Pinhole Calibration Geometry
6.23 Error of the Multi-Pinhole Single Sensor System vs. Normal Lens Sensing System
6.24 Image Processing CPU Time With Thresholding and Filtering
6.25 Image Processing CPU Time Without Thresholding and Filtering
6.26 Ring 1 Resolution Tests — Light Spot 1
7.27 System Working Volume
A.28 Plane Mask
B.29 The Semi-Cylindrical Mask
D.30 A Multi-pinhole Diagram
E.31 Captured Image With Spikelike Noise When the Target Light Source Is at 0° Azimuth and 90° Elevation
E.32 Captured Image When the Target Light Source Is at 90° Azimuth and 40° Elevation
E.33 Captured Image With Non-uniform Illumination When the Target Light Source Is at 45° Azimuth and 45° Elevation

Acknowledgments

I would like to thank my supervisors, Professor C.C.H. Ma and Professor Peter Lawrence, for their guidance and support. I am grateful to them for introducing me to the challenging world of robotic sensors, vision and control. During my thesis work, Dr. Ma and Dr. Lawrence gave me many helpful comments that often redirected my thinking and my research emphasis. Dr. Ma, especially, devoted much of his valuable time to this work. Without him and Dr. Lawrence, this thesis would not have been possible.

I would also like to thank Professor D. Kirkpatrick for his contribution to this research, as well as all my friends and fellow students who have made my study here in Vancouver pleasant. Finally, I would like to thank my family for being so supportive during my study in Canada.

Chapter 1
Introduction

1.1 Background and Motivation

This thesis documents a research project motivated by the desire to develop a high quality direct robot end point sensor. Over the past several years, various robot end point sensors have been developed in Europe, Japan, and the United States. Typical areas of application of robot end point sensors are: monitoring the position of a rock drilling machine, controlling the end point position of a manipulator arm, and positioning radioactive materials in atomic reactors where high precision position sensors are needed.

For a small rigid manipulator arm, highly precise position measurements can be achieved by simply installing high precision linear or angular encoders on the joints. For large and flexible robots such as excavators or space manipulators, however, end point positioning through joint angle sensing remains a problem. The existing joint angle sensing systems for large and flexible equipment are usually both fragile and expensive. In addition, they are usually difficult to install, and any flexing of the links under heavy loads makes joint angle measurements inadequate for computing the end point position. For these and other reasons, alternative sensing methods are being explored.

1.2 Overview of Existing Sensing Methods

The design of any position sensor involves certain basic considerations. Of prime concern is whether the field of view and the range of the device will be adequate for the environment it is sensing. To improve on existing position sensors, a newly designed sensor must first provide the same or a higher measurement accuracy and sensing resolution. Second, its data collection rate must be appropriate for the speed of the observed system. Third, the improved sensor must be easy to install, maintain and interface, be robust to environmental changes, and be less expensive than the current model. And fourth, ideally the new sensor should be small and easily manufactured.
Over the past twenty years, various methods have been developed for sensing the 3-D position of a target in space. All these methods require knowledge of the angular position of the target with respect to a predefined spherical coordinate frame [1]. From the angular position of the target, methods such as triangulation and time of flight are used to determine the range and hence the 3-D position of the target. Therefore, to have a high resolution 3-D position sensing system, one must first design a high resolution angular position sensor. A survey of existing position sensing methods was done to determine the state of the art in angular position sensing. In general, angular position sensing methods consist of interference [2, 3], tracking [4, 5] and lens focusing [6, 7] methods. A description of each method, including its benefits and disadvantages, is presented below.

1.2.1 Interference Method

Position measurement systems based on an interference method have received considerable attention in recent years [2]. In the interference method, an active source such as an antenna is used to propagate a radar wave in the general direction of the target of interest. The angular position of the target with respect to the antenna coordinate frame is deduced from the frequency or amplitude change of the reflected waves. The absolute angular position of the target is determined by translating the angular position of the target in the antenna coordinate frame into a predefined coordinate frame of interest (base frame).

A classic example of an angular position detector based on the interference method is the amplitude comparison monopulse radar, in which the angular position of the target is deduced from the amplitude change of the reflected waves. For simplicity of explanation, begin by assuming there are two radar antennas transmitting two radar beams, A and B, at slightly different angles as shown in Figure 1.1. The crossover axis of the beams, instead of pointing precisely at the target, points in a direction slightly off to one side of the target. If the target is on the crossover axis, the reflected waves (echoes) received by both antennas are equal in amplitude. If the target is closer to the center of beam A, the echo from beam A will be stronger in amplitude than that from beam B. Based on the amplitude difference between the two echoed beams, the angular position of the target relative to the radar crossover axis can be determined [8].

Figure 1.1 Beams in Amplitude Comparison Monopulse Radar

In practice, the transmitting antennas, even if adjusted initially to transmit beams of equal gain and phase, will vary unequally as a function of time and environmental temperature conditions. This will result in large drifts in the angular position measurements and the need for frequent tuning.
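As a toy illustration of the amplitude comparison just described, the following sketch forms the normalized difference of the two echo amplitudes: the sign picks the side of the crossover axis and the magnitude grows with the off-axis angle. This is a common way of processing monopulse echoes, but it is not taken from the thesis, and the function name is hypothetical; converting the ratio to an absolute angle depends on the actual beam patterns.

```python
def monopulse_error(echo_a, echo_b):
    """Normalized amplitude difference of the two echoes.

    Zero when the target sits on the crossover axis; positive when the
    target is closer to the center of beam A, negative for beam B. Only
    the sign and the monotonic trend are meaningful here, since the
    mapping to degrees depends on the beam shapes.
    """
    return (echo_a - echo_b) / (echo_a + echo_b)

print(monopulse_error(1.0, 1.0))   # on the crossover axis -> 0.0
print(monopulse_error(1.2, 0.8))   # closer to beam A -> positive
```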
1.2.2 Tracking Method

A second sensing method, tracking, is the most widely used technique in optical and radar sensing [5]. In tracking, the angular position of a point (target) is deduced from the direction of the observed target with respect to a predefined spherical coordinate frame. An example of an active tracking system is the light spot detector developed by Smith, shown in Figure 1.2 [4].

Figure 1.2 Smith Tracking Station and Target

This tracking system consists of a single tracking station and a target light source. The tracking station is composed of a mirror and an infrared Position Sensitive Detector (PSD). The mirror in the tracking station tracks the target light source by orienting its pitch-yaw angles so that the image of the target light source is centered on the Position Sensitive Detector. The angular position of the target light source is measured based on the pitch-yaw angles of the mirror with respect to a predefined mirror-centered spherical coordinate frame.

The drawback with this system is the extensive operational maintenance required, especially for the moving parts, such as motors and springs. This is a significant factor for consideration because moving parts generally have a short life span. Also, a very cooperative target light source is required, because once the target light source is lost, finding and locking onto it again while it is in motion requires a scan of the whole work space.

1.2.3 Lens Focusing Method

The oldest method in angular position detection is the lens focusing method [7]. This method deduces the pitch-yaw angles of the target from the measured incidence angles of the captured target image with respect to the optical axis of the lens used. To achieve a wide sensing range, a fisheye lens which can see a 2π steradian field of view is used to map a hemispheric field onto a Charge Coupled Device (CCD) array sensor [6, 9].

A classic example of the use of a fisheye lens is presented by Wood and Bound [10]. Bound used a hemispheric lens with a pupil at the center of the lens to map a hemispherical field of view to a film as shown in Figure 1.3. Rays from a 180° field of view (FOV) are bent into an 84° cone due to refraction at the air-to-glass surface. When the light rays exit from the glass-to-air surface after traveling through the glass, they are everywhere perpendicular to the hemispherical surface. In this way, the rays undergo no refraction when they exit the glass-to-air surface.

Although the geometric properties of the 180° field of view have been distorted by the fisheye lens, the physical relations between the target and its surroundings are still maintained in the image. The target and its image have the same azimuth angle (or roll angle) with respect to the optical axis. The radial angle (β) of the target and the target image location on the film have a one-to-one mapping relationship defined as follows [6]:

$$r = f \tan\beta$$

where f is the focal length of the lens used, and r is the distance of the target image with respect to the center of the image plane.
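A minimal sketch of this mapping and its inverse, assuming the r = f tan β relation above; the function names and the sample focal length are hypothetical.

```python
import math

def image_radius(beta_deg, f):
    """Radial image position r = f * tan(beta) for a target at radial
    angle beta (degrees) from the optical axis."""
    return f * math.tan(math.radians(beta_deg))

def radial_angle(r, f):
    """Inverse mapping: recover the target's radial angle (degrees)
    from the measured image radius r."""
    return math.degrees(math.atan2(r, f))

f = 5.0                            # hypothetical focal length, mm
r = image_radius(30.0, f)          # target 30 degrees off axis
print(r, radial_angle(r, f))       # ~2.89 mm, 30.0 degrees
```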
A film is used in Figure 1.3 to capture the image, which is only good for passive sensing. To sense the target position continuously in real time, a CCD array sensor is usually used to capture the target image, and the captured image is then processed and analyzed to deduce the target angles. The system cost and speed of image acquisition in this type of sensing system depend largely on the image processing hardware used. As parallel image processing hardware becomes more reasonably priced, lens focusing together with a CCD array sensor has great potential to become the primary target position sensing system.

However, this system does have an inherent limitation. Every sensor has a finite resolution, and in a lens focusing sensing system, the resolution is inversely proportional to the field of view. The fisheye lens focusing method allows a target in a wider field of view to be sensed, but as a result it reduces the resolution of the sensing system. When this problem is encountered in a tracking based sensing system, wide angle shaft position sensors are installed to solve the problem.

1.3 Objective of This Research

A comparison of the above devices used in the applications of angular position sensing reveals these advantages and disadvantages:

1. Amplitude comparison monopulse radar is relatively expensive due to the expensive high frequency hardware required [11].
2. Smith's "tracking" system is capable of making precise measurements but has mechanical moving parts.
3. Fisheye "lens focusing" devices have a wide field of view, but have poor sensing resolution.

To overcome these shortcomings in current devices, a new sensing system was proposed and developed as the objective of this research. This new system has the following features:

• High resolution
• Wide angle
• Rugged and maintainable structure with no moving parts
• Real-time position measurement capability
• Small size, light weight, low cost and low computing power requirement

1.4 Applications

The newly developed sensing system detects the pitch and yaw angles of a point light source at high resolution over wide angular ranges. It can be used for those applications requiring high resolution positioning. In the field of robotic control, this system can be used as a position feedback device for telerobotic system endpoint tracking and for link position calibration. It can also be used for aligning a spacecraft during docking preparation and supervising the assembly of a space station. In the mining industry, this sensing system can be used to monitor the position and orientation of a rock drilling machine in an open pit. Other possible uses include surveying, object positioning and motion studies, biomedical gait measurements, and aircraft trajectory detection and tracking.

Chapter 2
Preliminary Development

This chapter presents the evolution of the ideas leading to the development of the new high resolution sensing system. In its preparation, four essential design requirements had to be met. First, the sensing system must have no mechanical moving parts. Second, it should require low computing power. Third, the sensing system must have high angular resolution. Fourth, it must have a wide field of view.

The first two of these requirements are largely concerned with costs. Mechanical moving parts, such as moving mirrors or other mechanical scanning devices, can be significant problems for a sensing system because they tend to have shorter life spans than solid state electronics, especially in an industrial environment. Thus, mechanical moving parts involve maintenance as well as replacement cost. By contrast, operating costs are the issue for the low computing power requirement, since computing power is still relatively expensive compared to the other costs of the sensing system.
The third requirement, high angular resolution, is concerned with the smallest change in the sensed angular position that a sensing system can report [12]. Resolution is also defined as the variance of the measurements due to the system noise [13]. For the purpose of calculating the resolution, let a measurement of the angular position ($\hat{a}_i$) be defined as:

$$\hat{a}_i = a + n_i$$

where $a$ is the accurate angular position and $n_i$ is the detected noise. Noise in electronic instruments is largely thermal noise. For simplicity of analysis of the sensor system to be presented, this noise will be assumed to be Gaussian distributed with zero mean and a standard deviation of $\sigma$. The mean ($\bar{a}$) of a finite number of measurements can be computed as:

$$\bar{a} = \frac{1}{k}\sum_{i=1}^{k}\hat{a}_i$$

where the standard deviation of $\hat{a}_i$ would be $\sigma$ if an infinite number of measurements were taken. For a system with Gaussian noise,

$$a - 2\sigma < \hat{a}_i < a + 2\sigma \qquad (2.1)$$

with 95 percent probability. The resolution will be defined as $\sigma$ in this thesis. It is noted that if $\sigma$ is smaller than the CCD pixel resolution, the system resolution will be determined by the CCD pixel resolution.

For a sensor which senses the target position through a lens or a pinhole, the field of view is defined as the angular space (or the sensing volume) from which the sensor can receive the signal. Given a round sensor with a fixed diameter $D$, the field of view ($\theta$) of a lens is related to its focal length $d$ by:

$$\theta = 2\arctan\left(\frac{D}{2d}\right)$$

The lens is assumed to be focused at infinity. However, the field of view for focused distances other than infinity may be found by performing the calculation with a focal length equal to the distance between the lens and the image. In the case of a pinhole camera, $d$ is defined as the distance between the pinhole and the sensor. It is noted that if the sensing system has a relative angular resolution of $\lambda$, the absolute angular resolution ($\Delta\theta$) of a single sensor and lens system is related to the field of view by:

$$\Delta\theta = \lambda\theta \qquad (2.2)$$

where $\Delta\theta = \max\{\Delta\theta_i\}$, and $\Delta\theta_i$ is the resolution computed at location $i$ within the field of view. In other words, if one considers that the resolution of the sensor itself is, for example, one part in the number of pixels across it, then in angular resolution terms this becomes $\Delta\theta/\theta$, where $\Delta\theta$ corresponds to the angle subtended by a pixel and $\theta$ is the whole field of view.

Thus, the fourth design requirement — that the new system have a wide field of view — is problematic. For a given relative angular resolution, a wide field of view and high resolution (small $\Delta\theta$) are hard to achieve simultaneously for a single sensor and lens system, because the absolute angular sensing resolution of the system is proportional to its field of view, as shown in equation 2.2. Hence, the absolute angular sensing resolution of such a sensing system cannot be improved without trading off the field of view, as shown in Figure 2.4. With a fixed relative angular resolution, the absolute angular resolution of the sensing system is increased by increasing the distance between the pinhole and the sensor from $d_1$ to $d_2$. However, the field of view decreases from $\theta_1$ to $\theta_2$.

Figure 2.4 The Relationship Between Pinhole-to-Sensor Distance and Sensor System Resolution
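The tradeoff of equation 2.2 and Figure 2.4 can be made concrete with a few lines of arithmetic. The sketch below is illustrative only: the 8.8 mm sensor width is taken from Table 4.1, while the 512-pixel relative resolution and the two pinhole distances are assumptions.

```python
import math

def field_of_view(D, d):
    """Field of view (radians) of a sensor of diameter D at distance d
    behind a pinhole: theta = 2 * arctan(D / (2 d))."""
    return 2.0 * math.atan(D / (2.0 * d))

D = 8.8                  # sensor width in mm (Table 4.1 scanning area)
rel_res = 1.0 / 512      # assumed relative resolution: one pixel in 512

for d in (9.0, 18.0):    # pinhole-to-sensor distances d1 and d2, in mm
    theta = field_of_view(D, d)
    abs_res = rel_res * theta                 # equation 2.2
    print(f"d = {d:4.1f} mm: FOV = {math.degrees(theta):5.2f} deg, "
          f"abs. resolution = {math.degrees(abs_res) * 3600:6.1f} arcsec")
```

Doubling d roughly halves both the field of view and the absolute angular resolution, which is exactly the tradeoff Figure 2.4 illustrates.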
Nevertheless, during the initial design phase of this project, a multi-sensor, multi-lens system, which met the above critical requirements of a high resolution, wide angle, low computing power sensing system without any mechanical moving parts, was investigated.

2.1 Multi-Sensor Multi-Lens System (MSMLS)

The proposed design to meet the four criteria was a multi-sensor, multi-lens system (MSMLS). This design provides high angular resolution and a wide field of view by subdividing the sensing world into small pieces via the sensors and lenses, as shown in Figure 2.5. The sensors, which can be Position Sensitive Detectors (PSDs) or Charge Coupled Device (CCD) array sensors, are used to obtain the position of the target light source on their surface. The lenses, placed in front of the sensors, transform the angular space of the sensed environment by mapping the pitch-yaw angles of the target light source onto a 2-D sensor plane.

Figure 2.5 Multi-Sensor Multi-Lens System

In this design, the lenses and sensors are used to subdivide the sensing world so that the resolution of the sensing system can be increased without employing any mechanical scanning devices. Each lens and sensor unit is dedicated to sensing the position of the target when it is within that unit's piece of the field of view. Such subdivision makes it possible to increase the angular resolution of the sensing system and still maintain the wide field of view. The relative angular resolution of the sensing system ($\lambda$) can be defined as:

$$\lambda = \frac{\max\{\Delta\theta_i\}}{\sum_{i=1}^{n}\theta_i} \qquad (2.3)$$

where $\lambda_i$ is the relative angular resolution of each unit, $\theta_i$ is the field of view of each unit, $\Delta\theta_i = \lambda_i\theta_i$ is the absolute angular resolution of each unit, and $n$ is the number of units. The field of view of the sensing system ($\theta$) is defined as:

$$\theta = \sum_{i=1}^{n}\theta_i \qquad (2.4)$$

(This analysis assumes there is no overlapping between the fields of view of the sensors and that all the sensors are identical.) From equation 2.3 and equation 2.4, it is obvious that the MSMLS has higher resolution and a wider field of view compared to a single pinhole sensor and lens system.

2.1.1 Potential Problems

This system, however, has several potential problems. One of the problems is the continuity of the sensing range across several sensors. This problem can be solved by overlapping the fields of view of the sensors. Hence, equation 2.3 and equation 2.4 become:

$$\theta = \sum_{i=1}^{n}k_i\theta_i$$

and

$$\lambda = \frac{\max\{\Delta\theta_i\}}{\sum_{i=1}^{n}k_i\theta_i} = \frac{\max\{\Delta\theta_i\}}{\theta} \qquad (2.5)$$

where $k_i < 1.0$ is the scaling factor of the effective field of view. As shown in equation 2.5, the field of view and the relative angular resolution of the sensing system are reduced as the fields of view of the sensors overlap.
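A small numeric sketch of equations 2.3 through 2.5 for n identical units; all numbers are assumptions, chosen only to show how overlap (k < 1) shrinks the aggregate field of view while leaving the per-unit absolute resolution untouched.

```python
def msmls(n, unit_fov_deg, unit_rel_res, k=1.0):
    """Aggregate field of view and relative resolution of n identical
    lens+sensor units (equations 2.3-2.5). k < 1 models overlap."""
    abs_res = unit_rel_res * unit_fov_deg    # per-unit absolute resolution
    total_fov = n * k * unit_fov_deg         # equation 2.4 (2.5 with k)
    return total_fov, abs_res / total_fov    # theta, lambda

print(msmls(8, 20.0, 1.0 / 512))             # no overlap
print(msmls(8, 20.0, 1.0 / 512, k=0.9))      # 10 % overlap: smaller theta
```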
However, adding additional sensors contributes another problem: construction cost. Construction cost, including hardware, is a major consideration when developing a new sensing system, because the newly developed sensing system is intended for industrial applications, and one of the prime requirements in industrial tool development is to fulfill the low cost / high performance objective. The cost of the proposed sensors (the Hamamatsu PSDs and Texas Instruments CCD array sensors) is relatively high in comparison with other hardware such as the lenses. Therefore, there is an advantage in minimizing the number of sensors used.

Multiple sensors also drive up costs because of the amount of calibration required by the sensing system. It is well documented that the amount of calibration needed for the sensing system increases with the number of sensors and lenses used. Therefore, reducing the number of sensors and lenses used can also reduce the calibration required and cut down the overall system cost. In order to reduce the hardware cost, a modification of the MSMLS design was carried out by reducing the number of sensors to one.

2.2 Single Sensor with Multi-Lens System (SSMLS)

A single sensor multi-lens system (SSMLS) consists of multiple optical lenses, an optical multiplexer, a multi-faced mirror and a sensor, as shown in Figure 2.6.

Figure 2.6 Single Sensor with Multi-Lens System

In this system, every optical lens is designed to have its own narrow field of view. The lenses are pointed at different aspects such that their aggregate field of view is complete. Through the lenses, the images are reflected onto a single sensor by a multi-faced mirror. Each image produced by a single lens is mapped onto the entire sensor. To ensure only one image from one lens gets to the sensor at a time, the images are optically multiplexed using an optical multiplexer, a device which allows the light rays from the multiple lenses to pass through one at a time. An optical multiplexer can be constructed from a Seiko G645F transmissive Liquid Crystal Display (LCD) [14]. The Seiko G645F consists of an array of liquid crystals, each of which can be selectively turned on to allow light rays to pass through it. The light rays pass through to the multi-faced mirror, which is a mosaic of hexagonal mirrors. Each hexagonal mirror then reflects the image of the target onto the sensor. In this way, the sensor assembles a unified, complete image of the field of view.

2.2.1 Potential Problems

Although this system obviously cuts down hardware cost, it does not reduce the amount of calibration required by the sensing system. Calibrations are necessary to compensate for the non-linear effects introduced by the lenses, particularly the barrel distortion shown in Figure 2.7.

Figure 2.7 Example Barrel Distortion

A linear model of each lens, defined as follows, can be built to compensate for the distortions [15]:

$$x_l = a_x x_m + b_x x_m r^2 = a_x x_m + b_x\left(x_m^3 + x_m y_m^2\right)$$

$$y_l = a_y y_m + b_y y_m r^2 = a_y y_m + b_y\left(y_m^3 + y_m x_m^2\right)$$

where $x_l$ and $y_l$ are the x and y coordinates of the linear model; $a_x$, $b_x$, $a_y$ and $b_y$ are the model parameters; $r$ is the length of the radius to the point; and $x_m$ and $y_m$ are the detected x and y coordinates.
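A minimal sketch of that correction, assuming the model above; the coefficient values are hypothetical and would come from calibrating each lens.

```python
def undistort(xm, ym, ax, bx, ay, by):
    """Map a detected (distorted) point (xm, ym) to the linear-model
    point (xl, yl) using the cubic radial model quoted from [15]."""
    r2 = xm * xm + ym * ym                 # r squared
    xl = ax * xm + bx * xm * r2            # = ax*xm + bx*(xm**3 + xm*ym**2)
    yl = ay * ym + by * ym * r2            # = ay*ym + by*(ym**3 + ym*xm**2)
    return xl, yl

# Hypothetical coefficients for one lens:
print(undistort(1.20, 0.75, ax=0.98, bx=0.01, ay=0.98, by=0.01))
```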
However, a complex procedure is required to calibrate all the lenses. To overcome this problem, a system was proposed to replace the multiple lenses with multiple pinholes, as a pinhole is a linear optical device which is free of optical distortions. In order to keep costs lower, the newest design also considered modifying the multi-faced mirror, as its complex assembly and calibration procedures are expensive. Modifications of the SSMLS were carried out by replacing the optical lenses with pinholes and by leaving out the multi-faced mirror and the optical multiplexer. These changes resulted in a low cost, multi-pinhole single sensor system.

2.3 Multi-Pinhole Single Sensor System (MPSSS)

The multi-pinhole single sensor system (MPSSS) consists of a two dimensional CCD array sensor enclosed within a hemispherical mask. The hemispherical mask contains a special arrangement of pinholes. If designed carefully, pinholes can replace expensive optical lenses to provide a low cost, robust, infinite focus depth position sensing system. (A detailed discussion of the pinhole theory of operation is presented in Chapter 3.)

In the MPSSS, each pinhole is designed to sense the position of the target light source when it is within the pinhole's designated field of view, as shown in Figure 2.8. The fields of view shared by adjacent pinholes overlap sufficiently to provide continuity of position sensing. The position of the target light source relative to the sensor center is obtained through processing the detected image spots which pass through the pinhole mask.

Figure 2.8 The Multi-Pinhole Single Sensor System

Chapter 3
Multi-pinhole Mask Design Theory

The previous chapters have discussed the development of the multi-pinhole single sensor system through three design phases. To understand the operation of the MPSSS fully, it must be studied in its component parts. This chapter considers the multi-pinhole mask in detail.

This mask contains a group of pinholes. Every pinhole has its own field of view of a different part of the sensed environment. Because each image of the sensed environment is mapped onto the entire sensor, the angular resolution of each image is basically dependent on the resolution capability of the sensor. Hence, the overall resolution is improved above the sensor resolution by approximately the number of images that can be sensed through the multiple pinholes. The mask works to increase the FOV while maintaining a high degree of angular resolution of the light source position.

The pitch-yaw angles of the light source are determined by studying the positions of the detected light spots on the sensor. From the detected light spots, projection equations are used to determine the light source location. Projection equations are a set of straight line equations used to mathematically represent the mapping of the light source onto the sensor via the pinholes. This equation is defined as:

$$\frac{x_p - x_h}{x_o - x_h} = \frac{y_p - y_h}{y_o - y_h} = \frac{z_p - z_h}{z_o - z_h} \qquad (3.6)$$

where $(x_p, y_p, z_p)$ is the position of a detected light spot on the CCD sensor, $(x_o, y_o, z_o)$ is the position of the light source, and $(x_h, y_h, z_h)$ is the position of the pinhole.
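As a sketch of equation 3.6, the following computes where the line through a light source and a pinhole meets the sensor plane z = 0; the numbers in the example are hypothetical (millimetres).

```python
import numpy as np

def project_through_pinhole(source, pinhole):
    """Intersect the line through the light source and a pinhole with
    the sensor plane z = 0 (the collinearity of equation 3.6)."""
    s = np.asarray(source, dtype=float)
    h = np.asarray(pinhole, dtype=float)
    t = -h[2] / (s[2] - h[2])       # parameter where the line reaches z = 0
    return h + t * (s - h)          # detected spot (xp, yp, 0)

# Hypothetical geometry: source ~1 m away, pinhole 9 mm above the sensor.
print(project_through_pinhole((200.0, 100.0, 1000.0), (3.0, 1.5, 9.0)))
```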
In order to determine the pitch-yaw angles of the light source from the positions of the light spots on the sensor, one needs to identify which pinholes these light spots projected through. If the pinholes on the mask were arranged randomly, the number of equations needed to determine the pitch-yaw angles of the light source would depend on the number of detected light spots and the number of pinholes, and with multiple pinholes a huge set of equations is needed. For example, if five light spots are detected when the light source is within the sensing range and there are 20 pinholes, up to $\frac{20!}{(20-5)!} = 1{,}860{,}480$ sets of equations may need to be solved to find the intersection point which defines the pitch-yaw angles of the light source. In theory, if the pinholes are arranged randomly, only one of these many sets of straight line equations will yield a solution, while the rest of them will have no solution. Finding the solution out of this number of sets of equations, with the numerical precision needed, would be an insurmountable problem in practice.

This problem motivated us to look into the coded-aperture imaging principle. In this method, the pinhole locations are specially arranged in a pattern that allows certain patterns of light spots to be detected depending on the light source location.

3.1 Coded-Aperture — History

The principle of coded-aperture imaging was first suggested by Dicke in 1968 [16]. His innovation involves replacing the lens in a conventional imaging system by a "coded aperture" which contains a carefully chosen pattern of transparent and opaque regions. This idea increases the field of view of a pinhole camera without sacrificing the resolution. When incoming light rays pass through the mask, they are coded by an intensity mask pattern as shown in Figure 3.9. From the detected pattern, the angular position of the light source can be deduced [17, 18].

Figure 3.9 Coded-Aperture Imaging System

The technique for determining the light source location is to cross-correlate the detected pattern $I(x,y)$ with the coded-aperture mask $M(x,y)$. Depending upon the angular position of a moving light source, the detected pattern can be characterized as a shift of the coded-aperture mask, $I(x,y) = M(a-x,\, b-y)$. This implies that the normalized cross-correlation $G(x,y)$ will have a maximum value if and only if the shifted mask pattern exactly matches the detected image on the sensor plane. In practice, the correlation technique requires a large amount of computational power and memory.

$$G(x,y) = \iint I(\zeta,\eta)\, M(x-\zeta,\, y-\eta)\, d\zeta\, d\eta = \iint M(a-\zeta,\, b-\eta)\, M(x-\zeta,\, y-\eta)\, d\zeta\, d\eta$$
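A discrete sketch of this decoding step, assuming binary arrays for the detected pattern and the mask; the peak of the correlation surface marks the shift (and hence the source direction). This is illustrative only, not the processing chain used later in the thesis.

```python
import numpy as np
from scipy.signal import correlate2d

def locate_shift(image, mask):
    """Discrete version of G(x, y): cross-correlate the detected
    pattern with the mask pattern and return the index of the peak,
    which encodes the shift between them."""
    g = correlate2d(image, mask, mode="full")
    return np.unravel_index(np.argmax(g), g.shape)

# Toy demo: a random binary mask embedded in a larger dark frame.
rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=(8, 8)).astype(float)
image = np.zeros((16, 16))
image[5:13, 2:10] = mask            # mask pattern placed at offset (5, 2)
print(locate_shift(image, mask))    # peak index encodes that offset
```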
Thus, the pinholes have approximately the same angular resolution and the same degree of field of view (see Appendix D). The ring pattern is selected because both the detected light spot pattern and the rings, which the light source projects through, can be described using a simple circle equation. The computer power requirements for such straightforward computations are low.  3.1.1.1 Circular Ring Projection To project a light source through a pinhole onto a sensor as shown in Figure 3.11, is equivalent to joining the light source and the pinhole with a straight line and extending the line to the sensor plane. The equation describing such a line was presented previously in equation 3.6.  25  Chapter 3: Multi-pinhole Mask Design Theory (x, J y, z ) oo 6  v*v Figure 3.11 Projection of A Circular Ring  To show that the image produced by a light source projecting through a ring is a circle, consider that the sensor is laying on the z-plane where zv = 0. Hence, equation 3.6 becomes: X  P  ~ xh  Vp  X0 -Xh  yti  •*h  Vo~ Vh  (3.7)  z0 - Zh  Solving equation 3.7 in terms of x^ and yh, yields two equations: Xp Xh =  +  X  °Zo-Zh  Zo  z o~zh  + 11y°z0-zh Zh  Vh  VP  (3.8)  z  .9  Zo-Zh  Thus, multiple pinholes are put together to form a circular ring with a radius of R. This is defined as:  4 + vl = R226  (3.9)  Chapter 3: Multi-pinhole Mask Design Theory  By writing equation 3.9 using the pair of equations in 3.8, it becomes:  /S+*^A2 + (y>+y^\ \  Zo-Zh  /  \  Zo-Zh  2  = R*.  0M)  /  It is shown in equation 3.10 that when a light spot projects through a ring of pinholes, a circular ring pattern will be detected (see Appendix E). This is true if and only if the circular ring plane is parallel to the sensor plane. Because this relationship can be described using a simple mathematical equation, it requires low computation and makes real-time operation possible.  3.1.2 Multi-pinhole Mask — Pinholes Understanding the principle of the ring configuration is the part of understanding the mask design theory. The other part is the conceptual understanding of the pinholes themselves. Instead of a lens, the camera has pinholes that admit light. The pinholes, unlike a lens, offer complete freedom from linear distortion, infinite depth of field, and a very wide angular field of view where image resolution is not a major factor. The major disadvantage of the pinhole camera theory is that the intensity of the light required for the sensor is relatively high compared with a lens, but this can be largely offset by a high sensitivity sensor and a high optical power light source.  3.1.2.1 Pinhole Camera — Theory of Operation A pinhole camera, in its basic form, has pinholes punched through an opaque material. The image of a distant point light source is simply the shadow of the hole — or rather the shadow of the material around the hole [19]. 27  Chapter 3: Multi-pinhole Mask Design Theory  As such, the pinhole must be a smooth circle; burrs or ragged edges will degrade the sharpness of the image and reduce the sensing resolution. For a given pinhole camera, the image sharpness is dependent on the pinhole size used. When the pinhole size is large, the image from a distant point light source is large and displays a diameter equal to the pinhole diameter. The diameter D of the image point light source made by the pinhole can be expressed as [20]:  D = d+—  (3.11) a  where / is the distance between the pinhole and the sensor center; A is the incident wavelength; and d is the pinhole diameter. 
As illustrated in Figure 3.12 through Figure 3.14, the images made by pinholes with radii of 0.5 mm, 0.25 mm and 0.2 mm are approximately 0.4985 mm, 0.2578 mm and 0.2234 mm in diameter, respectively. (Note: all three figures are at the same scale.)

Figure 3.12 Image Captured Using a Pinhole With a Radius of 0.5 mm

Figure 3.13 Image Captured Using a Pinhole With a Radius of 0.25 mm

Figure 3.14 Image Captured Using a Pinhole With a Radius of 0.20 mm

It is noted that the smaller the spots, the finer the detail that can be described in the point source. Therefore, the best pinhole size is the one that produces the smallest image of the point source. Decreasing the pinhole size will increase image sharpness but greatly increase the required exposure time, because there is less light to form the image. In addition, the pinhole cannot be too small or it will cause a fuzzy picture due to diffraction. Hence, for the purposes of the sensor system design in this project, the pinhole radius chosen is 0.25 mm.

Chapter 4
System Design

The previous chapters have reviewed the development of the MPSSS in some detail. This chapter discusses the general configuration and specification of the MPSSS prototype. This sensing system can be divided into two major parts: a target subsystem and a receiver subsystem. The target subsystem is comprised of a light source, and the receiver subsystem is comprised of a multi-pinhole mask, a CCD camera, a frame grabber and a computer. The two subsystems are designed independently of each other, so that no connections are made between them. The operation of each subsystem is explained briefly below.

4.1 Target Subsystem

The target subsystem is an active, moving, visible pulsed light source affixed to the point of interest to be sensed. Light emitting diodes (LEDs) are used to create the light source in this prototype, rather than other devices such as laser diodes, because they have a wider emission angle. They also provide over 50,000 hours of operating life and do not generate the heat that laser diodes do. Another feature of LEDs is that they hold up very well to mechanical shock and vibration. To maximize their optical power output, the LEDs are driven by pulsed currents that have a specified waveform at a certain repetition rate, duty cycle and timing. The duty cycle, frequency and power output of the target subsystem are adjusted using the circuitry of Figure 4.15.

Figure 4.15 Target Subsystem Block Diagram (oscillator with adjustable frequency and adjustable duty cycle, driving the LED through an adjustable power driver)

In the MPSSS prototype, the light source used is a set of Hi-Super bright LEDs. Although the H-3000-L LED is of relatively narrow angle compared to average LEDs, it is readily available and has a high optical output power, with a luminous intensity of 3,000 mcd. It is also of suitable wavelength at peak emission (660 nm) to match well with the CCD array sensor. (An example of the JE 3362 CCD array sensor spectrum sensitivity is shown in Figure 4.16.) These features make it more attractive than some alternative LEDs. To widen the emission angle, an array of LEDs, each pointing at a slightly different angle, was assembled to form a single wide angle light source.
Figure 4.16 JE 3362 Spectrum Sensitivity (relative response in percent vs. wavelength from 400 to 850 nm)

4.2 Receiver Subsystem

As mentioned at the beginning of the chapter, the receiver subsystem is made up of the functional blocks illustrated in Figure 4.17.

Figure 4.17 Receiver Subsystem Block Diagram (multi-pinhole mask over the CCD array sensor; CCD camera; NTSC signal to the frame grabber; computer/workstation)

The blocks perform the following functions:

1. Multi-Pinhole Mask — maps the image of the light source onto the CCD array sensor. For a detailed discussion of the mask, see Chapter 3.
2. CCD Camera — captures the target image and produces a set of National Television Standard Committee (NTSC) video signals which can be processed to obtain the position of the light source with respect to a predefined coordinate frame.
3. Frame Grabber — converts the NTSC video signal from the CCD camera into a computer readable raster image.
4. Computer — is used as a signal processor in computing the pitch-yaw angles of the light source based on the raster image from the frame grabber.

The following paragraphs give a detailed explanation of each block.

4.2.1 Multi-Pinhole Mask

The multi-pinhole mask used in this prototype is designed to have a radius of 10 mm and sits 9 mm above the sensor. The mask has a total of 101 pinholes arranged in 3 rings. Each ring on the mask is separated from the adjacent rings by 3.0 mm, which ensures that the pinhole fields of view on a ring overlap sufficiently with those of the adjacent rings to provide continuity in the sensing range. The radii of the 3 rings are 2.95 mm, 5.64 mm and 7.83 mm. Each pinhole is 0.5 mm in diameter and is separated from adjacent pinholes in the same ring by 1.5 mm, which allows a minimum of four light spots projected through the pinholes to be detected. Every pinhole on the mask is designed to have an angular field of 9.8 degrees measured from its own optical axis, yielding a total field of view of 19.6 degrees for each pinhole. The total field of view of the system can be stated in two ways: it is 40 degrees when measured from the sensor optical axis, or 80 degrees overall. All the above values for the mask are given by computer simulation and need to be calibrated once the mask is constructed and installed.

4.2.2 Charge-Coupled Device (CCD) Camera

The CCD camera is selected because it is superior to the PSDs in its capability to sense the positions of multiple light spots at one time. This capability is needed because the moving light source projecting through the multi-pinhole mask casts multiple light spots on the sensor. By analyzing the multiple light spot positions on the sensor, the light source position can be reconstructed.

The CCD camera consists of a solid-state matrix sensor composed of multiple rows and columns of photosensitive pixels. The pixels are exposed to the light source once every 1/30 of a second, for an aperture time that is a fraction of that period. During this time, the pixels discharge the electrical energy stored inside them in proportion to the energy received from the light source. This allows the sensor to detect the relative intensity of the image, which in turn reveals the actual location of the light source.
At the end of each period, the charges are reset almost instantaneously back to their original values. CCD cameras are available in a variety of sensitivities to light sources, requiring different exposure times and permitting much greater frequency responses. The frequency response of the camera is dependent on the exposure time required to capture the light source energy.

In the prototype, the Javelin JE 3362 Chromachip solid-state color camera is employed to capture the image. This camera delivers a 380 line image with a frequency response of 30 Hz. The image sensor used by the JE 3362 is an HE98241 MOS color image sensor, which has low sensitivity to infrared. Hence, the H-3000-L LED is selected as the light source because it gives pulses in the visible light spectrum. The characteristics of the Javelin JE 3362 Chromachip solid-state color camera are shown in Table 4.1 [21].

Table 4.1 Javelin JE 3362 Chromachip Solid-State Color Camera Characteristics

Color system:                     NTSC standard
Image sensor:                     HE98241 single-layer MOS color image sensor, 576 (H) x 485 (V) pixels
Scanning area:                    8.8 (H) x 6.6 (V) mm²
Horizontal resolution:            380 lines or more
S/N ratio:                        46 dB or higher (luminance channel)
Minimum illumination of subject:  1.5 lux, F1.4, 3200 K

4.2.3 Frame Grabber — DIGICOLOR

The frame grabber is the device between the CCD camera and the computer that facilitates the transfer of image data from the CCD camera to the computer. In general, CCD cameras are made to generate standard analog TV signals so that they are compatible with other commercial products. Image processing and analysis on a computer require a digital image; to obtain it, the frame grabber converts the TV signal from the CCD camera to a digital image, using the horizontal and/or vertical synchronization pulses in the TV signal as reference.

In the prototype, a Datacube DIGICOLOR frame grabber is used to import the image data from the CCD camera into a workstation (computer). The input video signal to the DIGICOLOR from the Javelin JE 3362 is the standard NTSC TV signal (RS-170A), and the output of the DIGICOLOR is a 512 horizontal pixel by 480 vertical pixel digital image [22]. The CCD camera clock controls the timing of the DIGICOLOR. The DIGICOLOR uses the input vertical sync pulse information to control the synchronization and digitizing process. Once digitized, images are transferred to the workstation for analysis.

One major point which should be stressed is that there must be a match between a CCD camera and a frame grabber. This is important for improving the sensing accuracy: achieving high accuracy in position sensing using a CCD camera requires the video signal to be sampled synchronously by the frame grabber [23]. Although sufficient for the design purposes of the prototype described in this thesis, it is noted that, because the sampling instants of the Datacube are not synchronous with the pixel clock of the Javelin JE 3362 camera, high accuracy is difficult to achieve. In industrial applications, this problem would have to be addressed.

Chapter 5
System Implementation

The objective of this chapter is to explain the implementation of the MPSSS: the calculations and procedures which accompany practical applications of the prototype. The implementation process can be divided into two steps:
1. Calibrating process: establishing the relationship between a predefined 3-D coordinate frame and the corresponding 2-D image coordinates as seen by the camera. It is also used to calibrate the relative positions of the pinholes on the ring mask with respect to the predefined coordinate frame.
2. Sensing process: deducing the pitch-yaw angles of the light source with respect to the predefined coordinate frame.

An implementation block diagram of the MPSSS is shown in Figure 5.18.

Figure 5.18 Implementation Block Diagram of MPSSS (CCD camera with multi-pinhole mask; image processing: thresholding and median filtering, light spot centroiding; numerical analysis: curve fitting, determining the pinhole positions, determining the angular position)

The original image of the light source, which is used both for the calibrating process and for the sensing process, comes from a CCD camera. The camera captures the image of the light source and sends it, via the frame grabber, to a workstation for processing. There, image processing techniques are used to transform the grey scale raster image into numerical data representing the detected light spot locations, for either calibration or sensing purposes. A detailed description of the image processing techniques is presented in the following section, followed by a similar elaboration on the sensing process and the calibrating process.

5.1 Image Processing

The first step in determining the pitch-yaw angles of the light source is some basic digital image processing, which transforms an 8-bit grey scale raster image into a table of usable numerical data. The digital image processing consists of thresholding, filtering and centroid computation.

5.1.1 Thresholding and Median Filtering

In image processing, thresholding is the most popular technique used to isolate the light spots from their background [24]. An image captured by the CCD camera is composed of light spots on a background of varying brightness. By selecting a brightness threshold value T and truncating the parts of the image with brightness below that threshold, the light spots can be extracted.

A practical problem occurs when the image is subject to spikelike noise or poor or non-uniform illumination. These conditions make it difficult to select a correct threshold value [24]. To compensate for them, a median filter can be applied before the threshold technique. Median filtering evens out the grey levels of the pixels within a sliding window. For example, if the pixel values within the window are 50, 40, 100, 39, 55, then the center pixel (here, the 100) is replaced by the median value 50. In this way, the spikelike noise (the 100 in this example) is suppressed, and the threshold technique can then be applied to extract the light spots from their background without any problem.
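A minimal sketch of these two steps on a grey scale array; the 3x3 window and the function name are assumptions, and a real implementation would use an optimized filter rather than this direct loop.

```python
import numpy as np

def median_filter_then_threshold(img, T, k=3):
    """Suppress spike noise with a k x k sliding-window median filter,
    then zero every pixel whose brightness falls below threshold T."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    filtered = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            filtered[y, x] = np.median(padded[y:y + k, x:x + k])
    return np.where(filtered >= T, filtered, 0.0)
```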
After the detected light spots have been isolated from their background, a centroid analysis is employed to determine the centroid positions of the light spots.

5.1.2 Centroid Computation

Centroid computation is performed using the centroid estimator algorithm. This is selected over other techniques, namely correlation and maximum likelihood [25, 26], because the centroid estimator algorithm is easy to implement and consumes relatively little computing power. Using this algorithm, the centroid of a detected light spot is defined as:

$$\bar{x} = \frac{M_x}{M_0}, \qquad \bar{y} = \frac{M_y}{M_0} \qquad (5.12)$$

where

$$M_x = \sum_{x=1}^{x_{max}}\sum_{y=1}^{y_{max}} x\, m(x,y), \qquad M_y = \sum_{x=1}^{x_{max}}\sum_{y=1}^{y_{max}} y\, m(x,y), \qquad M_0 = \sum_{x=1}^{x_{max}}\sum_{y=1}^{y_{max}} m(x,y)$$

and $m(x,y)$ is the signal value of the pixel at position (x, y) on the sensor within the detected light spot.

It was found that the centroids of the detected light spots, computed using the centroid estimator algorithm, vary with the threshold value. An example of the light spot centers computed using different threshold values is listed in Table 5.2.

Table 5.2 The Computed Light Spot Center Using Different Threshold Values — Centroid Estimator

Threshold Value    X (pixels)    Y (pixels)
45                 37.4569       29.0723
50                 36.6604       28.4718
55                 36.5306       28.6326
60                 36.4089       28.8526
65                 36.4249       28.8414
70                 36.4127       28.8404

The table shows that the computed centroid varies by up to 1.0 pixel across threshold values. Because of this, a modification to the centroid estimator algorithm was carried out to resolve the problem.

5.1.2.1 Weighted Centroid Estimator

The variation in a computed light spot centroid is mainly due to the low intensity pixels with signal magnitudes just above the threshold value [27]. These low intensity pixels have a disproportionately large influence on the location of the centroid of the detected light spot. To overcome this problem, the centroid estimator algorithm above was modified by adding a weighting factor $w_{x,y}$ to each pixel, as shown in equation 5.13:

$$\bar{x} = \frac{1}{M_0}\sum_{x=1}^{x_{max}}\sum_{y=1}^{y_{max}} x\, w_{x,y}\, m(x,y), \qquad \bar{y} = \frac{1}{M_0}\sum_{x=1}^{x_{max}}\sum_{y=1}^{y_{max}} y\, w_{x,y}\, m(x,y) \qquad (5.13)$$

where

$$M_0 = \sum_{x=1}^{x_{max}}\sum_{y=1}^{y_{max}} w_{x,y}\, m(x,y)$$

To reduce the effect of low intensity pixel values at a large distance, the weighting factor is chosen to be equal to the intensity value of the pixel itself [27] (i.e. $w_{x,y} = m(x,y)$). In this way, high intensity pixels influence the determination of the light spot location more than the surrounding low intensity pixels. An example of the light spot centers computed using different threshold values is listed in Table 5.3.

Table 5.3 The Computed Light Spot Center Using Different Threshold Values — Weighted Centroid Estimator

Threshold Value    X (pixels)    Y (pixels)
45                 36.6162       29.0289
50                 36.4884       28.8178
55                 36.4343       28.8943
60                 36.3860       28.9848
65                 36.3927       28.9837
70                 36.3844       28.9856

Hence, the light spot centroid computed using the weighted centroid estimator varies by only 0.3 pixel across threshold values. This shows that the weighted centroid estimator determines the light spot location with less influence from the variation of the threshold value.
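A sketch of equation 5.13 with w = m(x, y), so each pixel is effectively weighted by the square of its intensity; the function name is an assumption.

```python
import numpy as np

def weighted_centroid(spot):
    """Intensity-weighted centroid (equation 5.13 with w_xy = m(x, y)),
    computed over a thresholded light spot given as a 2-D array."""
    ys, xs = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    w = spot.astype(float) ** 2        # w_xy * m(x, y) = m(x, y)**2
    m0 = w.sum()
    return (xs * w).sum() / m0, (ys * w).sum() / m0
```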
5.2 Sensing Process

The sensing process involved in deducing the pitch-yaw angles of the light source is illustrated in Figure 5.19.

Figure 5.19 Block Diagram of The Sensing Process (the light spot positions feed a curve fitting stage that yields the curve radius and center; radius matching identifies the ring; the pinholes are then mapped to the light spots, and solving straight line equations through the pinhole positions yields the light source angular position)

Given the positions of the detected light spots, the pitch-yaw angles of the light source are computed by fitting a circle to the detected light spot pattern; this determines the radius and the center of the pattern. The radius of the circle is used to determine which ring the light source is projecting through, by a process of radius matching. When the ring has been identified, the mapping between the pinholes and the light spots is determined by studying the relative position of each light spot with respect to the circle center. Once these relationships are found, straight line equations are used to find the pitch-yaw angles of the light source.

5.2.1 Curve Fitting

Since the pattern of the light spots produced by a ring of pinholes must form a circle (see Section 3.1.1.1), an equation of a circle is fitted over the computed light spot locations. To find a circle which fits the computed spot locations closely, the following approach is taken. A general form of the equation of a circle in the 2-D plane is

$$x^2 + y^2 + 2dx + 2ey + f = 0 \tag{5.14}$$

where the center of the circle is at $(-d, -e)$ and the radius of the circle is $r = \sqrt{d^2 + e^2 - f}$.

To identify the radius and the center of the circle, note that equation 5.14 can also be written in matrix form as

$$w = \begin{bmatrix} -2x & -2y & -1 \end{bmatrix} \begin{bmatrix} d \\ e \\ f \end{bmatrix}$$

where $w = x^2 + y^2$. This can be further denoted as $Y = X\beta$, where

$$Y = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_m \end{bmatrix}, \qquad X = \begin{bmatrix} -2x_1 & -2y_1 & -1 \\ -2x_2 & -2y_2 & -1 \\ \vdots & \vdots & \vdots \\ -2x_m & -2y_m & -1 \end{bmatrix}, \qquad \beta = \begin{bmatrix} d \\ e \\ f \end{bmatrix}$$

and $m$ is the number of detected light spots. Thus an equation which is linear in the unknown parameters of the circle is obtained. The unique least squares estimate $\hat{\beta}$ of $\beta$ in this equation is

$$\hat{\beta} = \left(X^T X\right)^{-1} X^T Y \tag{5.15}$$

provided that $\left(X^T X\right)^{-1}$ exists.
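In code, the least squares fit of equation 5.15 might look as follows. The use of numpy.linalg.lstsq is an assumed implementation detail; it computes the same estimate as $(X^T X)^{-1} X^T Y$ without forming the inverse explicitly, which is numerically safer.

```python
import numpy as np

def fit_circle(xs: np.ndarray, ys: np.ndarray):
    """Least squares circle fit from equations 5.14 and 5.15.

    xs, ys: coordinates of the m detected light spot centroids.
    Returns the circle center (-d, -e) and radius sqrt(d^2 + e^2 - f).
    """
    # Build Y = X beta with beta = [d, e, f]^T, from equation 5.14
    # rearranged as x^2 + y^2 = -2dx - 2ey - f.
    X = np.column_stack([-2.0 * xs, -2.0 * ys, -np.ones_like(xs)])
    Y = xs**2 + ys**2
    (d, e, f), *_ = np.linalg.lstsq(X, Y, rcond=None)
    center = (-d, -e)
    radius = float(np.sqrt(d**2 + e**2 - f))
    return center, radius
```

The fitted radius is then what the radius matching step compares against each ring's calibrated radius to identify the ring in use.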
5.2.1.1 Existence of A Solution

To show that $(X^T X)^{-1}$ will normally exist (hence $\hat{\beta}$ can be computed from equation 5.15) when the pinholes associated with the detected light spots lie on a ring, note that for a square $X$

$$\left(X^T X\right)^{-1} = X^{-1}\left(X^T\right)^{-1} = X^{-1}\left(X^{-1}\right)^T.$$

Therefore, if $X^{-1}$ exists, $(X^T X)^{-1}$ will exist. To show that $X^{-1}$ will normally exist, assume $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$ are three detected light spots on a circle satisfying

$$(x_i + d)^2 + (y_i + e)^2 = r^2, \qquad i = 1, 2, 3. \tag{5.16}$$

Then

$$X = \begin{bmatrix} -2x_1 & -2y_1 & -1 \\ -2x_2 & -2y_2 & -1 \\ -2x_3 & -2y_3 & -1 \end{bmatrix}$$

and $X^{-1}$ will exist if and only if $|X| \neq 0$, which is equivalent to $|A| \neq 0$ with

$$A = \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix}.$$

To show that $|A| \neq 0$ normally holds when the $(x_i, y_i)$ satisfy equation 5.16, we resort to a graphical analysis of $|A|$. Figure 5.20 shows the plot of $|A|$ as $x_3$ varies while $x_1 = x_2$ and $y_1 \neq y_2$. The analysis was repeated with the circle centroid lying in different quadrants of the x-y plane; for example, $(d, e)$ was selected to be $(0,0)$, $(2,3)$, $(-2,3)$, $(2,-3)$ and $(-2,-3)$. A similar analysis was performed with $x_1 \neq x_2$ and $y_1 \neq y_2$, as shown in Figure 5.21. From the graphs, $|A| = 0$ only when either $x_3 = x_1$ and $y_3 = y_1$, or $x_3 = x_2$ and $y_3 = y_2$.

Figure 5.20 Plot For $|A|$ With $x_1 = x_2$ and $y_1 \neq y_2$ (curves for circle centroids (0,0), (2,3), (-2,3), (2,-3) and (-2,-3))

Figure 5.21 Plot For $|A|$ With $x_1 \neq x_2$ and $y_1 \neq y_2$ (curves for the same five circle centroids)

This shows that $|A| \neq 0$ if and only if the matrix $A$ does not contain two identical points. In other words, $(X^T X)^{-1}$ exists whenever $X$ is a non-singular matrix.

5.3 Calibrating Process

The calibration of the MPSSS is not difficult because, given the positions of the detected light spots and of the light source, all of the unknown parameters, such as the pinhole positions and the ring radii, can be solved for. To calibrate the pinhole positions and the ring radii, the light source is moved to various known locations. From the detected light spot positions and the known light source locations, the pinhole positions can be computed. For example, when the light source is moved from $(x_{s1}, y_{s1}, z_{s1})$ to $(x_{s2}, y_{s2}, z_{s2})$, let $(x_{i1}, y_{i1})$ and $(x_{i2}, y_{i2})$ be the corresponding detected light spot locations, as illustrated in Figure 5.22. Based on the positions of the light source and the detected light spots, the pinhole position can be deduced by solving the following equations:

$$z_{hole} = \frac{(x_{i1} - x_{i2})\, z_{s1} z_{s2}}{z_{s1} x_{s2} - z_{s1} x_{i2} - z_{s2} x_{s1} + z_{s2} x_{i1}}$$

or

$$z_{hole} = \frac{(y_{i1} - y_{i2})\, z_{s1} z_{s2}}{z_{s1} y_{s2} - z_{s1} y_{i2} - z_{s2} y_{s1} + z_{s2} y_{i1}};$$

$$x_{hole} = \frac{z_{hole}}{z_{s1}} \left(x_{s1} - x_{i1}\right) + x_{i1}, \qquad y_{hole} = \frac{z_{hole}}{z_{s1}} \left(y_{s1} - y_{i1}\right) + y_{i1} \tag{5.17}$$

where $(x_{hole}, y_{hole}, z_{hole})$ is the pinhole location.

Each ring radius is then determined by the process described in Section 5.2.1. We calibrated 44 holes in the sector to be measured.

Figure 5.22 Pinhole Calibration Geometry (the target light source at two known positions $(x_{s1}, y_{s1}, z_{s1})$ and $(x_{s2}, y_{s2}, z_{s2})$ casts the spots $(x_{i1}, y_{i1})$ and $(x_{i2}, y_{i2})$ on the sensor through the pinhole at $(x_{hole}, y_{hole}, z_{hole})$)
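Equation 5.17 translates directly into code. The sketch below assumes the sensor plane lies at z = 0 and uses the x-coordinate form of the $z_{hole}$ expression; the function and variable names are illustrative rather than taken from the thesis.

```python
def calibrate_pinhole(s1, s2, i1, i2):
    """Recover one pinhole position from two known light source
    positions and the resulting spot centroids (equation 5.17).

    s1, s2: light source positions (xs, ys, zs), with zs != 0.
    i1, i2: detected spot centroids (xi, yi) on the sensor plane z = 0.
    """
    (xs1, ys1, zs1), (xs2, ys2, zs2) = s1, s2
    (xi1, yi1), (xi2, yi2) = i1, i2

    # z of the pinhole, from the x-coordinate form of equation 5.17.
    z_hole = ((xi1 - xi2) * zs1 * zs2 /
              (zs1 * xs2 - zs1 * xi2 - zs2 * xs1 + zs2 * xi1))

    # x and y follow from the collinearity of source, pinhole and spot.
    x_hole = (z_hole / zs1) * (xs1 - xi1) + xi1
    y_hole = (z_hole / zs1) * (ys1 - yi1) + yi1
    return x_hole, y_hole, z_hole
```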
Chapter 6 Experimental Results

The prototype MPSSS has now been described, both in terms of its original development and through a detailed analysis of its constituent parts. To reiterate, the sensing system consists of a light source, a ring mask, a Javelin JE 3362 Chromachip solid-state color camera, a Datacube DIGICOLOR frame grabber and a workstation. At this point, it is pertinent to discuss in equal detail the testing of this MPSSS as to its physical limitations and utility.

6.1 Comparison of Camera Lens and MPSSS

Comparison tests were performed on the camera with a lens system and on the MPSSS, in the areas of sensing accuracy and field of view. These tests demonstrated that the MPSSS has a wider field of view and higher accuracy than the 10 mm lens that originally came with the Javelin camera. Figure 6.23 shows the position sensing error of the MPSSS vs. the 10 mm lens sensing system.

Figure 6.23 Error of The Multi-Pinhole Single Sensor System vs. Normal Lens Sensing System (error in mm vs. lateral light displacement in mm at a z-distance of 1.3 m; mean measurements taken with the 10 mm lens, with the 10 mm lens after second order polynomial compensation, and with the multi-pinhole mask)

The two systems were required to sense the relative position of a light source moving along an optical bar. Each sensing system was placed at a distance of 1.3 meters from the optical bar. A total of 80 measurements were collected at 20 different locations (4 measurements per location) at 50 mm intervals along the optical bar. Each measurement was evaluated in terms of the error between the true position of the light source and the computed position of the light source.

The implementation of the MPSSS prototype inevitably introduced errors in the estimated light spot positions in space, because the size of the CCD array sensor and the mask models were only approximated. Image acquisition from an RS-170A analog video output is also not ideal for this application, as RS-170A allows small geometric distortions and line-edge shifts of several pixels within an image [23]; such distortions result in errors in the estimated light spot positions. Further, the set of LED (3,000 mcd) light sources is approximated as a point source with a constant radiation pattern. In practice, the radiation pattern is not constant, so the computed centroid varies with the viewing angle at the light source. It was noted that the error in the MPSSS increased as the light source moved away from each ring's own optical axis and decreased as it moved towards that axis.

The error in the 10 mm lens sensing system is partly due to the non-linear effects introduced by the lens. To compensate for these non-linear effects, a second order polynomial least squares compensator was introduced:

$$x_c = a_2 x_m^2 + a_1 x_m + a_0$$

where $a_0$, $a_1$ and $a_2$ are the compensator parameters, $x_c$ is the compensated data, and $x_m$ is the detected x-axis position. Even with the compensator, the accuracy of the lens system is still inferior to that of the MPSSS.

In fact, despite all of its design problems, the MPSSS performs quite accurately, as shown in Figure 6.23. Even with the modifications to the lens system, the MPSSS has higher accuracy and twice the field of view of the lens sensing system. (The lens sensing system has an angular field of 15 degrees measured from the optical axis.)
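The compensator parameters can be estimated with an ordinary least squares fit of a quadratic to pairs of measured and true positions. The sketch below uses numpy.polyfit for this; the NumPy implementation is an assumption made here for illustration, not the method used in the original experiments.

```python
import numpy as np

def fit_lens_compensator(x_measured: np.ndarray, x_true: np.ndarray):
    """Fit the second order compensator x_c = a2*x_m^2 + a1*x_m + a0
    from calibration pairs of detected and true positions."""
    # polyfit returns the coefficients highest power first: [a2, a1, a0].
    a2, a1, a0 = np.polyfit(x_measured, x_true, deg=2)
    return a2, a1, a0

def compensate(x_m, a2, a1, a0):
    """Apply the compensator to a raw lens measurement."""
    return a2 * x_m**2 + a1 * x_m + a0
```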
6.2 Processing Time

The processing time required to compute the light source position is essential in determining whether the MPSSS is capable of operating at frame rate (30 frames per second). Hence, tests were performed to determine the average processing times on different workstations (SPARC 2, SPARC IPX and SPARC 10). The results are shown in Figure 6.24.

Figure 6.24 Image Processing CPU Time With Thresholding And Filtering (computation time needed to determine the target location on a SPARCstation 2, SPARCstation IPX and SPARCstation 10, with and without median filtering and thresholding)

The entire process, including median filtering, takes approximately 0.46 CPU seconds on a SPARC 2, 0.28 CPU seconds on a SPARC IPX and 0.25 CPU seconds on a SPARC 10. The median filter, which is the most costly step to compute, needs to be computed only when the signal-to-noise ratio is low; if the optical output power of the light source system is increased, the median filter can be omitted. Without the median filter, the entire process is cut down to 0.26 CPU seconds on a SPARC 2, 0.16 CPU seconds on a SPARC IPX, or 0.14 CPU seconds on a SPARC 10.

One point which should be stressed is that the largest part of the computation time is taken up by the inherently parallel image processing tasks, namely thresholding and median filtering. Thresholding is applied independently to each pixel for a given threshold value, and median filtering is applied independently to the neighborhood around each pixel. Given parallel hardware such as a Datacube VFIR MK III image processing board, the median filtering and thresholding could be computed simultaneously and at frame rate. Without the median filtering and thresholding, deducing the light source position takes only 0.03 CPU seconds on a SPARC 10, as illustrated in Figure 6.25.

Figure 6.25 Image Processing CPU Time Without Thresholding And Filtering (computation time needed to determine the target location on the same three workstations; all times fall under 0.12 seconds)

Therefore, frame rate sensing is achievable given clever coding and sufficiently powerful hardware.

6.3 Resolution Testing

The goal of this research includes the development of a high resolution position sensing system without increased cost. A MPSSS which is to provide high sensing resolution and a wide field of view requires resolution testing in order to study its sub-pixel sensing capability. In the MPSSS model, the CCD sensor consists of rows of pixels. Each pixel is about 0.0143 mm in length, which corresponds approximately to an angular resolution of 0.0416 degrees. If a light source is 10 meters away from the sensing system, the sensing system will therefore sense the position of the light source only within an error margin of ±3.63 mm. To obtain an accurate measurement, it is necessary to resolve the measurement to within a fraction of a single pixel. Hence, the weighted centroid estimator (see Section 5.1.2.1) was selected as the centroid estimator algorithm because of its sub-pixel capability.
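As a worked check of these figures (added here for clarity; the interpretation of ±3.63 mm as a half-pixel uncertainty, and the implied pinhole-to-sensor distance of about 19.7 mm, are inferred from the quoted numbers rather than stated in the text):

$$\pm\, z \tan\!\left(\frac{\theta_{pix}}{2}\right) = \pm\, 10\,000\ \text{mm} \times \tan\!\left(\frac{0.0416^{\circ}}{2}\right) \approx \pm\, 3.63\ \text{mm}.$$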
The tests for studying the sub-pixel resolution capability were performed by moving the light source in increments of 0.05 millimeters, over a total distance of 4.0 millimeters, along a predefined straight line. These movements correspond approximately to an angular resolution of 0.0022 degrees, or 0.06 pixels on the CCD sensor. Three sets of data were collected; because the sensing system ensures that a minimum of four light spots is cast on the sensor, each set consists of four subsets of data for each light source position. An example of the resolution testing for one light spot is shown in Figure 6.26.

Figure 6.26 Ring 1 Resolution Tests — Light Spot 1 (measured position vs. actual position, in mm, for relative lateral light displacement at a z-distance of 1.3 m and approximately 0.2 m from the sensor optical axis)

The MPSSS is capable of tracking the light source position with a resolution of 0.05 millimeters and an error of 0.5 mm. With proper calibration, however, this error can be compensated for. The standard deviation of the error between the actual position and the measured position for the individual rings is listed in Table 6.4. In comparison with other sensing systems, our system achieves a relative resolution far better than that of a camera with a lens sensing system, whose relative resolution is approximately 0.01 pixels [27].

Ring                 Weighted Centroid (pixels)
Ring Number One      0.0643
Ring Number Two      0.0705
Ring Number Three    0.0730

Table 6.4 Results of Sub-pixel Resolution of Individual Rings

Although the system resolution is very high (approximately 0.05 mm, or 0.06 pixels), it should be stressed that the measurements were made under ideal laboratory conditions. For example:
1. the background light level was low and of constant level; and
2. the signal to noise ratio was kept at a maximum by pointing the light source toward the sensing system. This matters because thresholding reduces the number of pixels processed; a high signal to noise ratio allows a threshold to be used without significantly reducing the resolution.

6.4 System Limitations And Suggestions For Improvement

The results of the testing show that the prototype is capable of sensing the light source in a large working volume with high resolution. However, the prototype used in these experiments has the following limitations:
1. Only one target light source can be located at a time. Multiple target light sources can be detected only if they blink alternately or are distinctively colored.
2. The sensing device must consist of a two dimensional array of sensing elements in order to locate the several light spots cast on it at one time. Such a sensing device has a finite resolution, limited by the number of pixels on the device. This is a weakness in comparison with continuous sensing devices such as a Position Sensitive Detector (PSD). To overcome this shortcoming, a PSD could be used instead if the adjacent holes on the mask were covered with various fixed color filters and the sensing device were equipped with an electronically switchable color filter, locating the light spots one by one.

Chapter 7 Conclusion

7.1 Summary

In this thesis, a MPSSS end point angular position sensing system has been proposed and developed. This work was motivated by the need for a wide angle, high resolution angular detector with which highly precise manipulator end point position measurements can be made. A multi-pinhole single sensor sensing system and an associated light source have been assembled and tested as proposed. Throughout the project, much effort has been put into making the sensing system suitable for commercial applications.

The results presented in Chapter 6 for the prototype sensing system are encouraging. The developed system is capable of sensing the angular position of the light source in a large working volume (as shown in Figure 7.27) with an angular resolution of 0.0022 degrees, or 0.06 pixels of the sensor actually used. Relatively high accuracy and a wide field of view can be achieved given proper hardware and calibration. Limitations of the proposed system, and suggestions for minimizing the associated problems, have been presented. For example, by using a high power light source we can increase the sensing distance and hence the sensing volume.
Also, parallel computing hardware can be used for the image processing to achieve real-time throughput.

Figure 7.27 System Working Volume (a working volume of radius 1.3 meters about the sensor, with the x- and y-axes indicated)

7.2 Contributions

The main contributions of this thesis are the development and analysis of a high resolution, wide angle optical position detecting system. This system has the following important features:
1. the resolution of the sensing system is increased by subdividing the sensed environment into small pieces and mapping each piece onto the entire sensing device;
2. no mechanical scanning devices are required to track the light source; and
3. the sensing range is wider and the resolution higher than in conventional sensing systems.

A prototype of the system was built and tested. A mask design consisting of rings of pinholes was found to allow detection of the angular position of the target light source at high resolution and over a wide angle, with very little signal processing. The simplicity of the developed system thus suggests that commercial application of the system should not be very difficult.

7.3 Suggestions For Future Work

In order to minimize the cost of this research, only readily available hardware was used in constructing the prototype of the system. As a result, the prototype system is not in its optimal state. To improve the system, we suggest the following further work:
1. Use a surface mount CCD camera. This will allow closer mounting of the mask, thereby increasing the sensing angle;
2. Use a CCD camera with digital output(s). This will reduce the sampling error between the camera and the frame grabber and therefore increase the sensing resolution;
3. Use a wide emission angle, high power output light emitter, such as the Opto Diode OD669 (manufactured by Opto Electronics Inc.), in the light source circuitry, to increase the detected signal to noise ratio as well as the working distance between the light source and the sensing system;
4. Use the more powerful Datacube MKF III board for parallel image processing. This will increase the system throughput, allowing more real time applications of the system; and
5. Develop a new mask design which is easier to manufacture. This will further reduce the system cost, making the system even more commercially applicable (see Appendices A, B and C).

Bibliography

[1] S. A. Hovanessian, Introduction to Sensor Systems. Artech House, Inc., 1988.
[2] T. Takano and S. Yonehara, "Basic investigations on an angle measurement system using a laser," IEEE Transactions on Aerospace and Electronic Systems, vol. 26, pp. 657-662, July 1990.
[3] S. M. Sherman, Monopulse Principles and Techniques. Artech House, Inc., 1984.
[4] R. L. Smith, "Development and testing of an infrared target tracking system," Master's thesis, Electrical Engineering, University of British Columbia, Aug. 1990.
[5] F. Blais and M. Rioux, "A simple 3-D sensor," in Optics, Illumination and Image Sensing for Machine Vision, vol. 728, pp. 235-242, SPIE, 1986.
[6] N. Alvertos, E. Hall, and R. Anderson, "Omnidirectional viewing for robot vision," in Proceedings of the 3rd International Conference on Robot Vision and Sensory Controls, pp. 309-318, 6-10 Nov. 1983.
[7] J. Cardillo and M. A. Sid-Ahmed, "A 3-D robot vision system using passive focus information," Computers in Industry, vol. 15, pp. 317-328, Dec. 1990.
[8] N. Karouche and J. M.
Lopez, "A 35 GHz proximity microwave sensor," IEEE Transactions on Magnetics, vol. 28, pp. 1011-1016, July 1992.
[9] S. J. Oh and E. L. Hall, "Guidance of a mobile robot using an omnidirectional vision navigation system," in Mobile Robots II, vol. 852, pp. 288-300, SPIE, 5-6 Nov. 1987.
[10] R. W. Wood, "Pinhole camera," in Physical Optics, pp. 66-69, 1967.
[11] P. J. Besl, Advances in Machine Vision. Springer-Verlag, 1989.
[12] H. N. Norton, Sensor and Analyzer Handbook. Prentice-Hall, Inc., 1982.
[13] M. G. Mylroi and G. Calvert, Measurement and Instrumentation for Control. Peregrinus, 1984.
[14] Seiko Instruments, Liquid Crystal Optical Device, 1989.
[15] K. J. Habell and A. Cox, Engineering Optics. Pitman Publishing, 1971.
[16] R. H. Dicke, "Scatter-hole cameras for x-rays and gamma rays," The Astrophysical Journal, vol. 153, pp. L101-L106, Jan.-Mar. 1968.
[17] T. Ponman, A. Hammersley, and G. Skinner, "Error analysis for a noncyclic imaging system," Nuclear Instruments and Methods in Physics Research, vol. A262, pp. 419-429, 15 Dec. 1987.
[18] G. K. Skinner, "Imaging with coded-aperture masks," in Nuclear Instruments and Methods in Physics Research, pp. 33-40, 1984.
[19] S. D. Uslan and K. T. Lassiter, "Pinhole camera," in Encyclopedia of Practical Photography, vol. 11, Eastman Kodak Company, Amphoto, New York, 1977.
[20] K. Sayanagi, "Pinhole imagery," Journal of the Optical Society of America, vol. 57, pp. 1091-1099, Sept. 1967.
[21] Javelin Electronics, Javelin Service Manual, Video Color Camera JE-3362.
[22] Datacube, Inc., Datacube Hardware Reference Manual, Nov. 1990.
[23] P. Cencik, "Matching solid state camera with frame grabber — a must for accurate gaging," in Optics, Illumination and Image Sensing for Machine Vision VI, vol. 1614, pp. 112-120, SPIE, 1991.
[24] R. C. Gonzalez, Digital Image Processing. Addison-Wesley Publishing Company, 2nd ed., 1987.
[25] J. T. Reagan, T. J. Abatzoglou, J. Saghri, and A. G. Tescher, "Sub-pixel resolution for target tracking," in Applications of Digital Image Processing XV, vol. 1771, pp. 2-19, SPIE, 1992.
[26] K. I. Schultz, "An analytic approach to centroid performance analysis," in Laser Radar VI, vol. 1416, pp. 199-208, 23-25 Jan. 1991.
[27] G. A. W. West and T. Clark, "A survey and examination of subpixel measurement techniques," in Close-Range Photogrammetry Meets Machine Vision, vol. 1395, pp. 456-463, SPIE, 3-7 Sept. 1990.

Appendix A Plane Mask

The plane mask is a two dimensional aperture mask containing a pattern of pinholes, as shown in Figure A.28. The pinholes in this mask are arranged in rings with different radii. Every ring, as well as each pinhole on a ring, is designed to have its own field of view. When the incoming light rays pass through the mask, they are coded by the mask pattern. Because the pinholes are arranged in rings, the detected pattern is likewise a pattern of rings, and from it the pitch-yaw angles of the light source are computed in the same way as in the MPSSS.

Figure A.28 Plane Mask

However, this mask has the following disadvantages:
1. It provides a very narrow field of view in comparison with the MPSSS, because as the field of view approaches 180° the mask would have to become infinitely large in length and width; and
2. In comparison with the MPSSS, the energy received from the light source is reduced by $\cos\theta$, where $\theta$ is the radial angle of the target (at $\theta = 60°$, for example, the received energy is halved).
This is because the pinhole appears smaller by a factor of $\cos\theta$ due to the obliquity. The major advantage of this mask is that it is relatively easy to construct in comparison with the dome mask.

Appendix B Cylindrical Mask

The cylindrical mask is an aperture mask containing a pattern of pinholes, as shown in Figure B.29. The mask is semi-cylindrical in shape. The pinholes in this mask are arranged in ellipses with different major and minor radii. The elliptical mask pattern is selected because the light spot pattern that the light source projects through it can then be described mathematically by a simple circle equation. The pitch-yaw angles of the light source are computed in the same way as in the MPSSS.

Figure B.29 The Semi-cylindrical Mask (the mask and its pinhole pattern)

However, this mask provides a very narrow field of view along the x-axis in comparison with the MPSSS, because as the field of view approaches 180° the mask would have to become infinitely long.

Appendix C How To Make The Multi-Pinhole Mask

The multi-pinhole mask is an optical mask which contains a special arrangement of pinholes. Every pinhole on the mask has its own narrow angle view of a different part of the world. Dividing the world into small pieces via the pinholes and mapping each piece onto the entire sensor increases the resolution of the system while maintaining a wide field of view. The mask is made of epoxy plastic. The following gives a brief description of how the multi-pinhole mask was built:
1. Make a hemispherical mold out of a solid aluminium rod;
2. Place the mold on a spindle;
3. Mark the 101 hole positions, arranged in 3 rings, on the mold. Each ring on the mask is separated from the adjacent rings by 3.0 mm, and each pinhole is separated from the adjacent pinholes in the same ring by 1.5 mm;
4. Drill the holes marked in step 3, using a 0.5 mm diameter drill bit;
5. Clean, polish and wax the mold. The wax makes it easy to release the mask from the mold;
6. Coat the mold with epoxy resin;
7. Slowly turn the mold to even out the epoxy resin on its surface;
8. Let the epoxy dry, which takes approximately one day;
9. Drill the holes in the epoxy resin as marked by the holes in the mold;
10. Carefully separate the epoxy plastic mask from the mold; and
11. Paint the mask with black paint, which completes the multi-pinhole mask.

Appendix D Multi-pinhole Diagram

Figure D.30 A Multi-pinhole Diagram

Appendix E Samples of Captured Image and Curve Fitting

Figure E.31 Captured Image With Spike-like Noise When The Target Light Source is at 0° Azimuth and 90° Elevation

Figure E.32 Captured Image When The Target Light Source is at 90° Azimuth and 40° Elevation

Figure E.33 Captured Image With Non-uniform Illumination When The Target Light Source is at 45° Azimuth and 45° Elevation (with the curve fitting circle center marked)
