Hemispheric vision with resolution enhancement. Anderson, Dean M.H. (1998)

Hemispheric Vision with Resolution Enhancement

by

Dean M. H. Anderson
B.A.Sc., The University of British Columbia, 1996

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 1998
© Dean M. H. Anderson, 1998

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

The University of British Columbia
Vancouver, Canada

ABSTRACT

Representing the whole world around a given point is an important goal in computer vision and many other applications. Most camera lens systems have been designed and optimized to provide excellent perspective images; however, this leaves out all the information behind and around the camera. Previous methods of obtaining a hemispheric (half-world) or omnidirectional (full-world) view of the world have involved fisheye lenses as well as pan-and-tilt cameras. In this thesis, an omnidirectional vision system using two parabolic reflectors is described, along with a technique and apparatus to obtain higher resolution from an imaging system. The system has two parabolic mirrors for imaging. It allows a substantially hemispheric, or half omnidirectional, view of the world.
Two of these sensors placed back to back would provide an omnidirectional view of the world from a single viewpoint. The results from the hemispheric, double parabolic mirror system that we built are shown in the thesis. The main disadvantage of mapping such a large field of view onto a single sensor is the loss of resolution. We therefore describe a method to obtain better resolution using several image frames with control over the displacement of these frames. This involves meshing together several image frames displaced from each other by known amounts. The images are obtained from a variable angle prism normally used for image stabilization in camcorders. The results are promising.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgements
1 Introduction
1.1 The Optical Problem
1.2 The Resolution Problem
1.3 Overview of the Thesis
2 Hemispheric Imaging
2.1 Proposed System
2.2 System Parameters
2.3 Advantage of Using Parabolic Reflectors
2.4 Limitations
2.5 Mapping Hemispheric Image into a Perspective Image
2.6 Results
2.6.1 Computing Requirements
2.7 Conclusion
3 Super-Resolution
3.1 Theory
3.1.1 Approach
3.1.2 One Dimensional Analysis
3.1.3 Image Formation
3.1.4 Methods for Reconstruction
3.1.5 Registration Problem
3.1.6 Restoration, Inverse Filtering
3.1.7 High Resolution Reconstruction
3.2 Simulation
3.2.1 Reconstruction with Known Shifts
3.3 Final Remarks on Super-resolution Problem
4 A Sub-Pixel Resolution System
4.1 Description of VAP
4.2 Applied Super-Resolution
4.2.1 Centroid Calibration
4.3 Control of the VAP
4.4 Experimental Results using ES-750
4.4.1 Initial Tests
4.4.2 Open-Loop Results
4.4.3 Sampling at Subpixel Intervals
4.5 Issues and Problems
5 Conclusions
5.1 Future Work for Combined System
5.2 Summary of Results
Bibliography
Appendix A Aberrations (see Chapter 2)

List of Tables

2.1 Mirror Specifications
4.1 USAF 1951 Resolution Test Chart (lppm)

List of Figures

1.1 Image from a Fisheye Lens
1.2 Orthographic Projection
1.3 Orthographic View of Ariel (NASA web site)
1.4 OMNICAMERA(Tm)
2.1 COnic Projection Image Sensor (COPIS)
2.2 View of COPIS Sensor
2.3 Two Mirror Hemispheric Imaging System
2.4 Convex Parabolic Reflector
2.5 Two Mirror Hemispheric Vision System
2.6 Two Mirror Hemispheric Vision System
2.7 Hemispheric Scene taken with Double Parabolic System
2.8 Perspective Image from Hemispheric View (Fig. 2.7)
2.9 Perspective Image found by Deconvolving Fig. 2.7
3.1 Ideal pixel PSF array with non-zero separation
3.2 Spatial Signals of the Sensor and Image
3.3 Averaging of a higher resolution image sequence
3.4 Original 256x256x8bit Lena Test Image
3.5 Simulation Model
3.6 Low Resolution 64x64x8bit Lena Image Frame (obtained by applying H2 to Fig. 3.4)
3.7 High Resolution Reconstructed Image (no deblurring applied)
3.8 Wiener Filtered 256x256x8bit Lena Test Image (from Noisy Frames)
3.9 Image Correlation between Frames
4.1 Complete VAP Assembly (US 5481394)
4.2 Front and Rear Plates of VAP (US pat. 5481394)
4.3 Large Prism
4.4 One Dimensional Structural View of VAP
4.5 Geometry of VAP movement (US pat. 5481394)
4.6 Restriction of VAP movement (US pat. 5481394)
4.7 VAP actuator unit (US 5481394)
4.8 Side View of VAP Actuator (US 5623305)
4.9 Our Front End Differential Position Amplifier
4.10 VAP Deflection Angle versus Drive Voltage (US 5623305)
4.11 Open Loop representation of VAP
4.12 VAP Bode Plots (US 5623305)
4.13 Closed Loop representation of VAP
4.14 Dual Coil Wrapping of VAP Actuator (US 5623305)
4.15 Our Control Circuit
4.16 Vertical Displacement with Sinewave Input
4.17 Horizontal Displacement with Sinewave Input
4.18 Horizontal Direction VAP Frequency Response (0.5 V Sinewave)
4.19 Vertical Direction VAP Frequency Response (0.5 V Sinewave)
4.20 Square Wave Response
4.21 Displacement with Compensating OpAmp
4.22 VAP run open loop in one direction
4.23 Hysteresis of VAP
4.24 VAP Drift over Time
4.25 VAP Displacement versus Applied Voltage
4.26 Zoom of Raw USAF 1951 Test Image
4.27 Image Reconstruction

ACKNOWLEDGEMENTS

I would like to acknowledge Ray Burge, Dr. Greg Grudic, Henry Wong, Dr. Peter Lawrence and Dr. Tim Salcudean for their help, equipment use, and technical assistance. In addition, I would like to thank the technical staff who provided me with enormous assistance and patience (Don Dawson, Bruce Dow, Tony Leugner, Al Prince, Rob Ross and Lloyd Welder). Finally, I would like to thank Ray Burge again for his enthusiasm and exceptional help throughout the entire project.

Chapter 1

Introduction

Hemispheric sensors are being developed [Y+95, Nay96] to allow imaging with a field of view of 360 degrees. This means that every lateral direction of a scene can be viewed. The idea is to capture the whole scene around a fixed object and create a panorama or an omnidirectional view of the world at that point. These sensors are extremely useful in such applications as video conferencing and surveillance.

With the hemispheric view, several perspective images of the world about the camera can be viewed at the same instant. A perspective image is one which is linear and undistorted, as in normal eyesight. In a video conferencing application, a single sensor could sit on the table and the receiver at the other end could pick which participant to view at a given time, or choose to view some or all of them at once. Similarly, in surveillance, the ideal situation is to have a camera that cannot be compromised. No one can sneak up on or dodge an omnidirectional camera. With normal cameras the security is limited if the person being watched knows where it is pointing.
The most popular solution to this security problem for normal non-omnidirectional cameras is to mount them inside a silvered dome so that no one knows if they are being watched. More applications for omnidirectional cameras will materialize from the benefit of seeing the whole scene around a fixed point. They have a larger field of view than the unaided eye and as such can image more of the surroundings than a casual observer or a conventional camera. With zooming capability, the possibilities for use increase. In this thesis we will discuss a new design for an omnidirectional camera. We then discuss a system that can be used to overcome the main limitation of the camera, which is a lack of resolution.

The first attempts at omnidirectional imaging involved fish eye lenses [H+86]. These lenses were built as extreme wide angle lenses and are still popular today for many applications. More recent methods have involved pan and tilt cameras [KA96]. These are cameras mounted on a platform containing two joints that can be panned and tilted to capture the scene at each angle. The multiple scenes so derived are pieced together to create either a panorama (when there is only panning and no tilting of the camera) or an omnidirectional view. The pan and tilt approach is popular in speed-independent applications such as real estate imaging. In this application, the client can look at each scene around different rooms or landscapes without visiting the site. Clearly, it is not important for the views to be in real-time.

Recent omnidirectional cameras use mirrors to perform the imaging. The mirrors are formed as standard conic sections and offer the best promise for less distortion of the reconstructed perspective images. Our system is part of this subset of omnidirectional sensors, the advantages of which will become apparent to the reader later in this thesis.
1.1 The Optical Problem

Conventional imaging systems are very limited in their field of view. Cameras are designed to image plane perspective scenes with no distortion in the image plane. Attempts to recover a hemispheric scene originated with fish eye lenses. We propose a system with two parabolic mirrors to accurately image a scene about a complete hemisphere. In addition, a resolution recovery scheme is proposed to compensate for the loss of information inherent in imaging such a large field of view. Putting two of our cameras back to back allows almost a full omnidirectional view.

Fish Eye Lens

The common fish eye lens, such as that seen on door peepholes, is a lens designed for a wide field of view. The problem with these lenses is that it is difficult to make two lenses identical in diameter and focal length that have the same distortion pattern [Nal96]. It is difficult to determine how the distortion affects the image. Perspective images cannot be recovered from a fish eye lens without knowing the exact geometry of the lens or the exact distortion caused by the lens, which varies from lens to lens and takes considerable time to measure. Fish eye lenses are also notoriously expensive to manufacture. A scene imaged from a fisheye lens ([SA94]) is shown in Fig. 1.1.

Pan and Tilt

Another approach for the imaging of a hemispheric scene has been the use of a pan and tilt camera. The camera is essentially mounted on a platform that can be panned and tilted to acquire images from a full hemisphere.

Figure 1.2: Orthographic Projection

The images are then stitched together to recreate the hemispheric scene. This method suffers from serious problems. Stitching the scenes together is difficult because the overlapping image boundaries must be identified for each scene. The mechanical nature of the mount used for panning and tilting can also create accuracy problems. There can be calibration problems in keeping the center of the imaging surface at the center of movement. The two main disadvantages of this system are the required high mechanical accuracy and the slow speed. A stepping motor must be used and it cannot be driven fast or backlash will plague the response.

Omnicamera

The Omnicamera(Tm) (Fig. 1.4) was developed by Shree Nayar at Columbia University [NPB98]. He investigated omnidirectional catadioptric (which means a combined mirror and lens system) sensors using single mirrors with a conical cross section across the optical axis. The goal was to find systems with a single viewpoint. The proposed solution was to use a single parabolic primary mirror. Above the primary mirror, a camera system with an orthographic lens is used to image the parallel rays from the lower mirror. An orthographic lens is a telecentric (centred on the optical axis) lens that images the full side of the object with the line of sight perpendicular to that object. Fig. 1.2 shows several orthographic projections of an object [Spe84]. This type of view is commonly used in satellite imaging of planets (Fig. 1.3).

In the Omnicamera, the lower mirror performs the same purpose as the one we describe later, which is to allow for a single viewpoint. The orthographic lens must be the same size as the lower mirror in order to get the full image. Because of the imaging geometry, the camera must be mounted outside the dome (see Fig. 1.4). This is a disadvantage because the camera must be large to accommodate the orthographic lens size and is therefore easily detectable, damaged or knocked out of alignment due to its position.
In addition, the orthographic (telecentric) lens is expensive and requires a fair amount of calibration to set up with the camera and align with the optical axis.

Figure 1.3: Orthographic View of Ariel (NASA web site)

1.2 The Resolution Problem

In many imaging situations it is desirable to get a higher resolution than is obtained through the standard optics. This is especially the case in wide field of view systems. Every optical system, however, will have a limiting resolution based on the optics or the detector grid spacing and size. Even the most expensive components degrade the resulting imaged view of the real world to some extent. For further resolution improvement, it becomes necessary to choose a different approach than just purchasing the best optics and detectors.

Resolution is an often misunderstood term. The common misinterpretation involves thinking it is the pixel size of an image. A fixed 320x320 picture does not necessarily have its resolution doubled along each dimension when it is resized to 640x640. An image manipulation program simply doubles the pixel size of the original image. There is no new information to be obtained. Resolution of a picture is measured by the bandwidth of the signal from the sensor (lines per mm or otherwise) [Gro86]. If one defines 'resolution' as the number of lines visible in the image divided by the field of view, there are three factors that affect the resolution. First is the sharpness of the optics. Second is the number of pixels in the sensor used to cover the field of view. The third factor affecting resolution is the size of each pixel. The smaller the pixel size, the more ideal the sampling but the lower the light sensitivity of the camera. Resolution as defined in this thesis is measured in lines per millimetre.

Figure 1.4: OMNICAMERA(Tm)
Recovering the actual real world from a single low resolution image is not possible because the optics and sensor will always have a low pass filtering effect on the image. What is needed is to use multiple images in a constructive manner to recover that lost information.

One approach for recovering high resolution is to look at several low-resolution images of the same specimen. If each image is shifted from the others by a known amount, a higher resolution reconstruction can be done. When the amount of shift is not known, it is possible to model the registration mismatch to determine the relationship between images [KBV90, MS88, SPS87].

In summary, all components of the optical system affect the final image and image resolution. The main degradation, however, with today's quality optics, almost always occurs at the detector or sensor [Luk66, Luk67]. This is particularly the case for CCD or CMOS cameras and imaging systems [Nor78]. In this thesis, we will model the detector and use a linear procedure to recover the high resolution by using multiple images.

1.3 Overview of the Thesis

This thesis directly addresses the problems identified in the state of the art by introducing a new optical approach to solving the problems in a hemispheric imaging system. First, in Chapter 2, a hemispheric imaging system is proposed. The issues involved in imaging a complete scene, including angle of view, blindspot and resolution, will be examined. Finally, our proposed system is described as we constructed it.

The theory for obtaining higher resolution from low resolution frames, also known as the super-resolution problem, is described in Chapter 3. This is followed by a test of the theory using standard image-processing test images. Subsequently, in Chapter 4, a method of controlling an optical system to carry out the increased resolution is described.
It involves using a prism system with electronic feedback control through PSD sensors to shift individual image frames. The subsequent shifted images have a known mismatch and can be pieced together in a manner to get improved resolution. The final chapter summarizes the results and briefly describes how the two systems should be integrated for an optimal imaging system.

Chapter 2

Hemispheric Imaging

2.1 Proposed System

The newest systems for hemispheric imaging involve the use of a single mirror with a conic section of revolution. Yamazawa et al. have proposed an omnidirectional image sensor for robot navigation called COPIS (COnic Projection Image Sensor) [Y+95], using a conical sensor sitting on top of the robot. The robot with the sensor attached is shown in Fig. 2.1, and a close-up of another version of the same mirror system, not attached to the robot (and mounted in a different dome), is shown in Fig. 2.2. Another researcher, Shree Nayar, has expanded the COPIS research by studying the family of omnidirectional sensors that use one mirror with a conical cross-section [Nay96]. He found that the best results occur with a parabolic reflector. His final system uses one parabolic reflector with an orthographic lens and camera above the mirror (Fig. 1.4).

We propose a system with two parabolic mirrors, as shown in Fig. 2.3 and Fig. 2.5. It is similar to Nayar's system but does not require a large and expensive orthographic lens.

Figure 2.1: COnic Projection Image Sensor (COPIS)

Figure 2.2: View of COPIS Sensor

Figure 2.3: Two Mirror Hemispheric Imaging System

Figure 2.4: Convex Parabolic Reflector

The first mirror, or primary mirror, in the system is a convex paraboloid. Fig. 2.4 of the primary mirror shows that rays passing through the focus are reflected parallel to the mirror's axis of symmetry.
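This reflection property can be checked numerically. The short 2-D sketch below is our own illustration, not code from the thesis: it assumes the parabolic profile z = h²/(4f) and reflects rays travelling along lines through the focus, confirming that every reflected direction is parallel to the optical axis.

```python
import math

def reflect_off_parabola(h, f):
    """Reflect a ray travelling along the line through the focus (0, f)
    at the surface point (h, z) of the parabola z = h^2 / (4 f).
    Returns the unit direction of the reflected ray in 2-D."""
    z = h * h / (4.0 * f)
    # Unit direction of the incident ray (along focus -> surface point).
    n = math.hypot(h, z - f)
    dx, dz = h / n, (z - f) / n
    # Unit normal of the surface F(h, z) = z - h^2/(4f) = 0.
    nx, nz = -h / (2.0 * f), 1.0
    m = math.hypot(nx, nz)
    nx, nz = nx / m, nz / m
    # Mirror reflection: d' = d - 2 (d . n) n
    dot = dx * nx + dz * nz
    return dx - 2.0 * dot * nx, dz - 2.0 * dot * nz

f1 = 1.02  # primary focal length in cm (value from Table 2.1)
for h in (0.4, 1.0, 2.0 * f1):   # 2*f1 is the mirror rim (horizon ray)
    rx, rz = reflect_off_parabola(h, f1)
    print(round(rx, 9), round(rz, 6))  # rx is 0: reflected ray is axis-parallel
```

For every radius up to the rim, the lateral component of the reflected direction vanishes, which is exactly why a horizon ray aimed at the focus leaves the primary parallel to the axis.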
The diameter of the primary mirror is equal to four times the focal length, allowing points on the horizon to be imaged. The equation for the primary is that of a standard parabola, z₁ = h₁²/(4f₁), where h₁ is the radius √(x₁² + y₁²) from the vertex (origin) of the primary.

The upper or secondary mirror is a concave paraboloid with the same clear aperture (diameter) as the bottom mirror, but with a longer focal length. It has the equation z₂ = h₂²/(4f₂), where h₂ is the radius √(x₂² + y₂²) from the vertex (origin) of the secondary mirror. The parallel rays reflected from the primary mirror are reflected off the secondary and converge at its focus. A pinhole or a small lens located at the vertex of the primary coincides with the secondary mirror's focus. This allows light to pass through the primary mirror. A sensor located under the primary mirror detects the image formed by the mirror system.
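As a quick numerical sanity check on this two-mirror geometry (our own sketch, using the f₁ and f₂ values reported later in Table 2.1; the variable names are ours), the blind-spot angle produced by the secondary mirror can be computed directly:

```python
import math

f1 = 1.02  # primary focal length, cm (Table 2.1)
f2 = 7.24  # secondary focal length = mirror separation, cm (Table 2.1)

# Ray b grazes the rim of the secondary (radius 2*f1). The sag of the
# secondary surface at that radius is dz2 = (2*f1)**2 / (4*f2) = f1**2 / f2.
dz2 = f1 ** 2 / f2

# Blind-spot angle from the optical axis: tan(theta) = 2*f1 / (f1 + f2 - dz2).
theta = math.degrees(math.atan(2.0 * f1 / (f1 + f2 - dz2)))
print(round(theta, 1))  # 14.1, in line with the 14.0 deg quoted in Table 2.1
```

The agreement with the tabulated blindspot angle suggests the equations in the next section hang together consistently.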
A raw C C D or C M O S sensor can be placed below the pinhole i n the primary mirror.  '•;  Perspective images are formed by first choosing the viewing direction and then using appropriate software to decode the simple mapping done by the mirrors. In addition, by putting two of the systems back to back, a complete omnidirectional system is achieved w i t h m i n i m a l blind spots.  2.2  System Parameters  Figure 2.6 shows the entire two mirror system and two rays. In the system, both mirrors have the same clear aperture (defined as the optical diameter of the component).  T h e clear aperture of the primary was chosen so that a ray (ray a i n  the figure), from the horizon (at 90 degrees w i t h respect to the optical axis) would pass through the focus such that it is reflected parallel to the optical axis. T h i s prerequisite requires that the clear aperture of the primary (and subsequently of  Figure 2.6: T w o M i r r o r Hemispheric V i s i o n System  18 the secondary because they share the same clear aperture) be 4 / i . T h e parameters involved w i t h Figure 2.6 are defined as follows:  hbi : fi : 6  :  Zxi  PaS, Pbs '•  6 :  focal length of m i r r o r i  (2-2)  radius of sensor surface  a  s  (2-1)  distance from mirror i i n z-direction to where ray X hits the mir(r2ir3)  p :  f :  radius of ray b on surface i  (2-4)  radius of ray a, b on sensor surface  (2-5)  distance of sensor from P (defined as the sensor focal length)  (2.6)  angle subtended from optical axis by ray b  (2-7)  T h e symbol ' P ' i n the diagram indicates the pinhole at the vertex of the primary. T h e symbol ' S ' i n the diagram shows the location of the sensor ( C C D ) below the primary mirror. T h e figure shows that there is a blind spot where the environment cannot be imaged. This b l i n d spot caused by the upper mirror is centred about the optical axis of the system. 
T h e b l i n d spot diameter, 2 * pbs, can be found by looking at ray b, which was chosen to show how the secondary mirror blocks the imaging. F r o m ray b, the following equation can be written to determine the blind spot diameter on the C C D surface: tanfl=  2  /  (2.8)  1  J1 + J2-  where 6z  2  0Z  2  is the local (with the local mirror apex as the origin) z coordinate of  where ray b grazes mirror 2. This coordinate, 8z , is found from the equation for 2  the parabolic surface of m i r r o r 2. Substituting h — 2 / i , simplifies the equation for 2  the point i n question to 8z  2  =  T h e equation for the b l i n d spot angle (where  19 this angle specifies the ray which creates the b l i n d spot on the C C D ) then becomes: tan0 =  2  ^  ,,,  (2.9)  2  Now consider m i r r o r 1 w i t h the focus as the origin. A n equation for the radius of a point on the mirror w i t h respect to the axis of symmetry (optical axis) is as follows (standard parabolic equation as defined from the focus):  fti = , ^ . sing 1 + cos 6  (2.10) '  2  v  A n approximation of how effectively the C C D surface is used can be found by looking at the radius of the extreme rays a and b on the imaging surface. For ray a the following can be written by using similar triangles: Pas  2/  (2.11)  a  Surfaces 1 and 2 are parabolas. Therefore,  «*> = ^ where f  s  (2-12)  is the distance of the imaging surface to the center of the pinhole or lens  at the apex of surface 1. Using similar triangles, a similar equation can be written for ray b:  Js  h - oz  b2  which simplifies using the parabola equation for surface 2 (z  2  ,  , _ _  r 6S = /  2/1/2  =  to:  . ,  (2 14)  s  20  P r i m a r y Focal Length ( / i )  1.02 c m  Secondary Focal Length ( / 2 )  7.24 c m  P r i m a r y M i r r o r Diameter ( 4 / i )  4.06 c m  Secondary M i r r o r Diameter ( 4 / i )  4.06 c m  M i r r o r Separation ( / 2 )  7.24 c m  Blindspot Angle (0)  14.0 deg  P i x e l Use Fraction ( E q . 
2.15)  0.35  Table 2.1: M i r r o r Specifications Recall that pis is the blind spot radius on the C C D . In one dimension, the number of pixels that are covered can be found from the ratio T ^ v J ^ i r ^ r ^ r m -  ^° f°  r t  w  o  dimensions, the number of pixels that are covered i n a  square C C D of dimension / by / is:  P  (2.15)  where N is the number of pixels. Table 2.1 summarizes the mirror parameters for the system.  2.3  Advantage of Using Parabolic Reflectors  Having a single viewpoint is advantageous because a l l the information from the environment is seen from this reference point. If there were more than one viewpoint, as is the case for C O P I S and fish eye lenses (where each scene.point is imaged as viewed from slightly different positions), then the world representation would require  21  mapping a l l images into a common reference frame. It would also require the depth of points i n the scene to be estimated or determined. This makes omnidirectional mapping difficult. A sensor that can view the world from a single viewpoint allows the construction of perspective images easily. A l l that needs to be done is to look and find the mapping from the sensor to the point that is to be examined a certain distance (also called the focal depth) away from the sensor [Nay 96].  A s shown by Nayar[Nay96], the  easiest mirror curvature to obtain a single viewpoint i n a catadioptric system is the parabola. In our system, the p r i m a r y mirror is also a parabola. In addition, the secondary mirror is a parabola that focuses those rays that are reflected parallel to the optic axis through the aperture i n the top of the primary. T h e mapping is simple because the parallel rays reflected from the p r i m a r y are those rays that are aimed at the focus of the primary. T h e single viewpoint is evident and perspective images can be found by finding the ray intersections of a given perspective plane on the p r i m a r y mirror. 
T h e final hemispheric image obtained from the C C D is i n fact just this but scaled down proportionally by the upper or secondary mirror.  2.4  Limitations  T h e advantages of our system have previously been examined. In summary, they are as follows: a single viewpoint, a reduced blind spot, a hemispheric field of view, no complex lenses, an omnidirectional field of view w i t h two cameras back to back and compact design. T h e limitations come from several considerations. In reality, the C C D has a fast exposure time. T h e pinhole imaging technique requires long exposures. For some applications, however, real-time imaging is a necessity. In this case, the pinhole must  22  be replaced by a small lens. In fact, pinhole C C D cameras on the market have a small pinhole lens i n them. T h e lens is inexpensive and does not degrade the image significantly. For our system, using a pinhole lens w i l l not ruin the fundamental imaging properties of the system as a whole. A l l that needs be done is choose a focal length to image the scene onto the C C D sensor a fixed distance away. Another problem is w i t h the C C D sensor itself. C C D s and C M O S sensors typically have 512x512 or 480x640 pixels. W h e n the mapping to perspective images is done, the number of pixels is reduced below this number.  So to achieve similar image  quality on the perspective images as conventional cameras, it is necessary to have a C C D w i t h more pixels. This l i m i t a t i o n i n C C D resolution also hampers the quality of images obtained from the O m n i c a m e r a and C O P I S mentioned earlier.  2.5  Mapping Hemispheric Image into a Perspective Image  One of the benefits of having a hemispheric image is that perspective images can be formed i n any direction. T h e image is a compact storage of an entire scene. T h e perspective image can be formed by specifying the viewing direction, the focal length and the desired image size. 
Once the above parameters are set, it becomes easy to produce the desired perspective image. First, the center of the omnidirectional image must be known or determined. After this is known, the radius of the omnidirectional image is found i n pixels. Using the desired viewing direction, focal depth (zoom distance away from the viewpoint) and image size, the hemispheric image is stepped across i n increments and mapped to a new perspective grid. T h e resulting data is interpolated to  23 form a final digital image. A brief description follows:  • F i n d the radius and the center of the hemispheric view • Choose the number of steps to cross the image • Choose the image size and location (angle of view) as well as the focal depth • Define a m a t r i x of X and Y co-ordinates on the perspective image • M a p the perspective image points from the hemispheric image using the parabola equations and the inputs given above • Interpolate the points (from the previous step) to the corresponding ones on the hemispheric image to form a perspective image  2.6  Results  Using a standard stock lens, a hemispheric scene was imaged. T h e image is shown i n F i g . 2.7. T h e results would be better w i t h a better C C D camera w i t h more pixels (such as the recently released M E G A - p i x e l cameras w i t h 16 times as many pixels). In addition, the fraction of pixels used on the C C D (see E q . 2.15 derived earlier) was only 0.35. T h e camera could not be brought close enough inside the p r i m a r y mirror because there were obstructions on the camera board thereby leaving m u c h of the C C D surface unused. A custom C C D would have been required to fit the mirror system properly. U s i n g the interpolation algorithm, perspective images were obtained w i t h a software unmapping. These are shown i n F i g . 2.8 and F i g . 2.9. A n y direction can be chosen  Figure 2.9: Perspective Image found by Deconvolving F i g . 2.7  25  w i t h similar results. 
Near the lower extremities, a larger scene area is mapped to the fixed C C D space. T h e pipes on the ceiling i n F i g . 2.9 are straight as expected, but they are not very clear. T h i s is because of the lack of resolution and the unfinished calibration of the system. T h e lens d i d not map the hemispheric image onto the whole C C D surface. In fact, much of the C C D surface was wasted. T h e results do show the apparatus works and that better results would be obtainable by using a better C C D and having a pinhole lens w i t h a focal length matched to the system so less of the C C D surface was left unused.  2.6.1  Computing Requirements  T h e unmapping was done using the M a t l a b ( T m ) m a t r i x manipulation program. Performing an unmapping to a 70 by 128 pixel image took 544047 floating point operations (flops) on a S P A R C 5 w i t h a 8 5 M H z microsparc II processor.  This  corresponded to a C P U time of 18 seconds. For a m by n image, the computing power order of magnitude i n flops is O(60mn).  Using the data above, a 640 by 480  picture would take about 1.9E7 flops. W i t h a fast enough processor and a good compiler, real-time unmapping may be possible at 640 by 480 resolution.  2.7  Conclusion  A l t h o u g h double parabolic systems have been used extensively i n astronomical telescopes, they have yet to be used for wide angle imaging. We have proposed a lenless or pinhole lens imaging system for hemispheric imaging. It can be easily manufactured using two inexpensive plastic mirrors and a single image sensor. T h e design  26  is much less cumbersome than Nayar's, yet it still provides similar image quality for a C C D w i t h the same number of pixels, and a single viewpoint for distortion free perspective images. A n omnidirectional (360 degrees i n every direction) system is possible by putting two of our systems back to back, but this would require two sensors. 
The main application of our system would be for security and videoconferencing. The system would be fully integrated and would not require any external mounting of cameras. A preliminary sketch of the system without opto-mechanical components is shown in Fig. 2.3.

Chapter 3

Super-Resolution

3.1 Theory

This chapter deals with the problem of resolution enhancement. In the previous chapter, a hemispheric imaging system was described for use with a CCD sensor. Its main drawback is the limited number of pixels on the sensor, which means less information with which to create the perspective images a person would normally see with their own eyesight. The chapter starts with the simplest case, a one-dimensional analysis. The more general problem in 2D is then discussed and a simulation performed using standard test images.

3.1.1 Approach

For our resolution simulations, we will assume knowledge of the high-resolution image. The degradation of the image by the sensor point spread function (PSF) will also be known or approximated. Restoration of images without knowledge of the imaging procedure has been well studied in the literature[MS88, KBV90]. High-resolution reconstruction differs from restoration in that a higher final sampling rate and higher detail can be achieved.

Figure 3.1: Ideal pixel PSF array with non-zero separation

For the purpose of our analysis, we will assume that the image incident on the sensor array is continuous. The sensors used for imaging have a point spread function (PSF) with a much larger magnitude than a single object point source. A typical CCD sensor has pixels separated by 13 to 20 μm. Each pixel has a PSF of very small but non-zero width. We will assume that the resolution of the picture is limited only by the PSF of the detectors. This is a reasonable assumption for CCD cameras, as outlined by Gross[Gro86].
3.1.2 One Dimensional Analysis

A linear sensor with ideal pixels is shown in Fig. 3.1. An actual sensor will have pixels with a non-zero point spread function. The non-zero-width PSF of an individual pixel is important to consider because it imposes the final limit on the resolution improvement; the resolution cannot be improved by more than this PSF[JR84].

Consider a linear image, y, formed on a pixel array. It will be assumed that each pixel has an identical PSF, h, and that the pixels are separated by Δ. The actual object, x, is a continuous function. The image can then be written as:

    y(z) = Σ_{n=0}^{M−1} h(z) * x(z)δ(z − nΔ) + v(nΔ)    (3.1)

where M is the number of pixels, * is the convolution operator, and v is the observation noise. Fig. 3.2 shows what the spatial signals of the sensor and the image might look like. Notice the large space between the pixel PSFs. Without this space, there could be no resolution improvement with the method we describe later. Luckily, this is a valid assumption, as most CCDs have large separations between relatively sharp pixel PSFs[Gro86].

With just a single image, it is not possible to recover the high-resolution content of the original scene. If several images are taken, and the displacement of these images relative to each other is known, then it is possible to recover a higher resolution. Because the individual detectors are non-ideal low-pass filters, the high-frequency information is contained in the original image in the form of aliasing. If the individual detectors were perfect low-pass filters, then it would not be possible to recover the lost details. Consider now

    y_i(z) = Σ_{n=0}^{M−1} h(z − a_i) * x(z)δ(z − nΔ) + v(nΔ)    (3.2)

where y_i is the image frame at a displacement of a_i relative to the base frame (a_0 = 0).
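A small numerical sketch of Eq. 3.2 (illustrative only, with an arbitrary two-tap PSF): each detector sees the scene through its PSF, and frame i keeps every Δ-th sample starting at its own sub-pixel offset a_i.

```python
import numpy as np

def sense_1d(x_fine, psf, delta, a_i=0):
    # blur the fine-grained scene with the detector PSF (causal
    # alignment), then sample every `delta` sites at offset a_i
    blurred = np.convolve(x_fine, psf)[: len(x_fine)]
    return blurred[a_i::delta]
```

Two frames with a_i = 0 and a_i = 1 see interleaved, aliased views of the same scene, which is exactly the redundancy the reconstruction below exploits.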
Suppose that each frame is displaced from the others by a fraction of the pixel size of the low-resolution frame. If there are L distinct frames taken at different displacements, but with each a_i less than the pixel size of y (i.e., falling between the low-res pixels), then it is possible to achieve a 1-D resolution improvement by a factor of L.

An example of the above follows. Figure 3.3 shows (in a simplified manner) how y_1 averages the high-resolution pixel sequence x.

Figure 3.2: Spatial Signals of the Sensor and Image — (a) Single Detector Response; (b) Sensor Array Impulse Response; (c) Image Signal; (d) Local Response of Sensor to Image

Figure 3.3: Averaging of a higher resolution image sequence

Let y_1 = [y_1(0), y_1(1), y_1(2), y_1(3)], where y_1(0) = (x(0) + x(1))/2, y_1(1) = (x(2) + x(3))/2, y_1(2) = (x(4) + x(5))/2, and y_1(3) = (x(6) + x(7))/2. Now consider that the sensor frame is shifted by a_2 = Δ, where Δ is the spacing between the pixels of x. We then have y_2 = [y_2(0), y_2(1), y_2(2), y_2(3)], where y_2(0) = (x(1) + x(2))/2, y_2(1) = (x(3) + x(4))/2, y_2(2) = (x(5) + x(6))/2, and y_2(3) = (x(7) + x(8))/2. If and only if x(8) is known to be zero, the above gives a solvable set of equations for the high-resolution image. Otherwise, shifting the sensor will result in extraneous information being mapped onto the edge pixels, thus corrupting the procedure. If the images y_i had larger pixels, then more images would be required to recover the resolution.

An example of the process for 1-D reconstruction with known shifts follows, using an image at the higher level, x = [2, 1, 2, 0, 3, 4, 1, 2], finite in extent such that x(n) = 0 for n < 0 and n > 7.
The low-resolution images y_1 and y_2 are obtained as above, but with the second image y_2 shifted by one half pixel to the right with respect to y_1. The result is y_1 = ½[2, 3, 3, 5] and y_2 = ½[3, 2, 7, 3]. Meshing the series onto a larger grid yields y_mesh = [y_1(0), y_2(0), y_1(1), y_2(1), y_1(2), y_2(2), y_1(3), y_2(3)] = ½[2, 3, 3, 2, 3, 7, 5, 3]. The same result, y_mesh, could be obtained with the simple z-transform filter H_mesh(z) = (1 + z^{−1})/2. The inverse filter is then simply 2/(1 + z^{−1}), which in the space domain gives a recursive relationship for x(n): x(n) = 2 y_mesh(n) − x(n − 1). Using x(−1) = 0, the original series x(n) can be obtained perfectly. In practice, however, the actual image cannot have perfectly sharp boundaries; in other words, x(n) would be infinite in extent, and this would cause some error when the above method is used. Interpolation is therefore used for real images, as described later.

3.1.3 Image Formation

Imaging always involves the application of non-ideal sensors that degrade the optimal obtainable image. The narrower the PSF of the optics and sensor system, the better the resolution that can be achieved. The degradation of the ideal image x(j, k) by a digital imaging system can be modeled as follows:

    y_i(j, k) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} x(m, n) h(j − m, k − n) + n(j, k)    (3.3)

where h(j, k) is the degradation from the imaging process, n(j, k) the additive noise, M the number of pixels in the j (horizontal) direction, N the number of pixels in the k (vertical) direction, and y_i(j, k) the i-th observed low-resolution image. Equation 3.3 can be rewritten in vector-matrix form as:

    y = Hx + n    (3.4)

Ideally, the degradation should be minimized by using the best possible optics and sensors. Unfortunately, real detectors can only achieve a given PSF and become very expensive when they are of high quality.
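The 1-D worked example from the previous subsection can be checked numerically. This is a minimal sketch of the meshing and recursive inversion (Python here, rather than the Matlab used elsewhere in the thesis):

```python
import numpy as np

x = np.array([2., 1., 2., 0., 3., 4., 1., 2.])   # high-resolution scene
xp = np.concatenate(([0.], x, [0.]))             # pad with x(-1) = x(8) = 0

# two low-resolution frames, the second shifted half a pixel right
y1 = 0.5 * (xp[0:8:2] + xp[1:9:2])               # -> (1/2)[2, 3, 3, 5]
y2 = 0.5 * (xp[1:9:2] + xp[2:10:2])              # -> (1/2)[3, 2, 7, 3]

# mesh onto the fine grid, then invert with x(n) = 2*y_mesh(n) - x(n-1)
y_mesh = np.empty(8)
y_mesh[0::2], y_mesh[1::2] = y1, y2
x_rec, prev = np.empty(8), 0.0                   # x(-1) = 0 seeds the recursion
for n in range(8):
    x_rec[n] = 2.0 * y_mesh[n] - prev
    prev = x_rec[n]
print(x_rec)                                     # recovers x exactly
```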
As mentioned earlier, we assume that the sensor PSF is the primary limit on the resolution. Our goal is to recover the high-resolution image x(j, k) (by upsampling the image and increasing the maximum represented spatial frequency) from several low-resolution images. The following summarizes the issues in the imaging system:

• Sensor PSF: blurs the image
• Undersampling: results in aliasing errors
• Noise

3.1.4 Methods for Reconstruction

Reconstruction is possible in both the spatial and frequency domains. Currently, the preferred technique is in the spatial domain[ST90], because lower noise and higher resolution have been achieved spatially. Spatial domain techniques include projection onto convex sets (POCS)[ST90], tomographic backprojection[IP91], interpolation, and minimization of error[TG94]. The frequency domain approach involves spectrum cancellation and the elimination of aliasing artifacts[KS93]. The frequency domain method is currently only used when there is a registration problem.

The reconstruction problem has three parts. The first is image registration: the amount of shift between the low-resolution frames must be determined to some accuracy. The second is the restoration and filtering of the frames to compensate for degradation and noise. Finally, the high-resolution image is reconstructed, or pieced together, using interpolation or other techniques.

Algorithm for high-resolution reconstruction of 2-D images:

1. Registration: determine how much each frame is shifted from the others
2. Restoration or Filtering: remove the degradation and noise from each low-resolution frame
3. Reconstruction or Interpolation: use the results of the above to piece together a high-resolution image

It turns out that the above three steps are interdependent.
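When the shifts are known a priori (as in our controlled system), step 1 is trivial and step 3 reduces to interleaving. A minimal 2-D sketch of that meshing step follows (restoration is omitted, and the function name is ours):

```python
import numpy as np

def mesh_frames(frames, shifts, factor=2):
    # interleave L low-resolution frames, taken at known integer
    # sub-pixel offsets, onto a grid `factor` times finer in each axis
    K, L = frames[0].shape
    hi = np.zeros((K * factor, L * factor))
    for frame, (dy, dx) in zip(frames, shifts):
        hi[dy::factor, dx::factor] = frame
    return hi
```

Restoration (step 2, e.g. a Wiener filter) would be applied to each frame before meshing, and interpolation after it.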
There have been attempts to combine all three steps in a global procedure, but so far only steps 1,2 and steps 2,3 have been combined (in separate instances) successfully[KBV90, TG94] (in the spatial domain).

3.1.5 Registration Problem

Known Shifts

In the ideal case, we would know the exact amount by which each low-resolution image is shifted with respect to the reference frame. This is only possible when the imaging process is directly controllable. When the sub-pixel shifts are random, the problem is more difficult and becomes one of registration. Only interframe translations of the low-resolution images will be considered. Rapid-movement imaging systems, such as aircraft radar, will have to treat the case of translations with frames not spaced at sub-pixel amounts.

Based on the assumption that resolution depends only on the number of pixels, an argument can be made to determine the improvement in resolution. With control over the sub-pixel shifts, it is possible to achieve a resolution improvement of √L, where L is the number of low-resolution frames[TOS92]. This is derived as follows. Let the low-resolution images have a resolution of K x K, and the high-resolution scene a resolution of N x N (M = N). Each low-resolution frame then provides K x K equations, and the individual pixels of the high-resolution scene provide N x N unknowns, so the total number of equations that can be formed is K x K x L. In order to have the same number of equations as unknowns, it is necessary that N = √L x K. In an ideal world, if the pixels were ideal samplers and there were no low-pass filtering by the optics, the maximum theoretical resolution improvement would be the square root of the number of distinct sub-pixel frames[TOS92].

The above argument may lead some to believe that an infinite resolution improvement is possible.
Unfortunately, this is not the case, because of the zero-crossings in the sensor frequency response. In other words, real pixels have a non-zero point spread function (see Fig. 3.2). The shifted images must be separated by at least the width of the PSF in the spatial domain or there will be overlap. Some high-frequency information will inevitably be lost as a result of the smoothing effect of the sensor response. Noise, as usual, aggravates the problem.

Unknown Image Displacements

When the shifts are unknown, there is a registration problem that must be solved before a high-resolution image can be reconstructed. The formation of the low-resolution images through the system is considered as before. The degradation is usually assumed to be a low-pass blurring or averaging function. Alternatively, the images are modelled by a Gauss-Markov covariance model.

If the original image were known, it would be possible to use matched filtering to determine the best correlation for a given shift. In fact, even though the original scene is not known, matching and correlation can still be used. The high-resolution image is estimated and the shifts varied to produce the best possible reconstruction. The original image, x(j, k), is not known with certainty, however, so several iterations must be performed with different estimates based on the samples y_i. Mort and Srinath suggested a maximum-likelihood image registration algorithm[MS88] that requires no a priori knowledge of x(j, k).

3.1.6 Restoration, Inverse Filtering

Once the low-resolution frames have been sampled and their relative shifts determined, it is necessary to try to remove some of the noise through inverse filtering and restoration techniques. The problem with most inverse filters is that they are unstable and not physically realizable. Pseudoinverse filters attempt to solve this problem by setting the frequency response to zero at ill-posed points.
The pseudoinverse filter, however, remains highly sensitive to noise. The preferred technique is Wiener filtering, because it removes the noise and some of the blur caused by the low-pass filtering.

When there is no registration problem, the restoration procedure is straightforward and lends itself well to conventional inverse filtering techniques. If there is a registration problem, it is preferable to combine the registration and inverse filtering techniques using an Expectation-Maximization (EM) algorithm[TG94, TK94]. In the EM algorithm, the degradation process is modelled as in Eq. 3.3 together with an image covariance model[Jai88]. An initial guess is taken for the shift of each low-resolution frame. Each shift is then incorporated into the covariance model. The resulting modified probability density function of the observed image is then maximized as the shifts are iteratively estimated. Another, similar iterative reconstruction technique for nonuniformly spaced samples is proposed by Sauer and Allebach[SA87].

3.1.7 High Resolution Reconstruction

After the optimal shift estimation (if the shifts are unknown) and restoration have been performed, the final step is to piece the images together. Ideally, a global procedure would be developed to perform all of the above while taking the interdependences into account. If the shifts of the low-resolution images and a priori knowledge of the final image are known, a recursive approach combining restoration and reconstruction may be the best solution[KBV90]. Otherwise, one procedure is to interpolate based on the shifts and the amount of noise in the images. Interpolation can be done using any of the available techniques. Linear interpolation may be the best option, but the choice ultimately depends on the relative shift of each frame. Other interpolation methods include such novel techniques as fractals and more conventional ones involving polynomials.
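As a concrete, if simplified, illustration: linear interpolation of the nonuniform samples contributed by two half-pixel-shifted 1-D frames onto a finer uniform grid (toy numbers of our own):

```python
import numpy as np

# merged sample positions and values from two frames offset by half
# a low-resolution pixel, interpolated onto a finer uniform grid
z = np.array([0.0, 0.5, 1.0, 1.5])     # merged sample positions
v = np.array([2.0, 1.0, 3.0, 2.0])     # merged sample values
fine = np.arange(0.0, 1.51, 0.25)      # target high-resolution grid
print(np.interp(fine, z, v))
```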
The reconstruction can be seen as the problem of solving the set of simultaneous equations formulated by Eq. 3.3 for each frame. The equations are linearly independent if the displacements between frames are at sub-pixel amounts. Since the equations are coupled with each other, the solution can be simplified by either tomographic backprojection[IP90] or projection onto convex sets[SO89]. Additional constraints (such as finite energy and limited support) have been added to the above methods to account for observation noise[TOS92].

Figure 3.4: Original 256x256x8bit Lena Test Image

3.2 Simulation

3.2.1 Reconstruction with Known Shifts

In order to demonstrate how the resolution recovery procedure is achieved, a simulation was performed. The Lena test image (Fig. 3.4), with 256x256 pixels and 8 bits per pixel, was used. The Lena image is a standard reference in image processing. Low-resolution samples were taken from the picture. These samples were then manipulated in an attempt to recreate the high-resolution image. The Lena test image was originally taken with a still camera onto conventional film; the image has since been digitized and standardized for use in image processing tests.

The above test image (Fig. 3.4) is the reference image, i. It is impossible to recover this image perfectly because of quantization effects and added noise. We consider the model of the sensor shown in Fig. 3.2, where there is space between the pixels in which no imaging occurs. A model of the simulation process is shown in Fig. 3.5. Image A is the original image (i), B is the test image after low-pass filtering (y_i), and C is the ensemble of low-resolution images shifted with respect to each other. Filter H_2 (G1) performs the low-pass filtering to obtain a lower-resolution image, and (G2), (G3), and (G4) perform the operations to get the low-resolution images.
To obtain the low-resolution images, a simple moving-average (MA) filter (see H_2 below) is applied to the known high-resolution image. The filter is essentially a low-pass filter that blurs the image onto larger pixels. The MA filter averages the adjacent pixels in a square neighbourhood around each of the high-resolution pixels in question. One such filter is as follows:

    H_1 = (1/16) [ 1 1 1 1
                   1 1 1 1
                   1 1 1 1
                   1 1 1 1 ]    (3.5)

For our simulation, however, a weighted moving-average (WMA) filter (H_2) gives the closest neighbour pixels more significance. This more accurately reflects the situation on a sensor array: the individual sensors or pixels don't have a flat point spread function in the space domain but rather one that is weighted about its center[Gro86]. The WMA filter is as follows:

    H_2 = (1/36) [ 1 2 2 1
                   2 4 4 2
                   2 4 4 2
                   1 2 2 1 ]    (3.6)

Figure 3.5: Simulation Model

The L low-resolution images are formed by subsampling the result of applying the above filter to the high-resolution image. If the high-resolution image is N x N, each low-resolution image is K x K, where K = N/√L. In our simulation, N = 256 and L = 16, such that K = 64 and the low-resolution images have pixels √L = 4 times as large. One of the low-resolution frames is shown in Figure 3.6.
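The frame-generation step can be sketched as follows. Since H_2 is the separable outer product of w = [1, 2, 2, 1] with itself (divided by 36), each low-resolution pixel is a weighted sum over a 4x4 block (illustrative code; the optional roll models a sub-pixel shift of the scene):

```python
import numpy as np

W = np.array([1.0, 2.0, 2.0, 1.0])     # H2 = outer(W, W) / 36

def low_res_frame(x, shift=(0, 0), factor=4):
    # shift the scene, then reduce each 4x4 block with the WMA weights
    x = np.roll(x, shift, axis=(0, 1))
    K, L = x.shape[0] // factor, x.shape[1] // factor
    y = np.empty((K, L))
    for i in range(K):
        for j in range(L):
            block = x[i*factor:(i+1)*factor, j*factor:(j+1)*factor]
            y[i, j] = W @ block @ W / 36.0
    return y
```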
Figure 3.6: Low Resolution 64x64x8bit Lena Image Frame (obtained by applying H2 to Fig. 1)

In order to complete the simulated process of how the sensor array misses certain parts of the image (see Fig. 3.2), additional subsampling operations (G2), (G3), and (G4) are applied to get the final low-resolution images. These low-res images are considered the highest-resolution images that our imaging system can achieve on its own. From here we work backwards to recover the resolution.

Figure 3.7: High Resolution Reconstructed Image (no deblurring applied); correlation with original: 0.958, mean square error: 3505.979

With the low-resolution images created, the high-resolution image i (A in Fig. 3.5) can be artificially "reconstructed" by merging the low-resolution images (y1, y2, y3, y4 in Fig. 3.5). Interpolation is used to improve the results, as described earlier[Gro86]. Figure 3.7 shows the reconstructed image using a bi-linear interpolation method after two-dimensional filtering of the image with H_2 and meshing the images onto a higher grid. As expected, the image appears blurred because, in essence, it is just the low-pass version of the reference image i. The example we have given is trivial, but it demonstrates how some high-frequency content is lost and just how dramatic the improvement in resolution can be from Fig. 3.6 to Fig. 3.7. A conventional deblurring restoration filter (such as a Wiener filter) could be applied to the image.

An alternative method for creating low-resolution shifted frames is suggested in
T h e advantage of this technique is that an arbitrarily large number of interframe samples can be obtained by increasing the number of points i n the D F T . Since the low-resolution blurring function, i/2, is a low-pass filter, some of the high frequency information is lost. This means that the reconstruction problem cannot be uniquely determined and m i n i m i z a t i o n of error techniques converge to several solutions. O u r M A filters, H\ and i f , have a non-uniqueness problem because they 2  are singular matrices. N o inverse filter exists (because of the singularity), a fact that mandates the use m u l t i p l e images shifted from each other to recover the resolution. T h e above simulation was then repeated but w i t h white Gaussian noise added to each low-resolution image. T h e noise was then removed using an appropriate Wiener adaptive filter and the high-resolution image reconstructed by interpolation. T h e resulting image is shown i n Figure 3.8. T h e high-resolution information is visible but the image suffers from greater Gaussian blur. T h e result is worse because of the added noise.  Registration Problem In our simulation above, we directly controlled what the sub-pixel shifts were. Suppose, however, that the shifts between frames were not known. W i t h knowledge or an estimate of the ideal image, a correlation analysis can be performed to find the most likely shift. T w o of the low resolution images are expanded by zero padding between samples to the size of the high-resolution original image. T h e two-dimensional correlation  44  Figure 3.8: Wiener Filtered 256x256x8bit Lena Test Image (from Noisy Frames)  45  x10  7  Figure 3.9: Image Correlation between Frames  of each image w i t h the original is then performed and plotted. T h e shift is found from the m a x i m u m of the correlation. T h e results obtained were consistent w i t h the actual shifts given to the two frames.  
A plot of the correlation between the tenth low-resolution zero-padded image (shifted 2 pixels in the j direction, 1 pixel in the k direction) and the original image is shown in Figure 3.9. The maximum was found at (254, 255), shifted (−2, −1) from the reference frame at (256, 256), which agrees with the shift initially given to it. This simple example shows one way in which image registration can be achieved.

3.3 Final Remarks on the Super-resolution Problem

When there are multiple low-resolution images at sub-pixel displacements, it becomes possible to upsample and recover higher-resolution information. The task becomes much easier with more information about the imaging process and the ideal image. It is also highly desirable to know the exact shift of each frame and to make the shifts at sub-pixel levels. In aircraft radar and other difficult-to-control imaging systems, this is not always possible. If, however, a device existed to control the imaging shifts exactly in a controlled manner, high-resolution reconstruction could be achieved without much effort. The whole process is optimized by combining the restoration and interpolation steps when there is no registration problem.

It is clear from the procedure that the algorithm works best when there is much blank space in between the pixel PSFs in the sensor array. This does not mean that the pixels on the CCD should be spaced further apart (because this would destroy the response of the CCD on its own), but rather that the individual detectors (pixels) should have a narrow PSF in the spatial domain. Conventional high-pass filtering or deblurring techniques should be applied to the final blurred high-resolution reconstruction. The resulting image resolution can thus surpass that possible with the best available sensors and optics. Further, cheaper optics and sensors can be used to produce higher-resolution images by taking multiple frames.
The main disadvantage is the longer processing time.

Chapter 4

A Sub-Pixel Resolution System

Achieving sub-pixel displacement can be done in one of two ways. The first method is to optically shift the image using a movable refracting element as part of the optics. This can be achieved simply using a prism, an etalon, or even a single refracting plate. The second method involves moving the sensor itself: the CCD could be mounted on a plate, which in turn could be displaced with actuators. The decision of which method to use is based on how easy it is to construct the actuators and how easy it is to set up the position sensing devices.

4.1 Description of the VAP

A method of obtaining higher-resolution images from video frames is outlined below. The method involves a CANON ES-750 camera with optical image stabilization. The image stabilization is done by driving a variable-angle prism based on a vibration input and the current prism angle. The prism itself is used to deflect the image on the CCD surface. Our goal was to use the vari-angle prism (VAP) to control the separation between several images and obtain a higher resolution from this.

Figure 4.1: Complete VAP Assembly (US 5481394)

The VAP was chosen (instead of an etalon or moving the CCD) to achieve the resolution shifts because it was a readily available device with pre-made actuators and position sensing. The goal was to expand the functional use of inexpensive camcorders equipped with means for image stabilization, without going through the laborious process of building a system from the ground up. The mechanical structure that we use to deflect the images at sub-pixel amounts is the Variable Angle Prism shown in Fig. 4.1.
The important components are summarized below:

31 Front VAP Surface
37 O-ring
33 Back VAP Surface
46a Permanent Magnets
47a,48a Magnet Casing
45a Voice Coils
43a,b Pitch Axis, Yaw Axis
41a Infrared Diode (IRED)
52a Position Sensing Device (PSD)
39a Window Slit
38a,b Front and Back Plates (for moving the VAP)

Items 42a,b, 49 and 50 are used to hold the VAP in a static position when the locking mechanism is on. Item 40a is simply an arm attached to the front plate. Please see US patent 5481394 and US patent 5623305, which describe the VAP as used in a 'Camera Shake Correcting Apparatus Having Damping Coils for Yaw and Pitch'.

Figure 4.2: Front and Rear Plates of VAP (US pat. 5481394)

The Vari-Angle Prism is formed from two glass surfaces, separated and filled by a silicone rubber liquid with index of refraction n. It is essentially a prism. The front plate moves about a horizontal axis such that the image is shifted vertically on the CCD. The rear glass surface is confined to movements about a vertical axis, which allows for horizontal image displacement. The pitch angle (θ_p) of the front surface (31) can be changed, while the rear surface (33) controls the yaw (θ_y), as shown in Fig. 4.2.

The prism has the function of bending light rays: the light is refracted through the prism to the other side. The following equations can be derived with reference to Fig. 4.3 using Snell's law of refraction and simple geometry:

    sin α_1 = n sin α_2      (4.1)
    sin α_1′ = n sin α_2′    (4.2)
    α_2 + α_2′ = σ           (4.3)
    ε = α_1 + α_1′ − σ       (4.4)

where ε is the angle between the incident and outgoing light rays, n is the index of refraction, and σ is the prism apex angle.

Figure 4.3: Large Prism
Figure 4.4: One Dimensional Structural View of VAP

When the variable apex angle is small, as in Fig.
4.4, the equations above can be simplified to:

    ε = (n − 1)σ    (4.5)

The deflection of an incoming ray is related to the index of refraction, n, of the prism and to the prism apex angle. Equation 4.5 shows that the position of an image on the final CCD screen can be moved about by moving the prism. The VAP itself is physically circular, and the geometry becomes more complicated when both sides of the VAP (31 and 33 in Fig. 4.2) are moved at the same time, as shown in Fig. 4.5. Suppose that the rear plate is rotated by θ_p and the front plate by θ_y to produce a movement φ. The inclination with respect to the angle φ for the rear plate is given by θ′_1. From the geometry, the following can be written for θ′_1:

    tan θ′_1 = sin φ tan θ_p    (4.6)

Similarly, the following can be written for the inclination with respect to φ for the front plate:

    tan θ′_2 = cos φ tan θ_y    (4.7)

Figure 4.5: Geometry of VAP movement (US pat. 5481394)

The prism apex angle, σ, is the sum of the two angles above:

    σ = θ′_1 + θ′_2    (4.8)

Therefore σ is larger than both θ_p and θ_y. If we now define θ_y,max and θ_p,max as the maximum angles that the VAP can be rotated in the respective yaw and pitch directions, then it is clear that for oblique movements the total prism angle σ can exceed the maximum allowable prism angle σ_max, such that the image overshoots the CCD. The solution is to restrict the movements in each direction, as shown in Fig. 4.6 and in the following equation:

    θ_p² + θ_y² ≤ C    (4.9)

Figure 4.6: Restriction of VAP movement (US pat. 5481394)

The VAP has two degrees of freedom, allowing movement in two dimensions, which is all the movement with physical meaning for an image. All that remains now is control of the VAP. This is described in the next section.
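Equations 4.5 to 4.8 can be combined into a small deflection model. This is a sketch only: the refractive index n = 1.5 and focal length f = 8 mm below are illustrative values, not the ES-750's actual parameters.

```python
import math

def ccd_shift_mm(theta_p, theta_y, phi, n=1.5, f_mm=8.0):
    # eqs. (4.6)-(4.8): apex angle from the two plate rotations
    t1 = math.atan(math.sin(phi) * math.tan(theta_p))
    t2 = math.atan(math.cos(phi) * math.tan(theta_y))
    sigma = t1 + t2
    eps = (n - 1.0) * sigma          # eq. (4.5), small-angle deflection
    return f_mm * math.tan(eps)      # displacement on the CCD plane
```

Under these illustrative values, a half-pixel shift on a sensor with a 15 μm pitch (about 7.5e-3 mm) corresponds to a plate rotation on the order of a milliradian.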
Figure 4.7: VAP actuator unit (US 5481394)

Actuation of the prism surfaces is done through magnetic voice-coils; it is essentially a linear DC motor system. The voice-coils are driven by an external voltage source and move within the permanent magnet stators. Each actuator has a characteristic winding resistance, R [V/A]. The actuator coils are wound with pancake wire. Fig. 4.7 and Fig. 4.8 show the actuator unit (with attached photosensors, described later). Numeral 42 in Fig. 4.7 is the actuator winding coil. Numeral 43 on the same figure is an identical coil that will be discussed later. The labels are summarized below:

6 or 15 Coil Casing (see Fig. 4.14)
7a,b Actuator Mounts
7e,f Hole for Stem 53 or 54
26 or 32 Coil Casing (see Fig. 4.14)

Figure 4.8: Side View of VAP Actuator (US 5623305)

40 Stator
41 Actuator Casing
41b Slit Window
44 Infrared Emitting Diode (IRED)
45 Position Sensing Device (PSD)
53 or 54 Stem Connecting Actuator to VAP Frame
55 VAP Frame

For position sensing, each side has an angle sensor formed by an infrared emitting diode (IRED) and a photo-sensor (PSD). Fig. 4.7 shows the position sensing in detail (44, 41b, 45). The IRED (44) and PSD (45) combination is fixed in place, while a small window slit (41b) on the chassis moves with the VAP outer surfaces. This moves the dot of light across the photo-sensor, allowing the pitch and yaw angles to be measured.

We used the instrumentation amplifier circuit shown in Fig. 4.9 to amplify the PSD output. INA118 is an instrumentation amplifier (amplifying the small differential signal from the VAP PSD) set to a gain of 100 through the 560 ohm resistor. OPA27 is just a buffer amplifier.
The circuit attached to the non-inverting terminal of OPA27 is a trimming circuit (made more stable by the current references (REF200)) used to tune the instrumentation amp INA118 through the buffer amp (OPA27). It was important to use shielded twisted-pair wire to prevent noise from being introduced. Va denotes the voltage fed to the control circuit described later in Fig. 4.15.

4.2 Applied Super-Resolution

Achieving super-resolution in practice is a non-trivial problem. In all real world cases, the actual representation of an image is unknown. For moving camera systems, such as aircraft surveillance, controlling the shift between image frames is next to impossible, so the full registration problem must be considered. Our goal is to show a system that can take several images shifted by a known and repeatable amount and use these images to recover information that was lost in the optical process. In practice, determining the sub-pixel shift is a difficult problem. A test target must be carefully chosen to calibrate the system properly, and an appropriate method for locating the target must be outlined. With the above accomplished, a mapping from system position to relative shift can be made.

Figure 4.9: Our Front End Differential Position Amplifier

Figure 4.10: VAP Deflection Angle versus Drive Voltage (US 5623305)

4.2.1 Centroid Calibration

In order to calibrate the system, a good test target is required that can be distinguished from the environment. Issues such as external lighting, camera automatic gain control and external vibrations make consistent determination of the centroid a problem. In order to minimize the variations in the overhead light, it was found that a white target on black paper worked best.
In addition, the camera automatic gain function adjusted the contrast less with this configuration than for a black target on white paper. The target itself was a 1cm by 1cm square centered in the camera field of view. The camera zoom was set to the widest angle, because using different zooms changes the sensitivity to movement of the VAP. In fully zoomed mode, small movements of the VAP create larger displacements of the centroid across pixels, because the target takes up more relative area on the sensor, which in turn translates to a larger area on the screen. Because of the significant changes in zoom and VAP temperature between test conditions, it was found that calibration was necessary before every test.

Two approaches were used to find the centroid. The first was an iterative approach that examines the outer neighbourhood of a suspected spot. The second was to find the centroid of a specified small area of pixels. The difficulty with the first approach lies in the threshold value for a detected spot. On the border, the contrast effect of the camera automatic gain control causes the pixels to be lighter than the background, so using a fixed threshold in an iterative algorithm throws out information at the border. The automatic gain control of the camera causes unwanted effects no matter which algorithm is used, but the local-area approach is less sensitive to it. The best solution is to use a larger dot so that the boundary carries less weight.

Lighting problems also hamper consistent determination of the centroid. As the brightness and direction of the overhead light shift, the centroid can move about; for the sub-pixel work we are concerned with, this can cause problems. Because of the above problems of lighting and camera automatic gain control (AGC), as well as various vibration noise, a statistical approach was at first used to find the centroid.
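The second, local-area approach amounts to an intensity-weighted average over a small window around the suspected spot. A minimal sketch follows; the function name, window size and the synthetic test frame are our own illustrations, not the thesis code:

```python
import numpy as np

def local_centroid(img, cx, cy, half=8):
    """Intensity-weighted centroid of a small window around (cx, cy).
    img is a 2-D grayscale array with a bright target on a dark field."""
    win = img[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    ys, xs = np.mgrid[cy - half:cy + half + 1, cx - half:cx + half + 1]
    total = win.sum()
    if total == 0:                        # nothing bright in the window
        return float(cx), float(cy)
    return (xs * win).sum() / total, (ys * win).sum() / total

# synthetic frame: a bright 3x3 spot centred at column 10, row 12
img = np.zeros((24, 24))
img[11:14, 9:12] = 255.0
x, y = local_centroid(img, 10, 12)
```

Because every pixel in the window contributes in proportion to its brightness, border pixels dimmed by the AGC still pull the estimate in the right direction instead of being discarded by a hard threshold.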
The centroid was found for several hundred image frames. A sinewave input was initially used to drive the VAP:

V_in = A sin ωt   (4.10)
V_out = kV_in   (4.11)
V_rms = kA/√2   (4.12)

So experimentally, y_rms = √((Σ y_i²)/N), where N is the number of points taken. Obtaining data this way removes the sensitivity to jiggle and averages out the noise effects.

The centroid was actually located using a threshold technique. The algorithm is as follows:

• Check if the current pixel is within the threshold for the target
• If not, scan through the image until one is found
• Recursively check the neighbours of each pixel within the threshold
• When a hit is found (as part of the intended target), multiply the X coordinate index by the pixel value
• Add the above value to a running sum for all hits (SUMXXB)
• Repeat the above two steps for the Y coordinates (SUMYXB)
• Add the current pixel darkness value to a running sum (SUMB)
• Repeat the 5 previous steps until no more hits are found
• Calculate the centroid location by division (SUMXXB/SUMB, SUMYXB/SUMB)

Figure 4.11: Open Loop representation of VAP

To eliminate false positives, the user was allowed to select from a number of possible hits. Once the desired centroid was selected, an image of the region in question was displayed on the screen. This made sure the right coordinates were chosen for the test target. With the test target located, the above threshold technique or a localized area technique was used to calculate the centroid after each movement of the prism.

4.3 Control of the VAP

The VAP can be run open-loop, but doing so results in hysteresis in the displacement/voltage curve. The hysteresis is due to the magnetic nature of the voice coils and the permanent magnets. Fig. 4.10 of US patent 5623305 shows the hysteresis for the VAP.
This same patent also shows how the VAP can be modeled in terms of physical parameters. The assembly has a particular torque constant, K_t, and inertia J. The silicone liquid inside the VAP has a viscosity resistance, D, and the VAP itself has a spring constant, K_y. A block diagram for the VAP is shown in Fig. 4.11. From this diagram, the following equation can be written for the open loop gain:

G(s) = 2K_t / (R(Js² + Ds))   (4.13)

where R is the winding resistance in Ohms. This can be simplified to a generalized second-order equation:

G(s) = 1 / ((s/ω_n)² + 2ζ(s/ω_n) + 1)   (4.14)

where ω_n is a normalizing frequency and ζ is an attenuation coefficient. Only the form of the equation above is important; we do not need an expression for ω_n or ζ, and the patent does not provide one either. A simulation yields Bode diagrams (Fig. 4.12) showing phase and gain versus frequency. The characteristic curves for the VAP reveal a pole at 100Hz.

In addition, running the VAP open-loop prevents control over small angles due to the VAP non-linearities and the hysteresis curve. The liquid inside the VAP changes mechanical (viscosity) and optical (index of refraction) properties significantly with small changes in temperature. The strong temperature dependence of the device, as well as the need for control over small angles, requires a more elaborate control system.

In order to improve results, the position from the PSD is fed back. This alleviates the problem of the VAP non-linearity and allows smooth control of the variable angle prism. The phase margin is increased, which reduces the effect of the VAP mechanical properties. A block diagram showing the VAP control is shown in Fig. 4.13 with G(s) equal to the open-loop gain above. The actual circuit we designed and used to feed back the position is shown in Figure 4.15.
Symbol Va is the voltage from the front end instrumentation amp (Fig. 4.9).

Figure 4.12: VAP Bode Plots (US 5623305)

Figure 4.13: Closed Loop representation of VAP

The first operational amplifier, TL071, is a high DC gain amplifier tuned through the 100nF capacitor and 39 kohm resistor to roll off at a frequency of 20Hz. Without this rolloff, the parasitic pole at 100Hz causes the VAP to oscillate violently. The feedback loop is composed of the amplifier LF356 and the current mirror of 2N3904 and 2N3906. The mirror is there to provide a higher current drive. In addition, a pair of 2.2 Volt Zener diodes are tied as shown in front of the current mirror to prevent the VAP actuator from being overdriven through V_out.

The patent mentioned earlier is concerned with correction for image shake. It describes control based on an angular velocity detected by an on-camera gyroscope. All the signals are processed digitally through a microcontroller. In addition, the actuators have a second coil (Fig. 4.14, numerals 42 and 43) on them (physically separated from the first coil by a barrier) to pick up the velocity of the actuator (Fig. 4.14, numeral 56). The second coils are called damping or dump coils and are not driven; their movement through the magnetic field inside the stator induces a voltage. The AC stability is increased by using the second damping coil, which introduces a derivative feedback term to increase the phase margin. In DC motor theory, the technique is known as velocity feedback.

Figure 4.14: Dual Coil Wrapping of VAP Actuator (US 5623305)

As mentioned earlier, the characteristic pole is at 100Hz with only the position being fed back proportionally. Adding the derivative feedback helps push the pole to a higher frequency, thus reducing the sensitivity of the system to external vibrations.
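The effect of the velocity (damping-coil) term can be illustrated on the second-order model of Equation 4.13. All numerical values below (J, D, and the gains) are invented for demonstration and are not measurements of the Canon VAP:

```python
import numpy as np

# Illustrative plant G(s) = 1/(J s^2 + D s); values are made up.
J, D = 1.0e-4, 5.0e-3
Kp = 4.0        # proportional (position) feedback gain
Kv = 2.0e-2     # derivative (velocity) gain from the damping coil

def closed_loop_poles(J, D, Kp, Kv=0.0):
    """Roots of the characteristic polynomial J s^2 + (D + Kv) s + Kp."""
    return np.roots([J, D + Kv, Kp])

def damping_ratio(J, D, Kp, Kv=0.0):
    wn = np.sqrt(Kp / J)              # natural frequency, rad/s
    return (D + Kv) / (2.0 * J * wn)  # zeta

zeta_p  = damping_ratio(J, D, Kp)       # position feedback only
zeta_pv = damping_ratio(J, D, Kp, Kv)   # position + velocity feedback
```

The velocity term adds directly to the effective damping D, so zeta_pv exceeds zeta_p: the resonance is flattened and the phase margin grows, which is the behaviour the dump coils buy in the camera.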
For our purposes, however, the proportional feedback is sufficient because we are not yet trying to shift the images transparently in real time and are only working in one dimension. In order to stabilize both the vertical and horizontal directions of the VAP, velocity feedback must be used in the control.

4.4 Experimental Results using ES-750

4.4.1 Initial Tests

In essence, there are two resolution problems that we are concerned with. The first is with the CCD surface itself. An image at the CCD surface is sampled by the individual pixels, which is then resampled by the frame grabber at the computer.

Figure 4.15: Our Control Circuit

The digitized frame suffers loss of detail from quantization effects. The second loss of resolution occurs at the screen. The screen has its own pixels, which sub-sample the digitized image when it is displayed. The issue we are concerned with is whether a subpixel movement on the CCD is equal to a subpixel movement in the viewed image on the screen. Luckily, the mapping from the camera image surface to the screen is one to one. Therefore, by studying the subpixel movement on the CRT surface, it is possible to recover the resolution lost to the smoothing effect of the CCD, as discussed earlier.

4.4.2 Open-Loop Results

The first tests on the VAP involved driving it open-loop with a sine wave input and using Equation 4.12 to find the displacements on the CRT. Fig. 4.16 shows the horizontal mean rms displacement for sinewave inputs of several different frequencies. Fig. 4.17 shows the same information but with the control system attached to the vertical (pitch) actuators of the VAP. The results in both cases are linear up to a frequency of 25Hz.
At 30Hz, the plot is a flat line indicating no response to the signal. This shows that the frequency response deteriorates somewhere around 25Hz.

Figure 4.16: Vertical Displacement with Sinewave Input

Next, the frequency response of the VAP was measured to find exactly where the cut-off frequency is. This involved using a 0.5 Volt sinewave with an arbitrarily set frequency; the frequency was changed and the response measured. Fig. 4.18 is a plot of the results. The same experiment was repeated for the vertical VAP plate, as shown in Fig. 4.19. The results for both sides coincided. The bandwidth of the device is situated around 20Hz.

A step function frequency response was also performed; the results appear in Fig. 4.20. The square wave response is useful because it shows what would happen if the voltage to the VAP were increased digitally at specific increments. The bandwidth is reduced to approximately 15Hz when a squarewave is used to move the Variable Angle Prism.

Figure 4.17: Horizontal Displacement with Sinewave Input

Figure 4.18: Horizontal Direction VAP Frequency Response (0.5 V Sinewave)
Figure 4.19: Vertical Direction VAP Frequency Response (0.5 V Sinewave)

Figure 4.20: Square Wave Response

Figure 4.21: Displacement with Compensating Op-Amp

With the frequency response determined, we set out to push out the response using a compensator circuit, with a variable-frequency-gain inverting op-amp driving the circuit in an attempt to increase the bandwidth. The feedback impedance was changed to adjust the location of the pole. The frequency response was then plotted for each different characteristic frequency, f_c, in Fig. 4.21. The driving op-amp had a gain of −Z_f/Z_i, with Z_i the input resistor and Z_f the parallel combination of a resistor and capacitor chosen to give the desired frequency f_c. It is apparent that the response deteriorates above 20Hz. The pole at 20Hz was not significantly reduced with the op-amp. This indicates that there are higher order effects involved with the VAP.

The op-amp was then run at DC. In one direction only, the response in Fig. 4.22 was found. The plot is linear in the middle section, but falls off at the edges as expected, considering the limiting hysteresis of the actuators.

Figure 4.22: VAP run open loop in one direction

Unfortunately, the results in Fig.
4.22 were only attainable under strict conditions and were not repeatable between tests. When the VAP was moved back and forth, a hysteresis pattern was immediately apparent. Fig. 4.23 shows the situation for some representative movements of the VAP. The hysteresis is similar to Fig. 4.10 except that the scale is different. We are using a different circuit to drive the VAP than that described in the patent, so a direct comparison is not possible. The same can be said about the frequency response. The 100Hz pole described in the patent does exist, as is clearly apparent when the control falls out of loop and starts oscillating, but it is not the same as the bandwidth that we determined the VAP to have using an external driving circuit.

Figure 4.23: Hysteresis of VAP ((a) Trial 1, (b) Trial 2)

In addition, the instability of the device was also a concern. A program was written to continuously sample the VAP over a period of 4 hours. The input was a square wave of constant magnitude and frequency, and the output was the displacement of the test target at each point, as shown in Fig. 4.24. As mentioned earlier, the drift is due to several factors. The extreme points are most likely due to random vibration of the bench where the camera is mounted. In addition, a long term change in the displacement is evident, and the separation between the high and low points becomes narrower. The narrowing of the displacement can be explained by a decrease in the index of refraction, n, caused by heating and subsequent expansion of the VAP fluid. The skew is explained by the VAP settling into an equilibrium state.

Feedback

The above graphs and plots indicate a clear need for feedback. The feedback network in Chapter 3 (Figures 4.9 and 4.15) was constructed and attached directly to the VAP position sensors and actuators.
Using feedback improves the situation considerably. The results show that subpixel movements can be achieved using the Variable Angle Prism.

Figure 4.24: VAP Drift over Time

Initially, the results in Figs. 4.25a and 4.25b were obtained. There is an obvious knee to the graphs. The knee is not due to hysteresis but rather to a spring loading effect in the actuator assembly. Over a smaller range within a single pixel, linear results are obtained, as shown in Fig. 4.25c. With the above linear VAP movement over a pixel range, it becomes easy to sample images displaced at subpixel amounts relative to each other. The procedure must be done quickly, however, because of the inevitable heating of the unit and subsequent drift. The above mentioned Figs. 4.25c and 4.25d do show some separation when the movement is reversed. An explanation would be the slippage of the window, as discussed in the previous Chapter. For this reason, it is recommended that the samples be taken while the movement is in the same direction.

Figure 4.25: VAP Displacement versus Applied Voltage

Element | Group 0 | Group 1 | Group 2 | Group 3 | Group 4
   1    |  10.0   |  22.0   |  40.0   |  80.0   |  160.0
   2    |  11.2   |  22.4   |  44.9   |  89.8   |  n/a
   3    |  12.6   |  25.2   |  50.4   |  101.0  |  n/a
   4    |  14.1   |  28.3   |  56.6   |  113.0  |  n/a
   5    |  15.9   |  31.7   |  63.5   |  127.0  |  n/a
   6    |  17.8   |  35.6   |  71.3   |  143.0  |  n/a

Table 4.1: USAF 1951 Resolution Test Chart (lppm)

It should be noted that the feedback is stable for frequencies up to 100Hz. Also, a large magnitude shake can bring the control out of the stable range. In addition, if the bench suffers large vibrations, the VAP will oscillate at the 100Hz parasitic frequency. Such behavior was observed using an oscilloscope.
4.4.3 Sampling at Subpixel Intervals

In this section, actual images were taken with the VAP calibrated using the above methods. In order to step the VAP at equal intervals, a DAC was used to drive the position. For each trial, the VAP was calibrated, and the graph of centroid displacement versus applied voltage was used to find how much voltage was needed to obtain fractional increments. All the tests were done with a displacement of one quarter pixel in one dimension (the vertical dimension). The maximum resolution improvement from the four images displaced at quarter pixel amounts is a factor of 2.

The test image is the standard USAF 1951 resolution chart (Table 4.1). Four images were taken, each displaced from the previous by a quarter pixel.

It is difficult to tell where the resolution cuts out on the above image. Zooming in on the middle section shows where it deteriorates (Fig. 4.26). Examination of the zoomed image shows that the last clearly distinguishable group is Group 1 Element 5, giving an uncorrected resolution of 31.7 lppm. Group 1 is on the far right of the image.

The high resolution reconstruction was done by meshing the images and fitting them to the higher resolution grid. This was done using Matlab(Tm), and the result is shown in Fig. 4.27. Note that there are some jaggies present in the horizontal direction; these are due to the fact that the image was not perfectly aligned. The 2nd element of Group 2 can be distinguished in this image, putting the resolution at 44.9 lppm. Unfortunately the numbers are not clear on the image. Group 2 is the leftmost group in the center of the picture (to the right of the large Group 0). Element 2 is at the top of this group, and element 1 of Group 2 is in the bottom left corner of the two center groups (note that Element 1 of Group 0 is in the bottom left corner of the outer two groups (0 and 1)).
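The meshing step, interleaving the vertically shifted low-resolution frames onto a finer grid, can be sketched as follows. This is a simplified stand-in for the Matlab reconstruction, and it assumes the frames are already registered at exact, noise-free quarter-pixel shifts:

```python
import numpy as np

def interleave_rows(frames):
    """Mesh L registered frames, each shifted vertically by 1/L pixel,
    onto a grid with L times the row density (nearest-sample placement)."""
    L = len(frames)
    m, n = frames[0].shape
    hi = np.empty((L * m, n), dtype=frames[0].dtype)
    for k, f in enumerate(frames):
        # frame k sampled the scene at rows k/L, 1 + k/L, 2 + k/L, ...
        hi[k::L, :] = f
    return hi

# four 480 x 640 frames at quarter-pixel vertical shifts
frames = [np.full((480, 640), k, dtype=np.uint8) for k in range(4)]
hi = interleave_rows(frames)
```

Although the meshed grid is four times denser, the recoverable detail is still bounded by the pixel PSFs, consistent with the factor-of-2 limit quoted above.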
The resulting resolution improvement is about 40 percent. The theory mentioned earlier predicts that the maximum resolution recovery from the above method is double the resolution of the low-res pictures. The experimental results were lower than this predicted maximum for two main reasons. First, there was a delay of thirty minutes from when the VAP was calibrated to when the above reconstruction was done. This allowed for some drift in the VAP reference (from the PSD) because of temperature changes in the VAP fluid. Second, there is noise introduced through the image acquisition which corrupts the images and cannot be removed through our sampling process. In addition, the actual theoretical limit is determined by the PSFs of the actual CCD pixels, and there was insufficient information to determine the form and spacing of these PSFs. The doubling of resolution figure is just an upper bound for four samples displaced at quarter-pixel amounts.

Computing Requirements

With four 480 by 640 pictures, the above reconstruction took 4294833 flops in Matlab(Tm). The number of flops for reconstruction, using L images each of size m by n, is on the order of O(30mn + 2·L·m·n). This may be too slow for implementation in real time. For real time operation, a faster processor or multiple processors may be required. In addition, the picture size might have to be reduced.

4.5 Issues and Problems

The patent literature for Canon's proprietary VAP technology is geared towards image stabilization. They have constructed the device for rapid correction of high frequency camera shake. It incorporates a gyroscope and microprocessor for very fast correction. The microprocessor allows digital control and filtering, which makes the calibration of the system much easier. We effectively bypassed the microcontroller used by Canon.
We do not need gyroscope inputs because it is assumed that the camera is held stable. This cannot be the case, however, for a hand-held camcorder. In bypassing the microcontroller, we used analog circuitry for ease of initial measurements. Unfortunately, the VAP was not designed for long term DC stability. In addition, the instrumentation amps used to pick up the signals are subject to drift. The drift can be minimized, but since we are concerned with very small inter-pixel shifts, the measurements must be done rapidly. In a final system, continuous calibration will be required, as well as digital control of the entire system. Examples of the drift were shown previously (Fig. 4.24).

The control is ultimately limited by the position sensing apparatus. Unfortunately, Canon does not publish the specifications for the infrared diodes and PSD sensor used, in either the patent literature or the service manual. Modern PSDs have very good resolution and can track laser spots to an accuracy of 1μm. In this device, however, the accuracy is clearly limited by the mechanical window. Typical infrared diodes have a broad spectrum with a viewing angle between 20 and 40 degrees. Without the mechanical window, the IRED would spray itself onto the PSD, making measurements difficult to obtain. The mechanical window, however, is fairly large, on the order of 1-2mm. Both the precision and accuracy of the angle sensor are therefore limited primarily by the mechanical window (Numeral 41b, Fig. 4.7).

Another problem with the VAP assembly is the crosstalk from one side to the other. When the angle is changed in the yaw direction, there is movement in one of the pitch directions, and the converse is also true. According to the diagram of the VAP assembly in Fig. 4.1, the back plate is not completely independent from the front.
The axis of the opposite plate can rotate on movement of the other plate, because nothing holds the plates static and the axes aren't perfectly centered. Therefore, the control must hold one side steady while the opposite side is moved. In a microcontroller system, as implemented for the actual VAP controller in the camera, this could be handled directly in the microcontroller itself. Unfortunately, when we connected our controller to both sides at the same time, the VAP would initially stay stable until one side was moved. Afterwards, it would oscillate uncontrollably at the 100Hz parasitic frequency described in the patent literature. This is due to the noise and crosstalk reducing the phase margin to an unacceptable level. A microcontroller based control is recommended to control both sides concurrently.

In a future incarnation of a VAP image displacement device for the purposes of sub-sampling to increase resolution, it is recommended that a new angle sensing device be constructed. Instead of an IRED, a laser diode could be mounted on the chassis where the existing window slit is. Also, more elaborate digital control using a microprocessor is required to improve stability and to capture the images before drift occurs.

Chapter 5

Conclusions

This thesis has summarized the results of work in image processing. We have also given a simulation of how a higher resolution is achievable from several lower resolution frames.

A device has been described that displaces images. This device is called a Variable-Angle Prism (VAP). The displacement is linear within the pixel range. We have also shown how to control the VAP to get images displaced at subpixel amounts relative to each other. We then used these images to obtain a higher resolution in one dimension. The extension to 2 dimensions is nontrivial but possible. We have shown that the improvement in resolution from the above method is about 40 percent.
This is less than the doubling of resolution that is theoretically possible using 4 images. Further improvement would require digital control with a microcontroller to pick out the images rapidly and precisely.

5.1 Future Work for Combined System

Ideally the system would incorporate both the Vari-Angle Prism and the hemispheric camera. The loss of resolution caused by the sampling of the CCD could be compensated quite nicely by dithering the images with the VAP. The main resolution problem in the imaging system is at the CCD. If conventional film were used, there would be much less resolution lost. Unfortunately, current CCDs have pixel sizes of 10-20 micrometers, much larger than the resolution achieved with photographic film. The advantage of using CCDs over film is the continuous sampling ability and easy conversion into a standardized digital form.

Unfortunately, the VAP is currently too large in its present form to fit under the primary mirror. It could be placed in front of the mirror, but this would complicate matters enormously. The demapping would have to take into account the fact that the rays pass through the VAP twice. A simulation would have to be done and a calibrated look-up table formed to determine the amount of shift. This is not an efficient solution; the preferred method would be to have a miniature VAP or prism deflection system made to fit the hemispheric camera.

After the VAP project was complete, we discovered that Sharp came up with a device for deflecting images with a single refracting element. This device did not exist when the research for this thesis was performed. The reader who wants to do more work in this area is strongly urged to check out their patent.
It is titled 'Imaging Apparatus Having Improved Resolution due to Control of the Inclination Angle of a Refracting Plate in More than One Direction' (US patent 05637861). It may be a better solution than the VAP because it is far simpler and much less sensitive to temperature.

In addition, the mirror has too much material inside, obstructing the placement of the CCD camera. The preferred mirrors need to be built using a mold technique so that they are very thin and strong. The cost is higher than for the aluminum ground mirrors that we used, but the bulk manufacturing prospects are great and subsequent mirrors are very cheap once the mold is built.

Another problem is that the particular camera that we used has a potentiometer on the surface that prevents proper mounting inside the primary mirror to coincide with the focal length of the lens. In addition, the lens does not fit inside the mirror hole. A custom lens needs to be built to fit exactly inside the hole on the top of the primary mirror, or preferably the hole needs to be enlarged or reduced to fit a stock lens of choice. This involves choosing the right clear aperture of lens along with a focal length that will project the image to the CCD surface correctly. It is my experience, however, that a standard CCD camera will not work with the apparatus. A CCD camera needs to be built from scratch using raw CCD sensor components. Such a task is not trivial.

Ideally, the camera would be digital. Our experiments with the camcorder used a frame grabber that resampled the NTSC format. The camera outputs the signal in analog form, which is then chopped up by the ADC on the frame grabber. Using a digital camera would eliminate the additional conversion loss and degradation by quantization noise.

Another big problem is the blind spot. Right now, the view can only be called substantially hemispheric.
Perhaps in a later incarnation, the blind spot in the center of the scene could be reduced further by putting a lens system in a hole with a diameter the same size as the blind spot in the upper mirror (hb2 in Fig. 2.6). This would be projected onto the unused centre spot that currently exists on the CCD because of the geometry. If the image from beyond the secondary mirror is limited in scope to the same size as the blind spot, then it will not interfere with the hemispheric data. Such a modification would allow more of the actual hemispheric scene to be imaged.

Future work with the VAP involves digitizing the control and connecting both front and back sides of the device together to shift the images in the horizontal and vertical directions and obtain two-dimensional image resolution improvement. A microprocessor is required to coordinate the movements properly and assure loop stability.

Finally, we have mentioned the possibility of incorporating the two technologies of omnidirectional vision and super-resolution into a single device. The prospects are very good, as they complement each other nicely. Many more experiments need to be done, however, to develop a full working model that is commercially viable.

5.2 Summary of Results

In this work we have shown a system for hemispheric or omnidirectional vision. It is compact and has a very large field of view of 360 degrees in one hemisphere. The geometry and issues concerning the camera have all been examined, and a design proposed for minimizing some of the intrinsic optical problems. A resolution recovery method was also shown and built using a Variable Angle Prism to effect image shifts.

The hemispheric vision system above was in fact built and demonstrated to work. There are certain things that need to be done to improve the resolution of the images, however.
These include, but are not limited to, making a custom digital CCD camera with a large number of pixels, rebuilding the mirrors with an injection moulding process so that they are very thin (one thousandth of an inch thick is possible), building custom optical mounts to hold the mirrors in place, and finally choosing a new lens matched to the custom CCD and primary mirror aperture. Much work needs to be done before the system can be marketed commercially. The market for such a device has not been determined, but it may grow exponentially as awareness of omnidirectional sensors and their applications spreads.

Also in this thesis, an image displacement apparatus known as a Variable-Angle Prism was outlined. It displaces rays of light and can be used for sub-pixel movements. Using several images displaced from each other by sub-pixel amounts, a higher resolution can be obtained for the final image of the original scene. The VAP was used successfully to obtain resolution improvement in one dimension. Sub-pixel displacements were also confirmed using the device and our custom control hardware. Super-resolution was demonstrated successfully.

Bibliography

[Gro86] D. Gross. Super Resolution from Sub-pixel Shifted Pictures. Master's thesis, Tel-Aviv University, October 1986.

[H+86] E. Hall et al. Omnidirectional Viewing Using a Fish Eye Lens. In Optics, Illumination, and Image Sensing for Machine Vision, Vol. 728, pp. 250-256, Chicago, 1986. SPIE.

[IP90] Michal Irani and Shmuel Peleg. Super Resolution From Image Sequences. IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 38(1):pp. 115-120, January 1990.

[IP91] M. Irani and S. Peleg. Improving Resolution by Image Registration. CVGIP: Graphical Models and Image Processing, Vol. 53:pp. 231-239, May 1991.

[Jai88] A. K. Jain.
Fundamentals of Digital Image Processing. Prentice Hall, New York, 1988.

[JR84] L. S. Joyce and W. L. Root. Precision Bounds in Superresolution Processing. Journal of the Optical Society of America, pp. 149-168, February 1984.

[KA96] A. Krishnan and N. Ahuja. Panoramic Image Acquisition. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 379-384, June 1996.

[KBV90] S. P. Kim, N. K. Bose, and H. M. Valenzuela. Recursive Reconstruction of High Resolution Image From Noisy Undersampled Multiframes. IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 38(6):pp. 1013-1027, June 1990.

[Kor88] Dietrich Korsch. Reflective Optics. Academic Press, Inc., Boston, 1988.

[KS93] S. P. Kim and Wen-Yu Su. Subpixel Accuracy Image Registration by Spectrum Cancellation. IEEE Procs., Vol. V(6):pp. 153-156, 1993.

[Luk66] W. Lukosz. Optical Systems with Resolving Power Exceeding the Classical Limit. Journal of the Optical Society of America, 56:pp. 1463-1472, November 1966.

[Luk67] W. Lukosz. Optical Systems with Resolving Power Exceeding the Classical Limit, II. Journal of the Optical Society of America, 57:pp. 932-941, July 1967.

[MS88] M. S. Mort and M. D. Srinath. Maximum Likelihood Image Registration with Subpixel Accuracy. Applications of Digital Image Processing XI, SPIE, Vol. 974:pp. 38-45, 1988.

[Nal96] V. Nalwa. A True Omnidirectional Viewer. Technical report, Bell Laboratories, Holmdel, NJ 07733, USA, February 1996.

[Nay96] Shree K. Nayar. Catadioptric Omnidirectional Cameras. Technical report, Dept. of Computer Science, Columbia University, October 1996.

[Nor78] A. Nordbryhn. The Dynamic Sampling Effect with CCD Imagers. In Applications of Elec. Imaging Systems, volume 143, pp. 42-51. SPIE, 1978.

[NPB98] Shree Nayar, Shmuel Peleg, and Rafi Brada.
Omnidirectional Imaging Apparatus. US Patent 5,760,826, page 24, June 2, 1998.

[SA87] Ken D. Sauer and Jan P. Allebach. Iterative Reconstruction of Band-Limited Images from Nonuniformly Spaced Samples. IEEE Transactions on Circuits and Systems, Vol. 34(12):pp. 1497-1506, December 1987.

[SA94] S. Shaw and J. K. Aggarwal. A Simple Calibration Procedure for Fish-Eye (High Distortion) Lens Camera. In Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3422-3427. IEEE, 1994.

[SO89] H. Stark and P. Oskoui. High-resolution Image Recovery from Image-plane Arrays, Using Convex Projections. J. Opt. Soc. Am. A, Vol. 6:pp. 1715-1726, 1989.

[Spe84] W. P. Spence. Engineering Graphics. Prentice Hall, NJ, 1984.

[SPS87] S. Peleg, Danny Keren, and L. Schweitzer. Improving Image Resolution Using Subpixel Motion. Pattern Recognition Letters, Vol. 5:pp. 223-226, March 1987.

[ST90] M. Ibrahim Sezan and A. Murat Tekalp. Adaptive Image Restoration with Artifact Suppression Using the Theory of Convex Projections. IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 38(1):pp. 181-185, January 1990.

[TG94] Brian Tom and Nikolas Galatsanos. Reconstruction of a High Resolution Image from Registration and Restoration of Low Resolution Images. Image Processing, 1994 International Conference, Vol. 3:pp. 553-557, 1994.

[TK94] B. C. Tom and A. K. Katsaggelos. Multi-Channel Image Identification and Restoration Using the Expectation-Maximization Algorithm. In Applications of Digital Image Processing XVII, Vol. 2298, San Diego, July 1994. SPIE.

[TOS92] A. M. Tekalp, M. K. Ozkan, and M. I. Sezan. High-Resolution Image Reconstruction From Lower-Resolution Image Sequences and Space-Varying Image Restoration. IEEE Proc. ICASSP-92, Vol. 3:pp. 169-172, 1992.
[Y+95] K. Yamazawa et al. Obstacle Detection with Omnidirectional Image Sensor HyperOmni Vision. IEEE International Conference on Robotics and Automation, pp. 1062-1067, May 1995.

Appendix A

Aberrations (see Chapter 2)

Traditional optical design concerns maximizing the quality of mapping a planar object onto a planar sensor. There are several types of aberrations, called the Seidel aberrations. These aberrations are third order effects and include the phenomena of spherical aberration, coma, astigmatism, field curvature, and distortion. They are represented by the coefficients A, B, C, -(C + 2D), and E respectively [Kor88].

Our two-mirror system (Fig. 2.6) is similar in form to a double parabolic collimator described by Korsch [Kor88] and has the same Seidel aberration coefficients [Kor88]:

A = 0                                            (A.1)
B = -\tau_1^{-2} (c_1 + c_2)^2                   (A.2)
C = 2\tau_1 (2t_1 + \tau_1 d_1) B + \ldots       (A.3)
D = -C                                           (A.4)
E = -\tau_1^2 t_1 (3t_1 + 2\tau_1 d_1) B + \ldots  (A.5)

where \tau_1 = 1 - v_1 t_1, c_1 and c_2 are the vertex curvatures of the mirrors (c_1 = 1/(2f_1)), t_1 the entrance pupil distance (t_1 = f_1 in our system), d_1 the mirror separation, s_1 the object distance, and the final parameter the ratio of the clear aperture of mirror 2 to mirror 1. Because A is zero, there is no spherical aberration in our system. This comes by virtue of using parabolic reflectors. Please see Korsch [Kor88] for a more detailed expression of the coefficients.

For our system with an object at 5 metres the coefficients are as follows:

B = 0.067 1/cm^2
C = 2.48 1/cm
D = 1.06 1/cm
E = 5.3
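The claim that A = 0 for parabolic reflectors can be checked numerically: every ray arriving parallel to the axis of a parabolic mirror, whatever its height, reflects through the same axial point, so there is no spherical aberration. The following sketch is illustrative only and is not part of the thesis apparatus; the parabola z = r^2/(4f) and the focal length f = 10 cm are hypothetical values chosen for the demonstration.

```python
import math

def focus_crossing(r, f):
    """Trace a ray parallel to the optical axis at height r > 0 onto the
    parabola z = r^2 / (4 f) and return the axial (x = 0) crossing of the
    reflected ray.  For a true parabola this is exactly the focal length f."""
    z_hit = r * r / (4.0 * f)
    # Surface normal of z - r^2/(4f) = 0 is proportional to (-r/(2f), 1).
    nx, nz = -r / (2.0 * f), 1.0
    norm = math.hypot(nx, nz)
    nx, nz = nx / norm, nz / norm
    # Incoming ray travels straight down the axis direction: d = (0, -1).
    dx, dz = 0.0, -1.0
    dot = dx * nx + dz * nz
    # Specular reflection: d' = d - 2 (d . n) n
    rx, rz = dx - 2.0 * dot * nx, dz - 2.0 * dot * nz
    # Parameter t at which the reflected ray reaches the axis (x = 0).
    t = -r / rx
    return z_hit + t * rz

f = 10.0  # hypothetical focal length in cm
for r in (0.5, 1.0, 2.5, 5.0, 9.0):
    assert abs(focus_crossing(r, f) - f) < 1e-9
print("all rays cross the axis at z = f =", f)
```

The assertions hold for any ray height, including rays far from the axis, which is the geometric content of the statement A = 0 above.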
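The resolution-recovery result summarized in Section 5.2 (several exposures displaced by sub-pixel amounts combined into one higher-resolution image) can also be illustrated in one dimension. This sketch is not the thesis reconstruction pipeline: it assumes idealized point sampling of a made-up sinusoidal scene and uses plain interlacing, ignoring the pixel-aperture blur and registration steps a real VAP system must handle.

```python
import numpy as np

def interleave(shots):
    """Merge k low-resolution 1-D exposures, each shifted by 1/k of a pixel,
    into one signal with k times as many samples (simple interlacing,
    no deblurring)."""
    k = len(shots)
    n = len(shots[0])
    out = np.empty(n * k)
    for i, s in enumerate(shots):
        out[i::k] = s  # slot the i-th exposure into every k-th position
    return out

# Simulate a scene and two half-pixel-shifted CCD samplings of it.
scene = lambda x: np.sin(2 * np.pi * 3 * x)  # hypothetical 1-D scene
n = 16                                        # coarse pixel count
x0 = np.arange(n) / n                         # unshifted sample grid
x1 = (np.arange(n) + 0.5) / n                 # half-pixel-shifted grid
hi = interleave([scene(x0), scene(x1)])
# hi equals the scene sampled at 2*n points, i.e. twice the density.
assert np.allclose(hi, scene(np.arange(2 * n) / (2 * n)))
print("recovered", len(hi), "samples from two", n, "-pixel exposures")
```

With k shifted exposures the same interlacing yields a k-fold sampling density, which is the idealized counterpart of the one-dimensional improvement demonstrated with the VAP.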
