UBC Theses and Dissertations

Vision assisted system for industrial robot training Sang, Tao 2005

VISION ASSISTED SYSTEM FOR INDUSTRIAL ROBOT TRAINING

by

TAO SANG

B.A.Sc., The University of Waterloo, 2002

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Mechanical Engineering)

The University of British Columbia

September 2005

© Tao Sang, 2005

ABSTRACT

In an industrial environment, robotic manipulators are trained to perform repetitive operations at high speed. For safety reasons, the power to the robotic manipulator must be shut off whenever the operator enters the robotic work-cell. However, during training, human operators are required to enter the work-cell frequently to verify the manipulator positions. This cycle of "power off, check, power on, train" is time consuming and expensive. This work presents a self-calibrated vision system for human-guided robot training. The objective of the system is to simplify the training process, thus reducing training time and cost. The system assists robot operators in training industrial robots quickly, safely, and accurately. It can be used in various industrial environments and requires no predetermined model of the working space. First, the system self-calibrates to compute the initial camera locations and orientations. Next, the user demonstrates the tool path that is to be followed by the robot arm using a 6-degree-of-freedom training medium. Finally, the system optimizes the camera locations so that, at those locations, the cameras can provide sufficient images for the operator to observe and perform further remote training of the robot as required, without entering the work-cell. The results show that, with the help of the vision system, the number of times a human is required to enter the robotic work-cell during task training is reduced to no more than three. At the same time, the accuracy of the training process is not compromised by the use of the vision system.
TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Nomenclature and Acronyms
Acknowledgement
Dedication
Chapter 1: Introduction
  1.1 Background
  1.2 Existing Technology and Literature Review
  1.3 Motivation and Objectives
  1.4 Thesis Overview
Chapter 2: Design, System Calibration and Path Generation
  2.1 Overview of the Training System
  2.2 System Calibration and Path Generation Procedure Summary
  2.3 System Calibration
  2.4 Medium Design and Identification
  2.5 Path Generation
Chapter 3: Camera Placement
  3.1 Overview
  3.2 Focus
  3.3 Field of View
  3.4 Resolution
  3.5 Occlusions
  3.6 3D Visibility Constraint
  3.7 Summary
Chapter 4: Experiments and Results
  4.1 Overview
  4.2 Experimental Setup
  4.3 System Calibration
  4.4 Path Generation
  4.5 Camera Placement
  4.6 Summary
Chapter 5: Conclusions and Recommendations for Future Work
  5.1 Conclusions
  5.2 Recommendations for Future Work
References
Appendix A: System Calibration MATLAB Program
Appendix B: Medium Identification MATLAB Programs
  B.1: Main Identification Program
  B.2: Supporting Program: Finding Image Intensity Information
  B.3: Supporting Program: Image Noise Filter
Appendix C: Path Generation MATLAB Programs
  C.1: Main Path Generation Program
  C.2: Supporting Program: Dot Product Evaluation
  C.3: Supporting Program: Medium Orientation Computation Program
Appendix D: Camera Placement MATLAB Programs
  D.1: Focus and Field of View Optimization Program
  D.2: Camera Orientation Computation Program for ROI View
  D.3: Occlusion Avoiding Main Program
  D.4: Occlusion Selection Program
  D.5: 3D Visibility Limit Testing Program
Appendix E: Experimental Support Programs
  E.1: MATLAB Program Comparing the Gradient Level of Images for Focus Testing
  E.2: Path Point Orientation Testing Program

LIST OF TABLES

Table 4-1: Initial estimates for camera locations
Table 4-2: Computed camera locations (calibration set 4)
Table 4-3: Error summary of 15 test points using calibration set 4 (100 calibration points)
Table 4-4: Original camera lens optical settings
Table 4-5: ROI point coordinates in world space
Table 4-6: Camera position and optical settings after focus, field of view, and resolution adjustment
Table 4-7: Comparison of FOV constraint measurement results
Table 4-8: Occlusion-free camera locations and lens settings
Table 4-9: 3D visibility test results

LIST OF FIGURES

Figure 1-1: A picture taken from a robot-mounted camera during operation
Figure 1-2: Two pictures taken from two different angles show that depth information is missing
Figure 2-1: Flowchart of the calibration algorithm procedure
Figure 2-2: Flowchart of the path generation procedure
Figure 2-3: Frame assignments and relations between the robot world and the camera world
Figure 2-4: Graphic representation of the relations between robot, camera, and captured image
Figure 2-5: The medium mounted on a CRS robotic arm
Figure 2-6: The image of Figure 2-5 after colour processing
Figure 2-7: Relationship of the medium in the world frame and in the image frame
Figure 2-8: The medium at these two orientations can give the same axis orientation data
Figure 3-1: Camera placement procedure
Figure 3-2: Optic parameters for a convex lens
Figure 3-3: Graphical definition of depth of field
Figure 3-4: Depth of field using a zoom lens: (a) constraint not satisfied; (b) constraint satisfied
Figure 3-5: Field of view: a/2 is half of the field of view angle
Figure 3-6: Definition of the resolution angle
Figure 3-7: The hidden-line problem applied to solving the visibility problem
Figure 3-8: Dotted circle is the actual camera view; solid rectangle is the user view; gray object is the occluding object bounded by the assumed convex polygon
Figure 3-9: A possible occlusion-free view generated by moving from the original optical centre; the arrow shows the actual move
Figure 3-10: The dashed arrow shows the direction of the camera motion to avoid occlusion with minimum travel
Figure 3-11: Diagram for computation of the minimum camera motion
Figure 4-1: Flowchart of the overall system procedure
Figure 4-2: The experimental setup
Figure 4-3: Schematic of camera locations relative to the robot arm in the experimental setup
Figure 4-4: Equipment connections
Figure 4-5: Location error summary
Figure 4-6: Orientation error summary
Figure 4-7: Images evaluating the accuracy of the path generation process
Figure 4-8: Original view before running the camera placement algorithm
Figure 4-9: Image taken after adjustments to optimized focus, field of view, and resolution
Figure 4-10: Image taken at D = 2000 mm
Figure 4-11: Image taken at D = 3000 mm
Figure 4-12: Gradient graph for viewing image D = 2000 mm
Figure 4-13: Gradient graph for viewing image D = 2350 mm
Figure 4-14: Gradient graph for viewing image D = 3000 mm
Figure 4-15: The six different test cases for the occlusion-free constraint
Figure 4-16: Images taken after applying the occlusion-free algorithm to the images of Figure 4-10

NOMENCLATURE AND ACRONYMS

Parameters: Physical Property
a: Lens aperture setting.
B: Point on the edge of the image used for field of view evaluation.
c: Camera image CCD pixel size.
D: Lens object distance.
D(x, y, z): Displacement matrix.
d: Lens image distance.
E: Field of view centring evaluation index.
f: Lens focal length.
h: Image vertical pixel size.
I: Field of view inclusiveness evaluation index.
L_uv: Distance from the medium centre to the image centre, in pixel units.
M: Scale factor converting distance from pixel units to standard length units.
P: (or with superscript) represents a point in general.
P_min: Minimum image plane dimension.
R_subscript: Rotation matrix.
r: Radius of the field-of-view inclusive sphere.
S: Change in camera position.
T: Transformation matrix between the frames indicated by its subscripts.
u: Pixel frame horizontal unit.
v: Pixel frame vertical unit.
w: Image horizontal pixel size.
X^c, Y^c, Z^c: Camera frame direction assignment.
α: Field of view angle.
φ: Camera orientation pan angle.
φ': Camera orientation pan angle after adjustment.
θ: Camera orientation tilt angle.
θ': Camera orientation tilt angle after adjustment.
ε: Angle formed by the medium axis and the image horizontal axis.
Φ, δ, ψ: Yaw, pitch, and roll angles.
μ: Resolution angle.
O: Set of occlusions in the camera's field of view.

Acronyms: Definition
CCD: Charge-Coupled Device.
FOV: Field of View.
MVP: Machine Vision Planner.
RGB: Red-Green-Blue.
ROI: Region of Interest.

ACKNOWLEDGEMENT

The very special person who deserves a very special thank you is Dr. Elizabeth Croft. She bravely took me on as a student, and everything started from there. Not only did she take good care of me academically; most importantly, from time to time it was her personal support that helped me over many of life's hurdles. She patiently guided me through the project. While encouraging my independent thinking and innovation, she also tirelessly kept me on the right path. Words simply cannot express my heartfelt thanks to Dr. Croft.
I must also thank Dr. Jonathan Wu, Mr. Kevin Stanley of NRC, and Mr. Babak Habibi of Braintech Inc. of North Vancouver for initiating the project. An extra thank you goes to Braintech for funding the project. Gord Wright and Dan Miner must also be recognized; their endless technical support helped the project along. I also thank my family for their support. First is my mother, Anne H.Y. Zhang, who encouraged me to pursue graduate studies and poured in her support unconditionally for the entire duration of my studies. I thank my wife, Daisy H.Y. Zhang, for all the joy she brought into my life, which constantly motivates me to move forward. And I thank all the other family members who contributed with their support. Also to be recognized are my fellow students: Dr. Daniela Constantinescu, now with the University of Victoria; the most humorous Bill Owen, who brings laughter to the lab; and, last but not least, Dana Kulic, whose knowledge and work habits I continue to admire.

DEDICATION

This thesis is dedicated to my grandmother, F.C. Liu, and my late grandfather, Z.D. Zhang, by whom I was brought up. Their blessings are with me every minute of my life.

Chapter 1 INTRODUCTION

1.1 Background

Industrial robots are used worldwide to increase efficiency and production volume in manufacturing operations while lowering production costs. Robots can provide flexible, reconfigurable materials handling solutions for a wide range of pick-and-place and trajectory-following applications over a relatively large workspace. However, this same flexibility and range introduces safety concerns for human operators working at or near the robot work-cell. Safety regulations have been introduced to protect human operators' well-being and ensure production success.
The current industrial standard ANSI/RIA 15.06 [1] requires at least 18 inches of clearance between the edges of the robot work envelope and any structures of the work-cell that are to be entered by operators while the robot is powered. Industries such as automobile manufacturing use large-scale robotic work-cells for welding, sealing, and assembly operations. In such cells, the dimensional precision required for these operations can be relatively tight. Often, human operators work within the work-cell to train and fine-tune the robot tasks. This creates difficulties during the training operation, as the operator is not permitted to watch "close up" the position of the robot with respect to desired operating points while the robot is moving at regular speed. The current solution is to power down the robot during training in order to allow the operator to enter the cell and confirm the arm location. After the operator leaves the cell, the robot is powered up again to make the next move. This cycle may be repeated several times during a single task-programming session, significantly adding to the time and cost of setting up or reprogramming a robot work-cell.

1.2 Existing Technology and Literature Review

Presently, machine vision is used in and around robotic work-cells for supervision and control purposes. Typically, stationary cameras mounted at the periphery of the work-cell are used for supervision, while end-effector-mounted cameras are used for control. End-effector-mounted cameras, however, may not provide the visual information that human operators need for guiding the robot. As the robot moves in to perform a task, the view may be too close to give an appropriate perspective. Figure 1-1 shows an image taken from an eye-in-hand camera while the tool is approaching the target. Based on such a view, it is almost impossible to judge the relative distance from the tool tip to the target hole.
Figure 1-1: A picture taken from a robot-mounted camera during operation.

Single static mounted cameras, on the other hand, may not provide sufficiently precise information, as human operators may choose different perspectives to obtain the best view. A simple case study illustrates this point. From Figure 1-2(a), it appears that the tool tip is directly above the marker on the ground. For the same robot configuration and setup, Figure 1-2(b) shows otherwise. For large areas of interest, it has been found that at least two monocular cameras are required for successful position estimation [2]. A number of machine vision systems are available in industry. The eVisionFactory system by Braintech Inc. is a vision platform that locates a given target for the robot to move towards [3]. In Germany, the HP-FIT (High Precision Fitting) system has been developed by Automationssoftware GmbH et al. [4]. This system uses an eye-in-hand camera to measure the dimensions of the object to be grasped and minimizes positioning error by examining relative positions between objects. Gonzalez-Galvan et al. [5] use a laser as the medium to facilitate the image processing aspect of their multi-camera, multi-target 3D control scheme. Saito [6] invented a robot teaching method that uses a pointing device with at least two degrees of freedom to perform robot training; this system requires a pre-generated geometric model of the work environment.

(a) (b) Figure 1-2: Two pictures taken from two different angles show that depth information is missing.

The use of a medium, namely, a specially designed artifact introduced into the region of interest for registration and calibration purposes, is popular in sensor calibration [22], [23], [25]. It is also used when no models of the workspace are available prior to sensor placement [5].
The most common form of a medium is a uniform grid generated either by laser [5], by point light sources, or by a printed planar grid [3], [23]. In this case, a small number of images is needed to perform calibration. In other cases, a single light source is used, but a greater number of images is required [5], [22]. The above-mentioned works do not address the question of camera placement optimization. It is important to address the camera placement problem for the simple reason that the feedback images used by the operator for tele-training and control must meet the operator's needs. In other words, the images provided by the cameras have to give the operator the context of the environment, the objects, depth information, etc., so that the operator can perform tasks accurately. Camera placement, also referred to as sensor placement, is an active area of research at present. The work can be categorized into two sections: model based camera placement, used for object reconstruction and modeling; and non-model based camera placement, used in product assembly, inspection, measurement, etc. [7]. The work presented here is concerned with model based camera placement. Tarabanis conducted a survey of the early work on model based sensor placement [8]. One of the early works on sensor placement, by Cowan and Kovesi [9], focused on the placement constraints, such as resolution, focus, field of view, and visibility. These constraints were also used by Tarabanis et al. [10-13] to develop the Machine Vision Planner (MVP) sensor planning system and other applications. This system synthesizes sensor configurations from 2D images obtained with a CCD (Charge-Coupled Device) camera, rather than finding a solution by a generate-and-test approach.
1.3 Motivation and Objectives

The aim of this research project is to develop methods for implementing a vision system that can assist human operators in performing vision-guided tasks, such as robot path training, while operating from outside a large robotic work-cell. This goal is achieved in four stages. First, an automated system calibration method relates the system to the robot world coordinates. Second, an automated path training system generates the path the robot needs to follow. Third, an automated camera placement system computes possible camera locations and settings that satisfy the viewing constraints and criteria. Finally, human interaction allows the operator to adjust the camera placement results to optimize the view of the robot task. By achieving the above, this project aims to decrease the time spent on robot training by decreasing the number of times the operator needs to enter and exit the robotic work-cell. In addition, it decreases production cost by speeding up the process and relaxing the safety constraints.

1.4 Thesis Overview

This research work considers camera positioning and re-positioning as the primary area of research interest. This provides optimal viewing solutions to the operators. In addition, issues regarding visualization, trajectory planning assistance, and tele-operation control are also considered in a supporting role, to assist operators in performing different tasks. The following is an outline of the thesis:

Chapter 2: Design, System Calibration, and Path Generation. The design of the training system is presented. The procedure for system calibration, including path generation and the design of the medium, is discussed.

Chapter 3: Camera Placement. The camera placement scheme is discussed. Each measurement parameter is introduced: focus, field of view, resolution, and visibility.
In addition, the overall scheme for selecting the optimal camera position is presented.

Chapter 4: Experiments and Results. This chapter includes the calibration results, camera placement results, and the integrated system test results.

Chapter 5: Conclusions and Recommendations for Future Work.

Chapter 2 DESIGN, SYSTEM CALIBRATION AND PATH GENERATION

A typical operation for an industrial robot is a "pick and place" task, namely, picking up an object at one location and placing it at another. In order to train the robot to perform such a task, the operator needs to generate a path for the robot to: (i) approach the object, (ii) perform a grasping manoeuvre, (iii) follow another path to approach the final destination, and (iv) release the object. This operation can be broken down into a number of simpler general tasks: point-to-point motions, guarded motions, and grasping actions. The first two of these general tasks can be broadly classified as path generation tasks. In the following two chapters, a vision-guided robot-training system is described. In brief, this system aims to assist the operator in generating the paths for the robot to follow. In addition, the system identifies optimal camera locations to obtain images for the operator to view and adjust the robot grasping manoeuvre.

2.1 Overview of the Training System

The training system described herein makes use of two cameras, a medium, and a robot-arm system. It has four stages: (i) calibration, (ii) path generation, (iii) camera position generation, and (iv) human interaction. The system calibrates itself using the medium mounted on the robot tool tip. In this first stage, the system calculates the camera positions. In the second stage, the system uses the medium to generate a path for the robot arm to follow.
During the third stage, the system uses the given target region information and asks the user to select any occlusions in the workspace in order to generate the best camera position for viewing. Finally, given the optimal selected view of the work environment, the operator takes control of the robot arm and uses the camera views to supervise the task at hand. In the following sections, the methods developed for calibration and path generation for vision-guided training are discussed.

2.2 System Calibration and Path Generation Procedure Summary

During the calibration process, the operator first positions the cameras on stands and mounts the medium on the robot arm. The robot is moved to a number of calibration points, which the roughly positioned cameras capture. The images of these points are then processed, and, using the given and processed point information, the system finds the calibrated positions and orientations of the cameras. The algorithm is shown as a flowchart in Figure 2-1. Once the camera positions and orientations are known, the operator holds the medium to define a series of path points while the cameras capture images of them. The images of the path points are processed, and the positions of these points are computed by the system so that the robot arm can follow the path. This procedure is also used to generate the region-of-interest information for the camera placement procedure discussed in the following chapter. The algorithm procedure is shown as a flowchart in Figure 2-2.
Figure 2-1: This flowchart shows the calibration algorithm procedure. In the loop, the cameras take images of calibration points represented by the medium; the system processes the images to compute the medium location in the pixel frame; using the initial estimated camera location data and the medium location data in the world frame, the system computes the medium location; the computed result is compared with the actual location; if they agree, the camera location is registered with the latest value; otherwise, the estimated camera location data is updated and the loop repeats.

Figure 2-2: This flowchart shows the path generation procedure. The cameras take images of path points represented by the operator holding the medium; the system processes the images to compute the actual medium location and orientation in the pixel frame.

The system calibration and path generation procedures set the stage for the camera placement step. Once these steps are done, the operator no longer needs to enter the robotic work-cell.

2.3 System Calibration

One of the key advantages of the proposed vision system is that the cameras can initially be placed at unknown locations. Therefore, the first step is to automatically compute the camera locations through a system calibration procedure. This procedure uses known robot end-effector locations to calibrate the camera locations. In other words, calibration builds a relationship between the robot base frame and the camera frames, namely, the transformation T_w-c, as shown in Figure 2-3. Herein, for convenience, the robot base frame is also defined as the world frame. In the camera frame, the viewing direction from the centre of the lens is assigned as Z^c. By convention, Y^c points towards the bottom of the camera, and X^c points to the right of the camera, as viewed from the rear, to form a right-handed frame. In the following, representations in the world frame are indicated with a superscript "w", and in the camera frames with "c1" for camera number 1, "c2" for camera number 2, etc.
Figure 2-3: A diagram shows the frame assignments and relations between the robot base frame (world frame) and the camera frame.

In the proposed method, the cameras capture a large number of images of the robot end-effector at different known world-frame locations. The location of the end-effector in the image is represented in 2D image space by pixel (u, v) coordinates. By convention, the u and v coordinates have their origin at the top left-hand corner of the camera image. This relationship is illustrated in Figure 2-4.

Figure 2-4: A diagram shows the graphic representation of the relations between the robot end-effector at P = (x^w, y^w, z^w), the camera frame (x^c, y^c, z^c), and the image of the end-effector at P(u, v) in the captured image.

Here, it is assumed that the camera orientation is limited to pan and tilt only (i.e., a standard pan/tilt mount), such that X^c is always parallel to the XY plane of the robot world frame. Therefore, the transformation from the world frame to the camera frame is formed by a linear translation from O^w to O^c, followed by a rotation around the current Z axis so that the current X axis is collinear with X^c. Finally, the current frame is rotated around the current X axis so that the Z axis points along the camera's viewing direction. The overall transformation is:

T_w-c = D(x^w, y^w, z^w) Rz(φ) Rx(θ)    (2.1)

Therefore, points in the camera frame can be represented as follows:

P^w = (T_w-c)^(-1) P^c    (2.2)

or, in homogeneous coordinates,

[x^w_pt, y^w_pt, z^w_pt, 1]^T = (T_w-c)^(-1) [x^c_pt, y^c_pt, z^c_pt, 1]^T    (2.3)

In a captured image (w x h pixels), the distance, in pixels, from the observed point to the centre of the image (the origin of XY in the camera frame) is

L_uv = ((u - w/2)^2 + (v - h/2)^2)^(1/2)    (2.4)

where u and v are the coordinates of the point observed in the image frame. The computation of these points is discussed in the next section.
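The chain of Equations (2.1) to (2.3) can be sketched in code. The following Python fragment is illustrative only (the thesis implements everything in MATLAB, and the pose values here are invented): it builds T_w-c from a translation, a pan about Z, and a tilt about X, and applies it to a homogeneous point.

```python
import math

# Sketch of the transformation of Equation (2.1):
# T_w-c = D(x, y, z) Rz(phi) Rx(theta), for a pan/tilt-mounted camera.
# All numeric values below are illustrative, not taken from the thesis.

def rot_z(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def rot_x(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(t, p):
    # Apply a 4x4 homogeneous transform to a 3D point.
    ph = p + [1.0]
    return [sum(t[i][k] * ph[k] for k in range(4)) for i in range(3)]

def t_world_to_camera(x, y, z, phi, theta):
    # Equation (2.1): translation, then pan about Z, then tilt about X.
    return matmul(matmul(translate(x, y, z), rot_z(phi)), rot_x(theta))

# With zero pan/tilt the transform reduces to the translation alone.
t = t_world_to_camera(1.0, 2.0, 3.0, 0.0, 0.0)
print(apply(t, [0.0, 0.0, 0.0]))  # [1.0, 2.0, 3.0]
```

Inverting the composed matrix (or applying the three inverse elementary transforms in reverse order) gives the mapping of Equation (2.2).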
Also,

L_uv = M * ((x^c_pt)^2 + (y^c_pt)^2)^(1/2) / z^c_pt    (2.5)

where M is the scale factor converting viewed physical distances to pixels. Since the camera's viewing direction is designated as the Z^c direction, z^c is always positive. Substituting Equation (2.3) into Equation (2.5) yields

L_uv = f(M, φ, θ, x^w, y^w, z^w)    (2.6)

Equation (2.6) gives the basic relationship between a point shown on an image and the same point in the world frame. Using a Newton least-squares iterative technique over a large number of images, the camera positions in world coordinates can be computed. The calibration programs are written in MATLAB; the code is given in Appendix A.

2.4 Medium Design and Identification

One of the challenges for the calibration method described in the previous section is identifying the location of the end-effector in the camera images. Since the system is to be used in an industrial environment, the images may be taken against an uncontrolled, random background and under various lighting conditions. This creates a difficulty in identifying a given end-effector location, which has both Cartesian and orientation coordinates (6 degrees of freedom). The location to be observed must be in a unique form that can be separated from the rest of the image robustly and quickly. In summary, the medium needs to satisfy the following requirements: (i) represent both location and orientation information; (ii) be uniquely visible under various lighting conditions using standard equipment; (iii) produce images that are relatively easy to process using standard image processing software; and (iv) be low in weight and size for ease of handling by both the robot and the operator. In a robotic work-cell, location and orientation information is typically represented together by an XYZ frame. Therefore, the proposed medium has the shape of an XYZ frame. To achieve this aim, three orthogonal, intersecting lines must be visible.
Options include using a chain of point-like light sources, such as LEDs, or using tube-shaped light bulbs. Visibility and identification requirements are met by selecting the frequency range and intensity of the light sources. Normal light bulbs are not unique under normal lighting conditions. LED light sources are point sources and are often not strong enough to be visible in a cluttered environment. Laser and neon lights are single-frequency light sources that can be easily processed and identified. Lasers, however, are generally point light sources unless projected through a grid onto a surface. Neon lights, on the other hand, come in various shapes, are readily available, and are relatively inexpensive. For these reasons, in this work the medium is constructed from three neon light tubes in three different colours: green, red, and blue, as shown in Figure 2-5. These three colours were selected to correspond with the red, green, and blue sensitive CCD frames associated with typical colour digital cameras. The process of identifying the medium is done in MATLAB and makes use of the Image Processing Toolbox. First, after an image of the medium is taken, the three colours are filtered into three images containing only the coloured "blobs" wanted, i.e., red only, green only, or blue only. The properties of the filtered coloured "blobs" are also calculated, i.e., their centroid, orientation, and major and minor axes. Since these images may contain artefacts, such as reflections or other objects that have the same colours as the medium axes, further filtering is done using shape and area analysis; only the tube-like shapes are kept. The three images are then cascaded on top of each other, as shown in Figure 2-6. Using the centroid and orientation properties previously calculated, the intersection of all three neon light tubes can be found by computing, for each colour, the line passing through the centroid of the light tube with the same slope as the light tube, as shown in Figure 2-6.
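The intersection step above can be sketched as follows. Suppose two of the tube blobs yield centroids and orientation angles (the kind of properties regionprops-style blob analysis provides); the medium centre is where the two centre-lines cross. This Python fragment is a hedged illustration only (the thesis does this in MATLAB, and the pixel values are invented):

```python
import math

# Sketch of locating the medium centre in the image: each colour blob gives
# a centroid (u, v) and an orientation angle; the medium centre is the
# intersection of the tube centre-lines. Two lines are intersected here;
# the third tube could serve as a consistency check. Values are illustrative.

def intersect(c1, ang1, c2, ang2):
    # Line i consists of points c_i + t * (cos(ang_i), sin(ang_i)).
    # Solve the 2x2 linear system for the intersection (Cramer's rule).
    d1 = (math.cos(ang1), math.sin(ang1))
    d2 = (math.cos(ang2), math.sin(ang2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("tubes are (nearly) parallel")
    rhs = (c2[0] - c1[0], c2[1] - c1[1])
    t = (rhs[0] * (-d2[1]) - (-d2[0]) * rhs[1]) / det
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])

# A horizontal tube centred at (120, 100) and a vertical tube centred at
# (100, 80) have centre-lines that cross at (100, 100).
print(intersect((120.0, 100.0), 0.0, (100.0, 80.0), math.pi / 2))
```

When all three tubes are visible, intersecting the lines pairwise and averaging the results would give a more robust estimate of the medium centre.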
Figure 2-5: The medium is mounted on a CRS robotic arm.

There are cases when one of the lights is hidden or too dim to be separated from the background. In such cases, the other found axes are used to find the intersection of the medium, thereby locating the u and v location of the medium in the image. However, in the path generation stage, described in Section 2.5, if all three lights are not recognized, it is not possible to calculate the orientation of the medium itself. Therefore, in that stage, any image without all three axes recognized is discarded and the image must be re-acquired.

Figure 2-6: The image of Figure 2-5 after colour processing.

The MATLAB code for the medium identification program is given in Appendix B. This identification process is used many times throughout the system, whenever the medium is used during operation.

2.5 Path Generation

Once the camera positions are estimated via the calibration program described in Section 2.3, the two-camera system can be used to estimate the location and orientation of points represented by the medium in the robot work-cell. First, the user demonstrates a set of end-effector poses (positions and orientations) within the robot work-cell. These poses are captured by the vision system. Then, the system computes the location and orientation of each captured pose. Finally, the computed points are fed back to the controller for the robot to follow. To compute the location of a point, the transformation between the (u, v) image coordinates of the centre point of the medium and the (x, y, z) camera coordinates is rewritten from Equations (2.4) and (2.5) as

u - w/2 = M * (x^c_p / z^c_p)    (2.7)

and

v - h/2 = M * (y^c_p / z^c_p)    (2.8)

Thus, for a two-camera system observing a single point p, defined in the world frame as p^w and in each camera frame as p^c1 and p^c2, there are four equations (two for each camera) and three unknowns: p^w_x, p^w_y, and p^w_z, the x, y, z coordinates of the point in the world frame.
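The four-equations, three-unknowns solve can be sketched as a small linear least-squares problem. The Python fragment below is a hedged illustration, not the thesis code: the camera rotations, centres, scale M, and pixel readings are all invented, and each instance of Equations (2.7) and (2.8) is rearranged into a linear constraint on the world point before solving the normal equations.

```python
# Hedged sketch of recovering a world point from two calibrated cameras via
# Equations (2.7)-(2.8). Each pixel measurement u - w/2 = M*x_c/z_c, with
# p_c = R (p_w - C), rearranges to the linear constraint
# (M*r1 - a*r3) . p_w = (M*r1 - a*r3) . C, where a = u - w/2 and r_i are
# rows of R. The 4x3 system is solved by least squares (normal equations).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rows_for_camera(R, C, u, v, w, h, M):
    # R: rows of the world-to-camera rotation; C: camera centre (world frame).
    a, b = u - w / 2, v - h / 2
    ru = [M * R[0][i] - a * R[2][i] for i in range(3)]
    rv = [M * R[1][i] - b * R[2][i] for i in range(3)]
    return [(ru, dot(ru, C)), (rv, dot(rv, C))]

def solve3(A, b):
    # Cramer's rule for a 3x3 linear system.
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[A[i][k] if k != j else b[i] for k in range(3)]
                 for i in range(3)]) / d for j in range(3)]

def triangulate(obs, w, h, M):
    rows = []
    for R, C, u, v in obs:
        rows += rows_for_camera(R, C, u, v, w, h, M)
    # Normal equations: (A^T A) p = A^T b.
    AtA = [[sum(r[i] * r[j] for r, _ in rows) for j in range(3)]
           for i in range(3)]
    Atb = [sum(r[i] * rhs for r, rhs in rows) for i in range(3)]
    return solve3(AtA, Atb)

R1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # camera 1 looks along world +Z
R2 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]   # camera 2 looks along world +X
M, w, h = 800.0, 640.0, 480.0
# Synthetic pixel readings of the world point (0.5, 0.2, 3.0).
obs = [(R1, [0.0, 0.0, 0.0], 320 + 800 * 0.5 / 3, 240 + 800 * 0.2 / 3),
       (R2, [-3.0, 0.0, 3.0], 320 + 800 * 0.2 / 3.5, 240.0)]
print([round(c, 6) for c in triangulate(obs, w, h, M)])  # [0.5, 0.2, 3.0]
```

With real, noisy measurements the system is only approximately consistent, which is exactly why the thesis selects the best-conditioned three of the four equations, as described next.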
u - w/2 = M (x_p^c / z_p^c)   (2.7)

and,

v - h/2 = M (y_p^c / z_p^c)   (2.8)

To select which three of the four equations give the most accurate solution, the relative orientation of the two cameras is examined. An orthogonal separation between the optical axes of the two cameras will minimize errors in position estimation [2]. However, this is not always possible. As configured, the u and v directions on each camera's image plane are parallel to the camera's X and Y frame directions, so the four equations can be considered to represent four axes: X_c1, Y_c1, X_c2, and Y_c2. Since the image is projected onto the XY plane of the camera, the best combination of three of the above four equations corresponds to the three camera axes that are closest to orthogonal. A vector dot product between the axes of each camera is used to rank the three most orthogonal axes, which correspond to the three selected equations from Equations (2.7) and (2.8), computed for each camera, and used to compute the location of the points in the workspace.

Next, for each path point, the orientation of the end-effector is computed. The operator uses the medium to represent the desired orientation of a point by positioning the medium arms correspondingly. To compute the medium orientation, its relative orientation in the 2D image plane is required. The relationship is illustrated in Figure 2-7. Note that a constructed image plane, the "shifted image plane", is introduced during this computation process. This construction is necessary so that the 3D orientation of the medium can be translated into a 2D representation.

Figure 2-7: A diagram showing the relationship of the medium in the world frame and in the image frame.

The orientation of the shifted image plane is adjusted depending on the location of the medium image relative to the centre of the image. If the centre of the medium image is coincident with the image centre, the shifted image plane is coplanar with the image plane.
If the centre of the medium is not centred on the camera image plane, then the orientation of the shifted image plane is adjusted to reflect this offset. From the original pan, φ, and tilt, θ, of the camera/image plane, the orientation offsets are represented by Equations (2.9) and (2.10):

φ' = φ + asin((u - w/2) / M)   (2.9)

θ' = θ + asin((v - h/2) / M)   (2.10)

The offset pan and tilt angles, φ' and θ', are used to compute a revised transformation matrix between the shifted image plane and the world frame. This transformation becomes,

T_{w-c'i} = D(x^w, y^w, z^w) R_z(φ') R_x(θ')   (2.11)

The transformation matrix that represents the medium orientation in the world frame is defined as T_{w-m}, where the distinct coloured light tubes (green, red, and blue) are herein assigned as the X_m, Y_m, and Z_m axes respectively to form a right-handed frame. Then, the transformation matrix from a camera frame to the medium frame is,

T_{c'i-m} = T_{c'i-w} T_{w-m}   (2.12)

Each light tube of the medium is projected onto the shifted image plane. The projected light tube image forms an angle ξ_i measured from the horizontal axis of the image. Therefore, for each camera, there are three equations:

tan(ξ_{i,x}) = T_{c'i-m}(2,1) / T_{c'i-m}(1,1)   (2.13)

tan(ξ_{i,y}) = T_{c'i-m}(2,2) / T_{c'i-m}(1,2)   (2.14)

tan(ξ_{i,z}) = T_{c'i-m}(2,3) / T_{c'i-m}(1,3)   (2.15)

Using both cameras, six equations based on Equations (2.13) to (2.15) are obtained to solve for the three-dimensional orientation of the medium. Because the transformation is homogeneous, the equations obtained can be solved in a manner similar to an inverse kinematics problem, for the roll, pitch, and yaw angles of the medium [16]. As an example, because the matrix T_{c'i-w} is entirely known, the submatrix R_{c'i-m} of T_{c'i-m} can be written as in Equation (2.16) using the Yaw-Pitch-Roll convention (where c represents cosine, s represents sine, φ is the yaw angle, δ is the pitch angle, and ψ is the roll angle). Depending on the convention of the transformation parameters, the transformation matrix has different formats.
However, in both the fixed frame and Euler angle transformations, there is always a pair of elements (such as elements (1,1) and (2,1) in Equation (2.16) below, for the fixed frame case) that contain only two of the three parameter angles, so that Equation (2.13) can be expanded as shown in Equation (2.17). Therefore, Equation (2.13) is first solved for each camera to yield the yaw angle, which is then substituted into the other equations to resolve the roll and pitch angles.

R_{c'i-m} = [ r11 r12 r13 ]   [ cφcδ   cφsδsψ - sφcψ   cφsδcψ + sφsψ ]
            [ r21 r22 r23 ] x [ sφcδ   sφsδsψ + cφcψ   sφsδcψ - cφsψ ]   (2.16)
            [ r31 r32 r33 ]   [ -sδ    cδsψ            cδcψ          ]

tan(ξ_{i,x}) = (r21 cφcδ + r22 sφcδ - r23 sδ) / (r11 cφcδ + r12 sφcδ - r13 sδ)   (2.17)

The advantage of this approach is that the solution depends on measurements from both cameras. In addition, as is typical in problems involving inverse kinematics, there are multiple sets of solutions at the end of the computation. An example of this case is illustrated in Figure 2-8 for two different medium poses. The light tube image orientations, i.e. the ξ_i angles, are the same for both images. Using two cameras, i.e. two images from two perspectives, and the relative position information of the three light tubes with a right-hand frame solution, it is possible to resolve the problem to the correct solution. Specifically, as shown in this case, the centroid of each light tube is found first. Then, using each set of orientation solutions generated by the computation algorithm, the medium is projected back to the constructed 2D image that corresponds to that particular set of solutions. If the medium arm locations on this constructed image match the actual image, then the correct solution is identified.

Figure 2-8: The medium at these two orientations can give the same axis orientation data.
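The projection relations of Equations (2.13) to (2.15), and the sign ambiguity of Figure 2-8, can be checked numerically. The Python sketch below (the thesis implementation is in MATLAB; the camera rotation is taken as the identity for simplicity) builds the Yaw-Pitch-Roll matrix of Equation (2.16) and shows that rotating the medium by 180 degrees about the optical axis leaves all three tangent ratios unchanged, which is exactly the ambiguity the two-camera check resolves:

```python
import math

def rpy(phi, delta, psi):
    """Fixed-frame Yaw-Pitch-Roll rotation matrix, as in Equation (2.16):
    phi = yaw, delta = pitch, psi = roll."""
    c, s = math.cos, math.sin
    return [
        [c(phi) * c(delta),
         c(phi) * s(delta) * s(psi) - s(phi) * c(psi),
         c(phi) * s(delta) * c(psi) + s(phi) * s(psi)],
        [s(phi) * c(delta),
         s(phi) * s(delta) * s(psi) + c(phi) * c(psi),
         s(phi) * s(delta) * c(psi) - c(phi) * s(psi)],
        [-s(delta), c(delta) * s(psi), c(delta) * c(psi)],
    ]

def tube_tangents(R):
    """tan(xi) for each projected tube axis, per Equations (2.13)-(2.15):
    column j of R projects to an image angle xi with tan(xi) = R[1][j]/R[0][j]."""
    return [R[1][j] / R[0][j] for j in range(3)]
```

Negating the first two rows of R corresponds to a 180-degree rotation about the viewing axis, yet every ratio R[1][j]/R[0][j] is unchanged, so a single camera cannot distinguish the two poses.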
Using this methodology for calculating the location and orientation of the medium, an operator can hold the medium to represent any point of the path that the robot needs to follow, thereby performing path generation. The path point location and orientation calculations are performed in MATLAB; the code is listed in Appendix C.

Chapter 3 CAMERA PLACEMENT

3.1 Overview

The objective of the camera placement process is to provide the user with optimized images of the robotic tasks that can assist the user in guiding the robot through fine motion operations without entering the work-cell. During such operations, the user is usually concerned only with the operation of the robot over a small portion of the workspace. Therefore, a region of interest (ROI) is defined to include the portion of the viewable workspace required to guide the task, including all the parts or targets that are involved in the task. The ROI is specified using the same techniques and computation processes as for specifying path points. Herein, the ROI is defined by a set of vertex points plus a centre point. The user holds the medium to represent the required point locations, and the path generation algorithm is used to calculate the location of these points.

The constraints used to optimize individual camera viewing positions are focus/depth of field, field of view (FOV), visibility (occlusion-free viewing), and resolution. Zoom lenses are assumed to be available on the cameras. As described in Chapter 2, the cameras are assumed to be pan-tilt only, such that the horizontal axis of the image frame is always parallel to the world XY coordinates. Cowan and Kovesi [9] described the formulations for all four constraints in the case of a non-zoom lens. In this work, these equations are modified to apply to zoom lenses and used to compute optimized camera locations. The following four sections discuss these four constraints as applied to the present problem.
Practically, the focus, FOV, and resolution constraints can be solved together. Since the occlusion-free algorithm requires user input, it must be performed separately. In addition, because the occlusion-free algorithm requires the cameras to be centred and focused on the region of interest before asking the user for input, the other three constraints must be computed first. Figure 3-1 shows a flowchart of the camera placement procedure: the user defines the ROI using the medium; the system computes the ROI information, then computes and moves to new camera orientations and settings to satisfy the focus, zoom, and resolution constraints, drawing on the stored camera location and settings data; the system provides the operator with the current viewing images; the operator selects any occlusion; and the system computes and moves to a new camera location, orientation, and optical settings to avoid the occlusion.

Figure 3-1: Camera placement procedure.

3.2 Focus

Focus can also be understood as depth of field. For an object being observed by a camera with focal length f and image distance d, from an object distance D, as shown in Figure 3-2, the object is said to be in absolute focus if,

1/d = 1/f - 1/D   (3.1)

Figure 3-2: Optic parameters for a convex lens.

However, in practice, there exists a depth of field range. If the camera observes the object within this depth range, the object is considered in focus; otherwise, the object is out of focus. In the following equations, a denotes the aperture setting of the lens and c denotes the camera CCD pixel size. The maximum and minimum depths of field are then computed respectively as,

D1 = D a f / (a f - c(D - f))   (3.2)

and,

D2 = D a f / (a f + c(D - f))   (3.3)

where D1 and D2 are the maximum and minimum ranges of the focused field of view. As shown in Figure 3-3, if a point at the object distance, D, is in absolute focus, then any object that is located between D1 and D2 is considered in focus. Therefore, (D1 - D2) is defined as the depth of field.
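Equations (3.2) and (3.3) can be sketched directly. This is a minimal Python sketch (the thesis implementation is in MATLAB); the numeric values in the check, including the pixel size, are illustrative assumptions, not the thesis camera parameters:

```python
def depth_of_field(D, f, a, c):
    """Near/far limits of acceptable focus, per Equations (3.2)-(3.3).
    D: focused object distance, f: focal length, a: aperture,
    c: pixel size (all in consistent units, e.g. mm)."""
    far = D * a * f / (a * f - c * (D - f))    # D1, Eq. (3.2)
    near = D * a * f / (a * f + c * (D - f))   # D2, Eq. (3.3)
    return near, far

def roi_in_focus(D, f, a, c, d_near, d_far):
    """Constraint (3.4): the closest and furthest ROI vertex distances
    (d_near, d_far) must both lie inside [D2, D1]."""
    near, far = depth_of_field(D, f, a, c)
    return d_near >= near and d_far <= far
```

The search for the zoom setting then amounts to picking the largest focal length for which `roi_in_focus` still holds for the measured ROI vertex distances.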
Figure 3-3: Graphical definition of depth of field.

In the case of zoom lenses, the focal length, f, can be adjusted. To compute constraints on the focal length, the camera location is assumed fixed relative to the ROI; that is, D, the distance from the camera location to the centre point of the ROI, is constant. In addition, since the ROI is defined by vertex points, let D'1 be the distance from the camera location to the furthest vertex point, and D'2 the distance from the camera location to the closest vertex point. Then, as shown in Figure 3-4, it is required that,

D'1 <= D1 and D'2 >= D2   (3.4)

The desired focal length is the maximum value satisfying Equations (3.2), (3.3), and (3.4).

Figure 3-4: Diagrams explaining the depth of field using a zoom lens: (a) constraint not satisfied; (b) constraint satisfied.

3.3 Field of View

The field of view (FOV) constraint ensures that the entire ROI is within the field of view of the camera. The field of view angle, α, is defined as,

α = 2 arctan(P_min / (2d))   (3.5)

as shown in Figure 3-5, where P_min is the minimum image plane dimension and d is the image distance (again, refer to Figure 3-2).

Figure 3-5: Field of view: α/2 is half of the field of view angle.

As mentioned in Section 3.1, the ROI is defined by a centre point and a number of vertex points which define the edges of the region; let D be the distance from the camera location to the centre of the ROI. The dimension r is defined as the longest distance from the centre point to any of the vertex points in the ROI. This ensures that all parts of the ROI are included in the viewing circle of radius r circumscribed about the centre point. Then, as shown in Figure 3-5,

α = 2 arctan(r / D)
(3.6)

Combining Equations (3.5) and (3.6) yields,

P_min / (2d) = r / D   (3.7)

Solving Equation (3.7) for the image distance, d, and substituting into Equation (3.1) yields:

f = P_min D / (2r + P_min)   (3.8)

For a zoom lens, the result of Equation (3.8) gives the maximum focal length that will include the entire ROI in the camera view.

To evaluate the FOV constraint analytically, the image from the original view is compared with the image obtained after the adjustment. Specifically, the ROI centre is compared with the actual image centre, and all vertex points are examined. Equation (3.9) is used to evaluate the centre and Equation (3.10) is used to evaluate the vertex points. In these equations, E is the measure of the ROI centre versus the image centre; if the ROI centre, C_ROI, coincides with the image centre, C_image, E is maximized, and if the ROI centre is not included in the image at all, E is 0.

E = 1 - |C_ROI C_image| / |C_image|   (3.9)

For inclusiveness, I: if the number of indication points in the image is less than the total number of indication points, then I is 0; otherwise, Equation (3.10) applies. P is the indication point in the image that is closest to any edge of the image; B is the point on the closest image edge such that PB is parallel to one of the image edges. If the indication point is exactly on the edge of the image, then I is 1, its maximum value.

I = 1 - |P - B| / |B|   (3.10)

Therefore, the overall FOV constraint can be represented by Equation (3.11), where FOV takes a value between 0 and 2, 0 being the worst and 2 the best.

FOV = E + I   (3.11)

3.4 Resolution

Resolution is defined by the actual physical length per pixel. The resolution angle, μ, is formed by the unit diagonal length of a pixel at the lens centre, as shown in Figure 3-6.

Figure 3-6: The diagram explains the definition of the resolution angle.

The larger the resolution angle, μ, the better the image resolution.
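The focal-length bound of Equation (3.8) and the centring term of Equation (3.9) can be sketched as follows. This is a minimal Python sketch (the thesis implementation is in MATLAB); the normalisation in Equation (3.9) is interpreted here, as an assumption, as the image centre-to-corner distance:

```python
import math

def max_focal_length(P_min, D, r):
    """Maximum focal length keeping the whole ROI in view, Eq. (3.8)."""
    return P_min * D / (2 * r + P_min)

def centring_score(c_roi, c_image):
    """Centring term E of Eq. (3.9), under one reading of the equation:
    the ROI-centre-to-image-centre distance normalised by the image
    centre-to-corner distance, clamped to 0 outside the image."""
    dist = math.hypot(c_roi[0] - c_image[0], c_roi[1] - c_image[1])
    norm = math.hypot(c_image[0], c_image[1])
    return max(0.0, 1.0 - dist / norm)
```

A useful self-check is that the focal length from Equation (3.8) reproduces Equation (3.7): the implied image distance d = P_min D / (2r) satisfies the thin-lens relation (3.1) exactly.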
Theoretically, μ is defined by Equation (3.12), where d and c are as defined in Section 3.2:

μ = 2 arctan(√(2c²) / (2d))   (3.12)

Combined with Equation (3.1), μ is maximized by maximizing the focal length. If no particular resolution requirement is specified, a larger focal length (up to the focal limits of the camera) is desirable to maximize resolution.

3.5 Occlusions

Occlusion occurs when some portion of the ROI is blocked in the camera's field of view. Occlusion constraints can also be called visibility constraints. During robotic operations, occlusions can be caused by a structure that is part of the work-cell or the work-cell environment, i.e. a stationary occlusion. Occlusion can also be caused by the robot arm itself or by the parts that are to be manoeuvred by the robot arm; these are mobile occlusions. The objective of this computation is to compute an optimal, occlusion-free position, orientation, and focal length for a viewing camera observing a task in a defined region of interest. Optimality is defined by minimizing the required change in position of the camera, i.e.:

min(S) s.t. {O ∉ (FOV)} ∩ {(ROI) ∈ (FOV)}   (3.13)

where S is the position change of the camera and O is the set of occlusions in the camera FOV.

Existing works address occlusions by having a detailed model of the entire work-cell and work-cell environment available a priori [7-14]. However, such approaches are not suitable for many industrial robotic work-cells where features, structures, targets, and tasks are regularly changed. Therefore, the proposed strategy is to obtain user input during task monitoring regarding occlusion information. In other words, the user identifies the occluding object and the system utilizes that information to compute the optimal moves for the camera to avoid the occlusion. The hidden-line problem in graphics is used herein to compute the occlusion-free viewing regions. It is assumed that the occluding objects are convex polygon shaped.
Figure 3-7 is a two-dimensional graph that illustrates this approach. In the figure, the thick line is the object being observed and the gray oval is the occluding object. Two lines are subtended from either end of the observed object and pass tangent to the occluding object. The space is thereby separated into three viewing regions, labelled A, B, and C. The view of the camera is at least partially blocked if the camera is placed in the region labelled A. The camera views will be occlusion-free if the camera location is in either region B or C. The aim, when the camera view is occluded, is to move the camera to the occlusion-free location requiring minimum travel. In the case of Figure 3-7, region B is closer to the original camera location and thus is the preferred occlusion-free location.

Figure 3-7: The hidden-line problem applied to solving the visibility problem.

In this work, it is assumed that the camera mount has N > 2 degrees of freedom; i.e. it is free to move with respect to the world frame, in addition to pan and tilt orientation. Note that this assumption is necessary to avoid all occlusions: if the camera has only pan and tilt motions, it cannot achieve an occlusion-free view of the same target if the target in the current view is occluded.

Using the pinhole camera model, the view of a camera is cone shaped. Therefore, even though the image provided to the user for occlusion identification is rectangular, the actual camera image is a circle, as shown in Figure 3-8. With the occluding object in the current view, a new camera location is needed to exclude it. One possibility is shown in Figure 3-9.

Figure 3-8: The dotted circle is the actual camera view; the solid rectangle is the user view; the gray object is the occluding object bounded by the assumed convex polygon.
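The region test of Figure 3-7 can be sketched in 2D with a simple line-of-sight check. This Python sketch is an assumption-laden simplification: the convex-polygon occluder is replaced by a circle, and the camera is deemed to be in region A exactly when some sight line from the camera to the observed object passes through the occluder:

```python
import math

def sight_blocked(camera, target, centre, radius):
    """True if the segment camera->target passes through a circular
    occluder (centre, radius): 2D analogue of the Fig. 3-7 test."""
    cx, cy = camera
    tx, ty = target
    ox, oy = centre
    dx, dy = tx - cx, ty - cy
    seg2 = dx * dx + dy * dy
    # Parameter of the closest point on the segment to the circle centre.
    t = max(0.0, min(1.0, ((ox - cx) * dx + (oy - cy) * dy) / seg2))
    px, py = cx + t * dx, cy + t * dy
    return math.hypot(px - ox, py - oy) < radius

def view_occluded(camera, object_pts, centre, radius):
    """Camera lies in region A (partially blocked) if any sight line to
    the observed object's sample points is interrupted by the occluder."""
    return any(sight_blocked(camera, p, centre, radius)
               for p in object_pts)
```

Moving the camera sideways until `view_occluded` becomes false corresponds to crossing from region A into region B or C; scanning candidate positions by increasing displacement gives the minimum-travel choice between the two.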
Figure 3-9: A possible occlusion-free view generated by moving from the original optical centre. The arrow shows the actual move.

The direction of motion must be selected to achieve the minimum moving distance. Referring to Figure 3-7, the boundaries that separate the three regions are tangent to the occluding object. For regions A and B, these boundaries pass through the points on the occluding object closest to the original camera optical axis (i.e., in the 2D image representation, the image centre). As shown in Figure 3-10, a vector G is defined in the image plane along the shortest distance between the convex polygon enclosing the occluding object and the centre of the image, C. Let A be the area of the convex polygon, and let N be the 2-dimensional projection of the desired camera direction of motion. Then,

N = G, if C ∈ A   (3.14)

N = -G, if C ∉ A   (3.15)

N is a vector in 2D space. In order to compute the 3D vector that represents the camera motion, it is necessary to analyse the camera-ROI-occlusion geometry from another perspective.

Figure 3-10: The dashed arrow shows the direction of the camera motion to avoid occlusion with minimum travel.

Figure 3-11 shows a plane section along the axis of the camera viewing cone containing the 2-dimensional projected direction of the camera motion, N. Let AB be the diameter of the viewing cone at the region of interest. CP is the object distance and the optical axis of
By selecting the point K on AE such that CK is normal to AE, CK is the optimal 3D camera direction-of-motion vector that defines the shortest distance to avoid occlusion.

Figure 3-11: Diagram for computation of the minimum camera motion.

Next, it is necessary to verify that CK can be computed with the information in hand. In Figure 3-11, one can construct a line FH that passes through point J and is parallel to AB, so that, for triangles ACG and ACE,

CE / HJ = AC / AH = CG / FG   (3.16)

where,

FG = CG - CF   (3.17)

Therefore,

CE = HJ · CG / (CG - CF)   (3.18)

CG is the known object distance, i.e. D in Equation (3.1). CF is the distance from the camera to the occluding object. HJ represents the distance from the point on the occluding object that is closest to the centre of the image to the edge of the image cone at the depth of CF. Using the calibration data generated as described in Chapter 2, the system relates the object pixel size and the actual object size using the scale factor M, thereby finding CF and HJ, and in turn CE. Once CE is known, in triangle CEK, the length of CK can be found by Equation (3.19).

3.6 3D Visibility Constraint

During all of the above computation, one additional constraint is used for maintaining visual accuracy. In order to maintain 3D accuracy using two cameras, the angle between the optical axes of the cameras must be greater than 40 degrees and less than 140 degrees [2]. Therefore, an algorithm is used to ensure that the cameras are positioned within this requirement. The algorithm first checks the original relative camera positions. It then tracks the location changes of each camera and the corresponding relative position changes. If the camera movements result in exceeding the requirement, it computes additional movements that keep the cameras within the required limit while staying occlusion-free.
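The similar-triangle construction of Figure 3-11 reduces to a few lines of arithmetic. This Python sketch (the thesis implementation is in MATLAB) takes the angle at E in triangle CEK as a given input, since in the sketch it is not rederived from the cone geometry:

```python
import math

def min_camera_move(CG, CF, HJ, angle_cek):
    """Minimum camera displacement to clear an occluder (Fig. 3-11).
    CG: object distance D; CF: camera-to-occluder distance;
    HJ: occluder-to-view-cone-edge distance at depth CF;
    angle_cek: angle at E in triangle CEK, in radians.
    Returns (CE, CK), per Eqs. (3.18) and (3.19)."""
    CE = HJ * CG / (CG - CF)          # Eq. (3.18)
    CK = CE * math.sin(angle_cek)     # Eq. (3.19)
    return CE, CK
```

As expected from the geometry, CE grows as the occluder moves closer to the ROI (CF approaching CG), and the perpendicular move CK is never longer than CE itself.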
CK = CE · sin(∠CEK)   (3.19)

The implementation and testing of the overall algorithm and system are discussed in the following chapter.

3.7 Summary

In this chapter, each camera placement constraint has been treated theoretically, namely, focus, field of view (FOV), resolution, and visibility (occlusion-free viewing). Since the proposed method is unique in computing camera placement with zoom lenses, it is especially advantageous in industrial settings where cameras can only be placed in limited areas. All of the above algorithms are written in MATLAB; the code is included in Appendix D. The next chapter presents the implementation of Chapters 2 and 3 along with experimental results.

Chapter 4 EXPERIMENTS AND RESULTS

In this chapter, the experimental setup and the results of the experiments are presented. As discussed in the previous chapters, the system involves three stages: calibration, path generation, and camera placement. The objective of this chapter is to evaluate the overall system performance by testing each stage of the system. The chapter commences with an overview of the system procedures, followed by the experimental setup, and then test results for each stage. A summary of the results for the system concludes the chapter.

4.1 Overview

As discussed in Chapter 1, the objective of the system is to assist human operators in performing robot training. The key is to reduce the number of times that the operator needs to enter the work-cell when training the robot for complex manoeuvres. The proposed system requires the user to enter the robotic work-cell no more than three times for one training operation. Figure 4-1 presents the system procedure as a flowchart: rectangles indicate user actions; round-corner rectangles are robot actions; and triangles are system actions. The objective of the experimental work is to validate the proposed approach and test the accuracy of the system.
The following aspects of the system are investigated: in the calibration stage, the camera location computation; in the path generation stage, the path point computation; and in the camera placement stage, the camera location and settings computation.

Figure 4-1: The flowchart shows the overall system procedure.

4.2 Experimental Setup

The experimental setup comprises the following equipment: two CRS A465 manipulators and their controllers; the medium described in Chapter 2; two JVC TK1070U colour RGB cameras with 1:1.8 12.5-75 mm zoom lenses; and one Pentium III 650 desktop computer for image processing and other computation. One of the cameras is mounted on a stationary tripod and the other is mounted on a second CRS manipulator to allow testing of the camera relocation algorithm. Figure 4-2 is a picture of the experimental setup and Figure 4-3 illustrates the configuration of the cameras relative to the monitored robot arm.

Figure 4-2: The picture shows the experimental setup.

The cameras are connected to the processing PC. A Matrox Morphis image acquisition card is used to obtain images from the two cameras simultaneously. Both robot controllers and the PC are connected on the same network so that data can be shared by all three units. The connection setup is illustrated in Figure 4-4.

Figure 4-3: The schematic shows camera locations relative to the robot arm in the experimental setup.

Figure 4-4: The diagram shows the equipment connections.

4.3 System Calibration

The first step is system calibration. This is the most important step of the entire procedure because its accuracy directly affects the accuracy of the rest of the stages.
Without an accurate calibration, the path cannot be generated accurately and the cameras cannot be placed properly as required. Initially, the cameras are randomly placed outside of the work-cell, with their optical axes set to point generally towards the work-cell. More than six hundred possible calibration points within the work-cell are generated, located about 5 to 10 centimetres apart and forming a 3D grid. The locations of these calibration points are numbered and registered in the robot controller for control purposes and also saved in a database for computation purposes.

The calibration process computes six variables for each camera, as listed in Chapter 2: three location variables, two orientation variables, and one scale factor variable for converting distances from the pixel frame to actual physical length units. Before the calibration process begins, rough estimates of the camera locations (obtained with a tape measure) and their orientations (estimated by eye) are obtained and provided. These estimates are used as initial input for the calibration process. The initial input for the scale factor is obtained from Equation (4.1), where d is the image distance and c is the pixel size (obtained from the manufacturer's information).

M = d / c   (4.1)

Four different sets of calibration points were used to compute these six variables. Sets 1 through 4 have 24, 50, 75, and 100 points respectively. For the following experiments, the initial estimates for the six camera variables for each round of calibration are listed in Table 4-1. The camera locations obtained after the system calibration are given in Table 4-2.

          M       φ (deg)  θ (deg)  X (mm)  Y (mm)  Z (mm)
Camera 1  1100    -10      -110     100     -2200   1200
Camera 2  1350    100      -100     3500    700     1100

Table 4-1: Initial estimates for camera locations.

          M       φ (deg)  θ (deg)  X (mm)   Y (mm)  Z (mm)
Camera 1  1037.5  -7.9     -103.1   29.84    -2067   984.95
Camera 2  1406    101.3    -100.3   3685.5   714.7   1019.8

Table 4-2: Computed camera locations (calibration set 4).
To test the accuracy of the calibrated camera locations, 15 test points were generated and their location and orientation information saved in the robot controller. The manipulator is programmed to carry the medium to these 15 test points and the camera system computes the location and orientation of these points. By comparing the point information calculated by the system with the corresponding point information commanded to the controller, the accuracy of the camera location information is evaluated. If the computed point is exactly the same as the registered point, then the system is considered perfectly calibrated. Both the position and the orientation of the test points are evaluated. Figure 4-5 shows the location error summary for each calibration set. For orientation, the axis-angle representation was used to compute the difference between the computed point orientation and the corresponding actual point orientation. Figure 4-6 shows the orientation error summary for each calibration set.

Figure 4-5: The diagram shows the location error summary (standard deviation and average, in mm, for the 25-, 50-, 75-, and 100-point calibration sets).

Figure 4-6: The diagram shows the orientation error summary (standard deviation and average, in degrees, for the 25-, 50-, 75-, and 100-point calibration sets).

As expected, the errors for both location and orientation decrease as the number of calibration points increases. In the presented case, the calibration set of 100 points provided satisfactory results. Therefore, the camera location information generated using the 100-point set (set 4) is used for the work presented in the remainder of the chapter, as given in Table 4-2. The error summary for the 15 test points using calibration set 4 is given in Table 4-3.

          Location Error (mm)  Orientation Error (deg)
Maximum   10.89                14.92
Minimum   4.68                 2.42
Average   6.9                  5.46
Std Dev.  2.24                 4.28

Table 4-3: Error summary of 15 test points using calibration set 4 (100 calibration points).
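The axis-angle orientation error used above can be computed from the two rotation matrices alone. This Python sketch (the thesis evaluation is done in MATLAB) extracts the angle of the relative rotation R_a^T R_b from its trace:

```python
import math

def axis_angle_error(Ra, Rb):
    """Orientation error (degrees) between two rotation matrices,
    i.e. the rotation angle of the relative rotation Ra^T @ Rb."""
    # trace(Ra^T @ Rb) is the elementwise product sum of Ra and Rb.
    trace = sum(Ra[i][k] * Rb[i][k] for k in range(3) for i in range(3))
    # Clamp for numerical safety before acos.
    cos_angle = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
    return math.degrees(math.acos(cos_angle))
```

This single scalar is what allows the orientation accuracy of each calibration set to be summarized as a maximum, minimum, average, and standard deviation, as in Table 4-3.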
As seen in Table 4-3, the maximum error for a point is about 1 cm, which, for general trajectory motion teaching, as discussed in the following section, is reasonable. However, for fine motion teaching, user intervention via the camera display of the region of interest (and teach pendant control) would still be required, as discussed in Section 4.5.

4.4 Path Generation

As mentioned in Section 4.3, the accuracy of path generation is closely related to the accuracy of the calibration process. In fact, because path generation is a process of computing path point location and orientation information, it is exactly the same as the experimental procedure of the calibration stage, with the exception that instead of the test points, it is now the path points that are computed. In addition, instead of the robot holding the medium, the user holds the medium to represent the path points that are to be observed and computed. When the user is holding the medium to represent a point, it is impossible to compare the location and orientation accuracy numerically, since there is no numerical reference value to compare against. However, because the computation process is the same, the accuracy of the calibration process reflects the accuracy of the path generation procedure.

The application and accuracy of the path generation process can be shown visually. If the medium appears at the same locations and orientations in the user-held image and the robot play-back image, the path generation procedure is considered perfect. Figure 4-7 shows an example of images from one of the computed path points, where images (a) and (b) are the path point demonstrated by the user from the two camera views, and images (c) and (d) are the robot play-back of the path point.

4.5 Camera Placement

The four camera placement constraints are focus, resolution, field of view, and 2D visibility.
In addition, there is the 3D visibility constraint, which ensures that the two-camera system achieves the best relative location and orientation combination for 3D localization and visualization purposes. For an individual camera, as presented in Chapter 3, the focus, resolution, and field of view adjustments are performed simultaneously. Because user input is required for incorporating the 2D visibility constraint, it is performed separately. Since only one of the cameras is mobile, the test is done only on the mobile camera. The location parameters of this camera are shown in Table 4-2. Originally, the lens has the following optical settings:

Aperture  Focal Length  Object Distance
8         12.5          ∞

Table 4-4: Original camera lens optical settings.

At these settings and this location, the image obtained by the camera is shown in Figure 4-8. The region of interest (ROI), partially seen in Figure 4-8, is set with indication points marked by black crosses; their coordinate information is summarized in Table 4-5.

Figure 4-8: The image is the original view before running the camera placement algorithm.

The system first performs the camera placement routine with the focus, resolution, and field of view (FOV) constraints. The resulting camera location and optical settings are listed in Table 4-6. The corresponding view is shown in Figure 4-9. Note that the point on the box shown in Figures 4-8 and 4-9 is considered the centre point or target point.

      X    Y    Z
Pt 1  520  200  200
Pt 2  730  100  100
Pt 3  570  0    100
Pt 4  300  300  200
Pt 5  470  460  200

Table 4-5: ROI point coordinates in world space.

Pan (deg)  Tilt (deg)  X (mm)  Y (mm)  Z (mm)   Aperture  Focal Length (mm)  Object Distance (mm)
-8.8       -114.4      29.84   -2067   984.95   8         35.43              2350

Table 4-6: Camera position and optical settings after focus, field of view, and resolution adjustment.

Figure 4-9: The image is taken after adjustments to optimize focus, field of view, and resolution.

To examine the effectiveness of the algorithm, each constraint is individually analysed.
From the FOV perspective, the original image, Figure 4-8, is compared with the view after adjustment, Figure 4-9. From visual inspection, the ROI is included entirely in the adjusted view, whereas the lower part of the ROI is missing from the original image. In addition, the ROI is clearly more centred in the adjusted view than in the original image. Using the analysis presented in Chapter 3, i.e., Equations (3.9)-(3.11), the FOV adjustment is examined. Table 4-7 compares the scores of Figures 4-8 and 4-9; the newly adjusted view has improved the FOV significantly.

                E                 FOV
Original View   0.5343   0        0.5343
New View        0.9474   0.9688   1.9162

Table 4-7: Comparison of FOV constraint measurement results.

On the zoom lens, zooming is adjusted by changing the focal length, and focusing is adjusted by changing the object distance, D. Therefore, to evaluate the focus of the newly adjusted view, the following analysis is done. For the adjusted view, Figure 4-9, D is set at 2350mm; Figures 4-10 and 4-11 are views with D = 2000mm and D = 3000mm respectively.

Figure 4-10: The image taken at D = 2000mm.

The three images, Figures 4-9, 4-10, and 4-11, are compared to determine which is most focused. The gradient of each image is analysed: the more focused image has the larger gradient magnitude. Figures 4-12 to 4-14 compare the gradient magnitudes of the three images. With D = 2350mm, the image is the most focused.

Figure 4-14: The gradient graph for the image at D = 3000mm.

For resolution, no specific standard is applied: higher resolution is simply better. Therefore, the best resolution is automatically selected, given that the focus and FOV constraints are satisfied. Next, the 2D visibility constraint is examined. The view of Figure 4-9, with settings as shown in Table 4-6, is purposely blocked by an occluding object in 6 different ways, as shown in Figure 4-15.
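The gradient-based focus comparison can be sketched as follows. This Python fragment is a stand-in for the MATLAB analysis: `sharpness` and `box_blur` are illustrative helpers, not the exact routines used in the thesis. It computes a sum-of-squared-gradients score, which drops when an image is blurred, so a well-focused image scores highest.

```python
import numpy as np

def sharpness(img):
    """Sum of squared finite-difference gradients; larger means more in focus."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

def box_blur(img):
    """3x3 box blur via edge padding and averaging (stands in for defocus)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
```

Scoring a crisp striped test image against a blurred copy of it reproduces the qualitative result above: defocusing spreads each edge over more pixels, lowering the gradient energy.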
The system computes six new camera locations and settings to avoid the occlusion. The new occlusion-free views are shown in Figure 4-16, with the new location and settings information listed in Table 4-8.

           Pan (deg)   Tilt (deg)   X (mm)    Y (mm)   Z (mm)   Focal Length (mm)   Object Distance (mm)
View (1)   -2.228      -111.35      366.77    -2129    992.12   35.879              2437
View (2)   -3.966      -111.31      298.54    -2118    987.27   35.249              2428
View (3)   -3.389      -111.22      321.03    -2123    984.95   35.289              2431
View (4)   -18.105     -110.95      -120.1    -1971    930.54   35.268              2429
View (5)   -20.56      -110.88      -130.6    -1900    930.82   35.466              2443
View (6)   -18.111     -111.24      -148.93   -1913    884.95   35.267              2429

Table 4-8: Occlusion-free camera locations and lens settings.

The evaluation criterion is designed such that the degree of occlusion is measured by the area of the image that is covered by any occluding object. Therefore, in the worst case, the entire image is occluded; in the best case, the view is occlusion-free. As shown in Figure 4-16, all computed new views are occlusion-free. It is also necessary at this point to check if the 3D visibility constraint is satisfied. Recall that the 3D visibility constraint states that the angle formed by the optical axes of the two cameras must stay within 40 and 140 degrees. Each of the six new camera location/orientation settings is therefore examined; Table 4-9 shows that all six cases satisfy the constraint.

View Set                       1       2        3       4        5        6
Angle of Separation (degree)   98.88   100.44   99.96   112.93   115.04   112.81

Table 4-9: 3D visibility test results.
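The 3D visibility check itself reduces to measuring the angle between the two cameras' optical axes. The minimal Python sketch below is illustrative: the pan/tilt-to-axis convention is an assumption for this example and may differ from the convention used in the thesis.

```python
import math

def optical_axis(pan_deg, tilt_deg):
    """Unit direction of the optical axis for a given pan/tilt (assumed
    convention: pan about the vertical axis, then tilt up or down)."""
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(t) * math.cos(p), math.cos(t) * math.sin(p), math.sin(t))

def separation_angle(axis_a, axis_b):
    """Angle in degrees between two unit direction vectors."""
    dot = sum(a * b for a, b in zip(axis_a, axis_b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def visibility_ok(axis_a, axis_b, lo=40.0, hi=140.0):
    """3D visibility constraint: the separation must lie within [lo, hi] degrees."""
    return lo <= separation_angle(axis_a, axis_b) <= hi
```

Two cameras looking along the same axis (0 degrees of separation) fail the constraint, while perpendicular axes (90 degrees) pass, consistent with the 40 to 140 degree window verified in Table 4-9.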
For the current settings, with 100 calibration points, the algorithm calibrates the system to an average location error of less than 7 mm and an average orientation error of less than 5.5 degrees in axis angles. The calibration accuracy directly affects the accuracy of the path generation process. Next, the camera placement process is evaluated. First, the position and settings of the cameras are adjusted to optimize the focus, field of view, and resolution constraints. Results show a significant improvement in image quality from the original view to the optimized view. Then, using six test cases, with the occluding object appearing at a different position of the image in each case, the system successfully computes occlusion-free camera positions and lens settings. While performing the camera placement procedure, the system maintains the 3D visibility requirement for the two cameras. The supporting experiment algorithms and programs are listed in Appendix E.

Chapter 5

CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE WORK

5.1 Conclusions

The current robotic training process is time consuming and costly, due to the constraints required to ensure the safety of human operators working within the robot's workspace. By reducing the time that the operator spends in the robot's workspace, both time and cost can be reduced. The work presented herein provides a computer-vision-based approach that assists human operators in performing accurate robotic training while reducing the operator's exposure within the robot work-cell. The vision system is a two-camera system that uses standard off-the-shelf equipment plus a specially designed medium. The system is self-calibrated, so the cameras can be initially placed around the robotic work-cell without their precise locations being known. Once calibrated, the system can accurately detect the location and orientation of any point represented by the medium visible to both cameras.
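The two-camera localization step is, at its core, a two-view triangulation: each camera contributes two linear constraints on the unknown 3D point, and the stacked system is solved in a least-squares sense. The sketch below uses a generic linear (DLT) triangulation with made-up intrinsics; it is not the exact formulation of the thesis' Appendix C code, only an illustration of the underlying idea.

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 camera matrix P = K [R | t] from intrinsics and pose."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen in two calibrated views."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])      # u * (3rd row) - (1st row)
        rows.append(v * P[2] - P[1])      # v * (3rd row) - (2nd row)
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                            # null vector of the stacked system
    return X[:3] / X[3]                   # dehomogenize
```

With noise-free projections from two cameras at distinct centres, this recovers the 3D point exactly, which is why calibration error, not the triangulation itself, dominates the overall system accuracy.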
Using the medium, the operator can define a set of robot path points in one step, without repeatedly exiting and entering the work-cell to "move and check" the robot's end-effector location at each path point. This approach significantly reduces the number of times the operator must enter and exit the work-cell. The final stage of the system is camera placement. During this step, the cameras (if equipped with a pan-tilt unit or another mobile platform) are placed at optimal positions so that the operator can fine-tune the manipulator position from outside the work-cell via the images provided by the cameras. Beyond the above "industrial" contributions, this work also makes the following technical contributions. First, the unique design of the medium is advantageous for representing both location and orientation. Second, because the calibration process calibrates one camera at a time, there is no limitation on the number of cameras that can be used in the system.
The presented work uses only the location information of the calibration points for system calibration process. One suggestion is to also use the orientation of the calibration points for accuracy improvement. Presently, the user is required to indicate using the medium all the path points for path generation. One suggestion is to apply robotic rule base intelligent system to further reduce the work load of the user. Specifically, the proposed intelligent system will populate a series of the path points i f given the start and end points. This means the user only need to indicate two points which further reduce the time and work load. An important next step for this system is a series of user trials to understand how users can realistically incorporate the vision system and the medium into "real-world" robot training tasks. It is proposed that initial trials would look at the process of calibrating, the cameras and teaching the robot path. Further trials could study the visual presentation of the task to the user, and provide input to the camera placement process. Results of this work would be useful in understanding how users both present and receive robot position information to and from the vision system. Using the result of these trails, an intelligent 53 system can be designed to improve the camera placement algorithm by performing camera placement depending on what the user prefers to see. R E F E R E N C E S 1. American National Standard for Industrial Robots and Robot Systems - Safety Requirements. Robotic Industries Association and American National Standards Institute, Inc. ANSI/RIA R15.06 - 1999. 2. Umesh A . Korde, "Camera-Space Manipulation with Natural Visual Information", Chapter 3. Ph.D Dissertation, University of Notre Dame, 1993. 3. www.braintech.com 4. http://www.pdc.kth.se/pdcttn/machinerv/hidSol/Applic/Vision/HPFIT/HPFIT_FR cport.html 5. E. J. Gonzalez-Galvan, et al. 
"An efficient multi-camera, multi-target scheme for the three-dimensional control of robots using uncalibrated vision". Robotics and Computer-Integrated Manufacturing, Vol. 19, 2003, pages 387-400.
6. Fuminori Saito. "Robot operation teaching method and apparatus". US patent 6,587,752, 2002.
7. S. Y. Chen and Y. F. Li. "Automatic Sensor Placement for Model-Based Robot Vision". IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 34, No. 1, February 2004.
8. K. A. Tarabanis, P. K. Allen, and R. Y. Tsai. "A survey of sensor planning in computer vision". IEEE Transactions on Robotics and Automation, Vol. RA-11, No. 1, pages 86-104, 1995.
9. C. K. Cowan and P. D. Kovesi. "Automatic sensor placement from vision task requirements". IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10, pages 407-416, May 1988.
10. K. A. Tarabanis, R. Y. Tsai, and P. K. Allen. "The MVP sensor planning system for robotic vision tasks". IEEE Transactions on Robotics and Automation, Vol. RA-11, No. 1, pages 72-85, 1995.
11. K. A. Tarabanis and R. Y. Tsai. "Computing occlusion-free viewpoints". In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 802-807, 1992.
12. K. A. Tarabanis and R. Y. Tsai. "Sensor planning for robotic vision". In O. Khatib, J. Craig, and T. Lozano-Perez, editors, Robotics Research 2. MIT Press, 1992.
13. K. A. Tarabanis, R. Y. Tsai, and P. K. Allen. "Automated sensor planning for robotic vision tasks". In Proceedings of the International Conference on Robotics and Automation, pages 76-82, 1991.
14. E. Trucco et al. "Model-based planning of optimal sensor placements for inspection". IEEE Transactions on Robotics and Automation, Vol. RA-13, No. 2, pages 182-194, 1997.
15. J. D. Foley et al. "Computer Graphics: Principles and Practice", 2nd Edition in C. Addison-Wesley, 1996.
16. M. W. Spong and M. Vidyasagar. "Robot Dynamics and Control".
John Wiley & Sons, 1989.
17. D. Marr. "VISION: A Computational Investigation into the Human Representation and Processing of Visual Information". W. H. Freeman and Company, 1982.
18. D. Bachtold, M. Baumuller, and P. Brugger. "Stimulus-response compatibility in representational space". Neuropsychologia, Vol. 36, No. 8, pages 731-735, 1998.
19. C. J. Worringham and D. B. Beringer. "Directional stimulus-response compatibility: a test of three alternative principles". Ergonomics, Vol. 41, No. 6, pages 864-880, 1998.
20. M. Iacoboni, R. P. Woods, and J. C. Mazziotta. "Brain-Behavior Relationships: Evidence From Practice Effects in Spatial Stimulus-Response Compatibility". Journal of Neurophysiology, Vol. 76, No. 1, July 1996.
21. T. Svoboda, D. Martinec, and T. Pajdla. "A Convenient Multi-Camera Self-Calibration for Virtual Environments". Presence: Teleoperators and Virtual Environments, 14(4), August 2005. To appear, MIT Press.
22. D. G. Aliaga and I. Carlbom. "Finding Yourself". IEEE Robotics and Automation Magazine, Vol. 11, No. 4, pages 53-61, December 2004.
23. W. H. Press, et al. "Numerical Recipes in C", 2nd edition, Chapter 9. Cambridge University Press, 1992.
24. Z. Zhang. "A Flexible New Technique for Camera Calibration". IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, November 2000.
Appendix A: System Calibration MATLAB Program

% This function calculates a camera position when u and v are previously
% generated and are available in the workspace
function CameraPosition = AddAngles2(ptdata, uv, guess)

format long;
Guess = guess;
%Guess = [1.7; -1.8; 3500; 750; 1200];
%DChange = [0; 0; 0; 0; 0; 0];
Threshold = 2;
count = 1;

while Threshold > 1
    % PHIChange(count, 1) = Guess(1,1);
    % thetaChange(count, 1) = Guess(2,1);
    % XChange(count, 1) = Guess(3,1);
    % YChange(count, 1) = Guess(4,1);
    % ZChange(count, 1) = Guess(5,1);
    % sChange(count, 1) = Guess(6,1);
    %Guess = Guess + DChange;

    syms TM11 TM12 TM13 TM14 TM21 TM22 TM23 TM24 TM31 TM32 TM33 TM34
    syms PHI theta Xcw Ycw Zcw Xw Yw Zw Luv s

    TM11 = cos(PHI);
    TM12 = sin(PHI);
    TM13 = 0;
    TM14 = -(sin(PHI)*Ycw + cos(PHI)*Xcw);
    TM21 = -sin(PHI)*cos(theta);
    TM22 = cos(PHI)*cos(theta);
    TM23 = sin(theta);
    TM24 = sin(PHI)*cos(theta)*Xcw - cos(PHI)*cos(theta)*Ycw - sin(theta)*Zcw;
    TM31 = sin(PHI)*sin(theta);
    TM32 = -cos(PHI)*sin(theta);
    TM33 = cos(theta);
    TM34 = -(sin(PHI)*sin(theta)*Xcw - cos(PHI)*sin(theta)*Ycw + cos(theta)*Zcw);

    TranMatrix = [TM11 TM12 TM13 TM14;
                  TM21 TM22 TM23 TM24;
                  TM31 TM32 TM33 TM34;
                  0 0 0 1];

    % The point to be calculated in the world frame
    PointWorld = [Xw; Yw; Zw; 1];
    % The point to be calculated in the camera frame
    PointCamera = TranMatrix*PointWorld;

    % Overall formulation. NOTE: the z coordinate of the point in camera
    % space is always assumed to be positive, thereby no absolute value
    % for the denominator.
    Luv = s*(((PointCamera(1,1))^2 + (PointCamera(2,1))^2)^0.5)/((PointCamera(3,1))^2)^0.5;

    % Find the Jacobian
    J = jacobian(Luv, [PHI theta Xcw Ycw Zcw s]);
    %J = jacobian(Luv, [s]);

    % Sub in the guessed values
    PHI = Guess(1,1);    %2.53;
    theta = Guess(2,1);  %-1.571;
    Xcw = Guess(3,1);    %2675;
    Ycw = Guess(4,1);    %3000;
    Zcw = Guess(5,1);
    s = Guess(6,1);
    % PHI = 1.7;
    % theta = -1.8;
    % Xcw = 3500;
    % Ycw = 750;
    % Zcw = 1200;
    % s = 1500;

    % Read in the collected data points
    CData = xlsread(ptdata);
    numdata = size(uv);
    RDMData = randperm(numdata(1,1));

    for i = 1:numdata(1,1)
        Xw = CData(RDMData(i),1);
        Yw = CData(RDMData(i),2);
        Zw = CData(RDMData(i),3);
        U = uv(RDMData(i),1);
        V = uv(RDMData(i),2);
        if U ~= 0 & V ~= 0
            LuvGuessed = [subs(Luv)];
            j = subs(J);
            LuvDesire = ((U-320)^2 + (V-240)^2)^0.5;
            if i == 1
                D = [LuvGuessed];
                Jac = [j];
                Dk = [LuvDesire];
            else
                D = [D; LuvGuessed];
                Jac = [Jac; j];
                Dk = [Dk; LuvDesire];
            end
        end
    end

    Jt = Jac.';
    Jinv = inv(Jt*Jac);
    Jseudo = Jinv*Jt;

    % Calculate the change in the guessed solution
    DU = Dk - D;
    DChange = Jseudo*DU;
    Threshold = sum(DChange.^2, 1)
    count = count + 1;
    if Threshold < 1
        bar(DU)
        break
    end
    Guess = Guess + DChange;
    % plot(PHIChange, '-.gd');
    % hold on
    % plot(thetaChange, '-r*');
    % plot(sChange/1000, ':mo');
    % plot(XChange/1000, '-b+');
    % plot(YChange/1000, '-.cs');
    % plot(ZChange/1000, '--k^');
    % if count >= 20, break, end
end
%hold off
CameraPosition = Guess;

Appendix B: Medium Identification MATLAB Programs

B.1: Main Identification Program

% This function identifies where the clicker is in an image.
% It uses the following subfunctions:
%   findrange.m
%   noisefilter.m
% NOTE: This is used for the calibration process.
% Another program similar to this is used for path generation.
function EstInt = clickerIDCal(imagename, num)

if nargin == 1
    %scene = imagename;
    num = 1;
end

for k = 1:num
    if nargin == 1
        scene = sprintf('z:/Tao on Bitter/%s', imagename);
        %scene = sprintf('H:/Thesis work/matlab/matlab files/%s', imagename);
    else
        scene = sprintf('z:/Tao on Bitter/%s%d.tif', imagename, k);
        %scene = sprintf('H:/Thesis work/matlab/matlab files/%s%d.tif', imagename, k);
    end

    % Read in the image
    I = imread(scene);
    k
    % Convert image to double
    I1 = double(I)/255;

    % Determine whether the image intensity is good enough to process.
    % If not, adjust the filtering threshold accordingly.
    maxrange = findrange(I1);

    if maxrange <= 0.91
        % With the threshold at 0.08 above the max intensity range, the
        % image can be successfully filtered.
        ftthrhd = maxrange + 0.08;
        I2 = I1;
        % Separate the colours
        for i = 1:480
            for j = 1:640
                if ((I2(i,j,3) >= maxrange) & (I2(i,j,2) <= 1) & (I2(i,j,1) <= 1))
                    if (I2(i,j,3) > I2(i,j,2) & I2(i,j,3) > I2(i,j,1))
                        B(i,j) = 1;
                    else
                        B(i,j) = 0;
                    end
                else
                    B(i,j) = 0;
                end
                if ((I2(i,j,2) >= ftthrhd) & (I2(i,j,1) <= 1) & (I2(i,j,3) <= 1))
                    if (I2(i,j,2) > I2(i,j,3) & I2(i,j,2) > I2(i,j,1))
                        G(i,j) = 1;
                    else
                        G(i,j) = 0;
                    end
                else
                    G(i,j) = 0;
                end
                if ((I2(i,j,1) >= ftthrhd) & (I2(i,j,2) <= 1) & (I2(i,j,3) <= 1))
                    if (I2(i,j,1) > I2(i,j,2) & I2(i,j,1) > I2(i,j,3))
                        R(i,j) = 1;
                    else
                        R(i,j) = 0;
                    end
                else
                    R(i,j) = 0;
                end
            end
        end
        % imshow(I)
        % figure, imshow(B)
        % figure, imshow(G)
        % figure, imshow(R)

        % The result images need to be modified so the noise is gone.
        BB = noisefilter(B);
        GG = noisefilter(G);
        RR = noisefilter(R);
        [BL, Bnum] = bwlabel(BB);
        [GL, Gnum] = bwlabel(GG);
        [RL, Rnum] = bwlabel(RR);
        CLR = BB + GG + RR;
        if nargin == 1
            figure, imshow(CLR);
        end

        if (Bnum > 0) & (Gnum > 0) & (Rnum > 0)
            % Find the centroid and orientation of each object in order
            % to calculate their intersections.
            StatsB = regionprops(BB, 'centroid', 'orientation');
            StatsG = regionprops(GG, 'centroid', 'orientation');
            StatsR = regionprops(RR, 'centroid', 'orientation');
            OrientBtoG = abs(StatsB.Orientation - StatsG.Orientation);
            OrientGtoR = abs(StatsG.Orientation - StatsR.Orientation);
            OrientRtoB = abs(StatsR.Orientation - StatsB.Orientation);
            DistBtoG = sqrt((StatsB.Centroid(1)-StatsG.Centroid(1))^2 + (StatsB.Centroid(2)-StatsG.Centroid(2))^2);
            DistBtoR = sqrt((StatsB.Centroid(1)-StatsR.Centroid(1))^2 + (StatsB.Centroid(2)-StatsR.Centroid(2))^2);
            DistRtoG = sqrt((StatsR.Centroid(1)-StatsG.Centroid(1))^2 + (StatsR.Centroid(2)-StatsG.Centroid(2))^2);

            % First test whether the filtered colours are close enough to
            % be the clicker itself. If they are too far from each other,
            % it may not be the clicker light.
            if DistBtoG >= 250 || DistBtoR >= 250 || DistRtoG >= 250
                EstInt(k,1) = 0;
                EstInt(k,2) = 0;
            % Then test whether the orientations of any 2 lights are
            % colinear. If they are, the solution may not be right, so we
            % choose only the intersection of the 2 that are not colinear.
            elseif OrientBtoG <= 1
                GR = [-tan(StatsG.Orientation*pi/180), -1;
                      -tan(StatsR.Orientation*pi/180), -1];
                bGR = [-StatsG.Centroid(2)-(tan(StatsG.Orientation*pi/180)*StatsG.Centroid(1));
                       -StatsR.Centroid(2)-(tan(StatsR.Orientation*pi/180)*StatsR.Centroid(1))];
                intersectionGR = GR\bGR;
                EstInt(k,1) = intersectionGR(1);
                EstInt(k,2) = intersectionGR(2);
            elseif OrientGtoR <= 1
                BR = [-tan(StatsB.Orientation*pi/180), -1;
                      -tan(StatsR.Orientation*pi/180), -1];
                bBR = [-StatsB.Centroid(2)-(tan(StatsB.Orientation*pi/180)*StatsB.Centroid(1));
                       -StatsR.Centroid(2)-(tan(StatsR.Orientation*pi/180)*StatsR.Centroid(1))];
                intersectionBR = BR\bBR;
                EstInt(k,1) = intersectionBR(1);
                EstInt(k,2) = intersectionBR(2);
            elseif OrientRtoB <= 1
                GB = [-tan(StatsG.Orientation*pi/180), -1;
                      -tan(StatsB.Orientation*pi/180), -1];
                bGB = [-StatsG.Centroid(2)-(tan(StatsG.Orientation*pi/180)*StatsG.Centroid(1));
                       -StatsB.Centroid(2)-(tan(StatsB.Orientation*pi/180)*StatsB.Centroid(1))];
                intersectionGB = GB\bGB;
                EstInt(k,1) = intersectionGB(1);
                EstInt(k,2) = intersectionGB(2);
            % Finally, if none of the above occurs, go with the regular
            % route: form the lines and solve the systems for their
            % intersections.
            else
                BG = [-tan(StatsB.Orientation*pi/180), -1;
                      -tan(StatsG.Orientation*pi/180), -1];
                GR = [-tan(StatsG.Orientation*pi/180), -1;
                      -tan(StatsR.Orientation*pi/180), -1];
                RB = [-tan(StatsR.Orientation*pi/180), -1;
                      -tan(StatsB.Orientation*pi/180), -1];
                %Err = -StatsB.Centroid(2)+(tan(StatsB.Orientation*pi/180)*StatsB.Cento
                bBG = [-StatsB.Centroid(2)-(tan(StatsB.Orientation*pi/180)*StatsB.Centroid(1));
                       -StatsG.Centroid(2)-(tan(StatsG.Orientation*pi/180)*StatsG.Centroid(1))];
                bGR = [-StatsG.Centroid(2)-(tan(StatsG.Orientation*pi/180)*StatsG.Centroid(1));
                       -StatsR.Centroid(2)-(tan(StatsR.Orientation*pi/180)*StatsR.Centroid(1))];
                bRB = [-StatsR.Centroid(2)-(tan(StatsR.Orientation*pi/180)*StatsR.Centroid(1));
                       -StatsB.Centroid(2)-(tan(StatsB.Orientation*pi/180)*StatsB.Centroid(1))];
                % The solutions of the above 3 systems of equations are
                % the intersections
                intersectionBG = BG\bBG;
                intersectionGR = GR\bGR;
                intersectionRB = RB\bRB;
                % Take the average of the 3 calculated intersections to
                % get the estimated actual intersection
                EstInt(k,1) = (intersectionBG(1)+intersectionGR(1)+intersectionRB(1))/3;
                if EstInt(k,1) > 640 | EstInt(k,1) < 0
                    EstInt(k,1) = 0;
                end
                EstInt(k,2) = (intersectionBG(2)+intersectionGR(2)+intersectionRB(2))/3;
                if EstInt(k,2) > 480 | EstInt(k,2) < 0
                    EstInt(k,2) = 0;
                end
                % Here, for each case, a rule base could be introduced to
                % adjust the robot to obtain a sufficient image for
                % further processing.
            end
            if EstInt(k,1) ~= 0 & EstInt(k,2) ~= 0
                EstInt(k,3) = StatsR.Orientation;
                EstInt(k,4) = StatsG.Orientation;
                EstInt(k,5) = StatsB.Orientation;
                EstInt(k,6) = 0;
            end
            % else
            %     warning = 'The image is not sufficient enough to be processed'
            %     warning2 = 'move the clicker and retake the image'
        elseif Bnum == 0
            Warning = 'Cannot see blue.'
            if Gnum ~= 0 & Rnum ~= 0
                % If two lights are visible, perform a 2-light intersection
                StatsG = regionprops(GG, 'centroid', 'orientation');
                StatsR = regionprops(RR, 'centroid', 'orientation');
                DistRtoG = sqrt((StatsR.Centroid(1)-StatsG.Centroid(1))^2 + (StatsR.Centroid(2)-StatsG.Centroid(2))^2);
                if DistRtoG >= 250
                    EstInt(k,1) = 0;
                    EstInt(k,2) = 0;
                else
                    GR = [-tan(StatsG.Orientation*pi/180), -1;
                          -tan(StatsR.Orientation*pi/180), -1];
                    bGR = [-StatsG.Centroid(2)-(tan(StatsG.Orientation*pi/180)*StatsG.Centroid(1));
                           -StatsR.Centroid(2)-(tan(StatsR.Orientation*pi/180)*StatsR.Centroid(1))];
                    intersectionGR = GR\bGR;
                    EstInt(k,1) = intersectionGR(1);
                    EstInt(k,2) = intersectionGR(2);
                end
            else
                EstInt(k,1) = 0;
                EstInt(k,2) = 0;
                warning = 'Image is missing a colour, not good enough to process'
            end
            if EstInt(k,1) ~= 0 & EstInt(k,2) ~= 0
                EstInt(k,3) = StatsR.Orientation;
                EstInt(k,4) = StatsG.Orientation;
                EstInt(k,5) = 0;
                EstInt(k,6) = 3;
            end
        elseif Gnum == 0
            Warning = 'Cannot see green.'
            if Bnum ~= 0 & Rnum ~= 0
                StatsB = regionprops(BB, 'centroid', 'orientation');
                StatsR = regionprops(RR, 'centroid', 'orientation');
                DistRtoB = sqrt((StatsR.Centroid(1)-StatsB.Centroid(1))^2 + (StatsR.Centroid(2)-StatsB.Centroid(2))^2);
                if DistRtoB >= 250
                    EstInt(k,1) = 0;
                    EstInt(k,2) = 0;
                else
                    BR = [-tan(StatsB.Orientation*pi/180), -1;
                          -tan(StatsR.Orientation*pi/180), -1];
                    bBR = [-StatsB.Centroid(2)-(tan(StatsB.Orientation*pi/180)*StatsB.Centroid(1));
                           -StatsR.Centroid(2)-(tan(StatsR.Orientation*pi/180)*StatsR.Centroid(1))];
                    intersectionBR = BR\bBR;
                    EstInt(k,1) = intersectionBR(1);
                    EstInt(k,2) = intersectionBR(2);
                end
            else
                EstInt(k,1) = 0;
                EstInt(k,2) = 0;
                warning = 'Image is missing a colour, not good enough to process'
            end
            if EstInt(k,1) ~= 0 & EstInt(k,2) ~= 0
                EstInt(k,3) = StatsR.Orientation;
                EstInt(k,4) = 0;
                EstInt(k,5) = StatsB.Orientation;
                EstInt(k,6) = 2;
            end
        elseif Rnum == 0
            Warning = 'Cannot see red.'
            if Gnum ~= 0 & Bnum ~= 0
                StatsG = regionprops(GG, 'centroid', 'orientation');
                StatsB = regionprops(BB, 'centroid', 'orientation');
                DistBtoG = sqrt((StatsB.Centroid(1)-StatsG.Centroid(1))^2 + (StatsB.Centroid(2)-StatsG.Centroid(2))^2);
                if DistBtoG >= 250
                    EstInt(k,1) = 0;
                    EstInt(k,2) = 0;
                else
                    GB = [-tan(StatsG.Orientation*pi/180), -1;
                          -tan(StatsB.Orientation*pi/180), -1];
                    bGB = [-StatsG.Centroid(2)-(tan(StatsG.Orientation*pi/180)*StatsG.Centroid(1));
                           -StatsB.Centroid(2)-(tan(StatsB.Orientation*pi/180)*StatsB.Centroid(1))];
                    intersectionGB = GB\bGB;
                    EstInt(k,1) = intersectionGB(1);
                    EstInt(k,2) = intersectionGB(2);
                end
            else
                EstInt(k,1) = 0;
                EstInt(k,2) = 0;
                warning = 'Image is missing a colour, not good enough to process'
            end
            if EstInt(k,1) ~= 0 & EstInt(k,2) ~= 0
                EstInt(k,3) = 0;
                EstInt(k,4) = StatsG.Orientation;
                EstInt(k,5) = StatsB.Orientation;
                EstInt(k,6) = 1;
            end
        end
    else
        Warning = 'The image is too bright to be processed.'
        Action = 'Please adjust the lighting condition and restart.'
        return
    end
end

B.2: Supporting Program - Finding Image Intensity Information

% This function finds the intensity profile of the image to be
% processed. It takes in the image in double colour format and returns
% the max end of the intensity range.
function maxrange = findrange(imagename)

imagebw = rgb2gray(imagename);
histo = imhist(imagebw, 100);
counting = 0;
for i = 1:100
    if (histo(i,1) < 100 & counting == 1)
        maxrange = i/100;
        counting = 2;
    end
    if (histo(i,1) > 100 & counting == 0)
        counting = 1;
        minrange = i/100;
    end
end

B.3: Supporting Program - Image Noise Filter

% This function filters out the noise in an image so that only the neon
% itself is indicated.
function noisefree = noisefilter(B)

[L, num] = bwlabeln(B);
Stats = regionprops(L, 'filledimage', 'filledarea');
maxArea = 0;
MAIndex = 0;
for i = 1:num
    if Stats(i).FilledArea >= maxArea & Stats(i).FilledArea > 40
        maxArea = Stats(i).FilledArea;
        MAIndex = i;
    end
end

BB = zeros(480, 640);
if MAIndex > 0
    [r, c] = find(L == MAIndex);
    rc = [r, c];
    SZ = size(rc);
    for k = 1:SZ(1)
        BB(r(k), c(k)) = 1;
    end
    noisefree0 = imfill(BB);
    noiseStats = regionprops(noisefree0, 'eccentricity');
    if noiseStats.Eccentricity >= 0.85
        noisefree = noisefree0;
    else
        noisefree = BB;
    end
else
    noisefree = BB;
end

Appendix C: Path Generation MATLAB Programs

C.1: Main Path Generation Program

% This function calculates the location of the path point. It saves the
% information into a text file for the controller to use.
% Subfunctions used: dotpro.m, clickerID.m, ClickerOrient2.m, saveAsCSV.m
function xyz = ptsloc(imagename1, imagename2, num, CLocation1, CLocation2)

syms TM11 TM12 TM13 TM14 TM21 TM22 TM23 TM24 TM31 TM32 TM33 TM34
syms PHI theta Xcw Ycw Zcw Xw Yw Zw Luv s U V U1 V1 U2 V2

% Xcw, Ycw and Zcw are the 3D location of the camera. Xw, Yw and Zw are
% the coordinates of the point in the world frame.
TM11 = cos(PHI);
TM12 = sin(PHI);
TM13 = 0;
TM14 = -(sin(PHI)*Ycw + cos(PHI)*Xcw);
TM21 = -sin(PHI)*cos(theta);
TM22 = cos(PHI)*cos(theta);
TM23 = sin(theta);
TM24 = sin(PHI)*cos(theta)*Xcw - cos(PHI)*cos(theta)*Ycw - sin(theta)*Zcw;
TM31 = sin(PHI)*sin(theta);
TM32 = -cos(PHI)*sin(theta);
TM33 = cos(theta);
TM34 = -(sin(PHI)*sin(theta)*Xcw - cos(PHI)*sin(theta)*Ycw + cos(theta)*Zcw);

TranMatrix = [TM11 TM12 TM13 TM14;
              TM21 TM22 TM23 TM24;
              TM31 TM32 TM33 TM34;
              0 0 0 1];

% For camera 1
PHI = CLocation1(1,1);
theta = CLocation1(2,1);
Xcw = CLocation1(3,1);
Ycw = CLocation1(4,1);
Zcw = CLocation1(5,1);
s = CLocation1(6,1);
TranMatrix1 = subs(TranMatrix);
M1 = s;

% For camera 2
PHI = CLocation2(1,1);
theta = CLocation2(2,1);
Xcw = CLocation2(3,1);
Ycw = CLocation2(4,1);
Zcw = CLocation2(5,1);
s = CLocation2(6,1);
M2 = s;
TranMatrix2 = subs(TranMatrix);

% Use orthogonality to determine which of U2 or V2 is better used with
% U1 and V1.
EqSelect = dotpro(CLocation1, CLocation2);

for i = 1:num
    i
    % First run function clickerID to get the UV information of the images
    scene1 = sprintf('%d.tif', i);
    s1 = strcat(imagename1, scene1);
    points1 = clickerID(s1);
    scene2 = sprintf('%d.tif', i);
    s2 = strcat(imagename2, scene2);
    points2 = clickerID(s2);

    if points1(1,1:2) == [0 0] | points2(1,1:2) == [0 0]
        warning = 'Please reposition the clicker and take this image again'
        i
        xyz(i,1) = 463;
        xyz(i,2) = 0;
        xyz(i,3) = 631;
        xyz(i,4) = 0;
        xyz(i,5) = 0;
        xyz(i,6) = 45;
    elseif points1(1,1:2) ~= [0 0] & points2(1,1:2) ~= [0 0]
        u1 = points1(1,1);
        v1 = points1(1,2);
        u2 = points2(1,1);
        v2 = points2(1,2);
        Eq1 = (u1-320)*TranMatrix1(3,:) - M1*TranMatrix1(1,:);
        Eq2 = (v1-240)*TranMatrix1(3,:) - M1*TranMatrix1(2,:);
        Eq3 = (u2-320)*TranMatrix2(3,:) - M2*TranMatrix2(1,:);
        Eq4 = (v2-240)*TranMatrix2(3,:) - M2*TranMatrix2(2,:);
        if EqSelect == 1 || EqSelect == 2
            LeftSide = [Eq1(1,1) Eq1(1,2) Eq1(1,3);
                        Eq2(1,1) Eq2(1,2) Eq2(1,3);
                        Eq4(1,1) Eq4(1,2) Eq4(1,3)];
            RightSide = [-Eq1(1,4); -Eq2(1,4); -Eq4(1,4)];
            LSinv = inv(LeftSide);
            PTWLoc1 = LSinv*RightSide;
            PTWLoc = PTWLoc1';
            PTWOri = ClickerOrient2(CLocation1, CLocation2, PTWLoc, points1, points2);
            xyz(i,:) = [PTWLoc(1) PTWLoc(2) PTWLoc(3) PTWOri(1) PTWOri(2) PTWOri(3)];
        else
            LeftSide = [Eq1(1,1) Eq1(1,2) Eq1(1,3);
                        Eq2(1,1) Eq2(1,2) Eq2(1,3);
                        Eq3(1,1) Eq3(1,2) Eq3(1,3)];
            RightSide = [-Eq1(1,4); -Eq2(1,4); -Eq3(1,4)];
            LSinv = inv(LeftSide);
            PTWLoc1 = LSinv*RightSide;
            PTWLoc = PTWLoc1';
            PTWOri = ClickerOrient2(CLocation1, CLocation2, PTWLoc, points1, points2);
            xyz(i,:) = [PTWLoc(1) PTWLoc(2) PTWLoc(3) PTWOri(1) PTWOri(2) PTWOri(3)];
        end
    end
end
saveAsCSV(xyz, num, 'z:\ptslocation.txt');

C.2: Supporting Program - Dot Product Evaluation

% This function determines, after transformation, which axis (u2 or v2)
% is best to combine with u1 and v1 for the ptslocation calculation. In
% this program, only the max dot product is found. The logical selection
% is done in ptslocation.m
function rslt = dotpro(C1Location, C2Location)

x = [1; 0; 0];
y = [0; 1; 0];
PHI1 = C1Location(1,1);
theta1 = C1Location(2,1);
PHI2 = C2Location(1,1);
theta2 = C2Location(2,1);

Rphi_1 = [cos(PHI1) -sin(PHI1) 0;
          sin(PHI1) cos(PHI1) 0;
          0 0 1];
Rtheta_1 = [1 0 0;
            0 cos(theta1) -sin(theta1);
            0 sin(theta1) cos(theta1)];
Rphi_2 = [cos(PHI2) -sin(PHI2) 0;
          sin(PHI2) cos(PHI2) 0;
          0 0 1];
Rtheta_2 = [1 0 0;
            0 cos(theta2) -sin(theta2);
            0 sin(theta2) cos(theta2)];

u1 = Rphi_1*Rtheta_1*x;
v1 = Rphi_1*Rtheta_1*y;
u2 = Rphi_2*Rtheta_2*x;
v2 = Rphi_2*Rtheta_2*y;

udu = dot(u1, u2);
vdu = dot(v1, u2);
udv = dot(u1, v2);
vdv = dot(v1, v2);

together = [abs(udu) abs(vdu) abs(udv) abs(vdv)];
[C, I] = max(together);
rslt = I;

C.3: Supporting Program - Medium Orientation Computation

function Orient = ClickerOrient2(Cam1Location, Cam2Location, clickerLocation, UVinfo1, UVinfo2)

% Get the camera orientation information
PHI1 = Cam1Location(1,1);
PHI2 = Cam2Location(1,1);
theta1 = Cam1Location(2,1);
theta2 = Cam2Location(2,1);

% Compute the distance from each camera to the clicker, then calculate
% the offset angles to adjust in order to get the orientation of the
% projection plane
DistCam1ToClicker = ((Cam1Location(3,1)-clickerLocation(1,1))^2 + (Cam1Location(4,1)-clickerLocation(1,2))^2 + (Cam1Location(5,1)-clickerLocation(1,3))^2)^0.5;
DistCam2ToClicker = ((Cam2Location(3,1)-clickerLocation(1,1))^2 + (Cam2Location(4,1)-clickerLocation(1,2))^2 + (Cam2Location(5,1)-clickerLocation(1,3))^2)^0.5;

OffSet1X = (UVinfo1(1,1)-320)*DistCam1ToClicker/Cam1Location(6,1);
OffSet1Pan = asin(OffSet1X/DistCam1ToClicker);
OffSet2X = (UVinfo2(1,1)-320)*DistCam2ToClicker/Cam2Location(6,1);
OffSet2Pan = asin(OffSet2X/DistCam2ToClicker);
OffSet1Y = (UVinfo1(1,2)-240)*DistCam1ToClicker/Cam1Location(6,1);
OffSet1Tilt = asin(OffSet1Y/DistCam1ToClicker);
OffSet2Y = (UVinfo2(1,2)-240)*DistCam2ToClicker/Cam2Location(6,1);
OffSet2Tilt = asin(OffSet2Y/DistCam2ToClicker);

% if PHI1 < 0
%     PHI1 = PHI1 + OffSet1Pan;
% elseif PHI1 > 0
PHI1 = PHI1 - OffSet1Pan;
% end
% if theta1 < 0
%     theta1 = theta1 + OffSet1Tilt;
% elseif theta1 > 0
theta1 = theta1 - OffSet1Tilt;
% end
ModCam1Loc = Cam1Location;
ModCam1Loc(1) = PHI1;
ModCam1Loc(2) = theta1;
% if PHI2 < 0
%     PHI2 = PHI2 + OffSet2Pan;
% elseif PHI2 > 0
PHI2 = PHI2 - OffSet2Pan;
% end
% if theta2 < 0
%     theta2 = theta2 + OffSet2Tilt;
% elseif theta2 > 0
theta2 = theta2 - OffSet2Tilt;
% end
ModCam2Loc = Cam2Location;
ModCam2Loc(1) = PHI2;
ModCam2Loc(2) = theta2;

% The following two matrices are the two rotational matrices
RTrans1 = [cos(PHI1) sin(PHI1) 0;
           -sin(PHI1)*cos(theta1) cos(PHI1)*cos(theta1) sin(theta1);
           sin(PHI1)*sin(theta1) -cos(PHI1)*sin(theta1) cos(theta1)];
RTrans2 = [cos(PHI2) sin(PHI2) 0;
           -sin(PHI2)*cos(theta2) cos(PHI2)*cos(theta2) sin(theta2);
           sin(PHI2)*sin(theta2) -cos(PHI2)*sin(theta2) cos(theta2)];

syms Roll Pitch Yaw cY cP cR sY sP sR cP1 sP1

% This is the Roll-Pitch-Yaw transformation
RPY = [cY*cP -sY*cR+cY*sP*sR sY*sR+cY*sP*cR;
       sY*cP cY*cR+sY*sP*sR -cY*sR+sY*sP*cR;
       -sP cP*sR cP*cR];

% Now represent the clicker in camera space
Cam1ToClicker = RTrans1*RPY;
Cam3D1 = Cam1ToClicker*[1 0 0; 0 1 0; 0 0 1];
Cam2ToClicker = RTrans2*RPY;
Cam3D2 = Cam2ToClicker*[1 0 0; 0 1 0; 0 0 1];

% Compute the orientation of each light in the 2D image plane
OrientB1 = pi*UVinfo1(1,5)/180;
OrientG1 = pi*UVinfo1(1,4)/180;
OrientR1 = pi*UVinfo1(1,3)/180;
OrientB2 = pi*UVinfo2(1,5)/180;
OrientG2 = pi*UVinfo2(1,4)/180;
OrientR2 = pi*UVinfo2(1,3)/180;

% Adjust the imaginary image orientation according to the offset angles
TB1 = tan(OrientB1)*cos(abs(OffSet1Tilt))/cos(abs(OffSet1Pan));
TB2 = tan(OrientB2)*cos(abs(OffSet2Tilt))/cos(abs(OffSet2Pan));
TG1 = tan(OrientG1)*cos(abs(OffSet1Tilt))/cos(abs(OffSet1Pan));
TG2 = tan(OrientG2)*cos(abs(OffSet2Tilt))/cos(abs(OffSet2Pan));
TR1 = tan(OrientR1)*cos(abs(OffSet1Tilt))/cos(abs(OffSet1Pan));
TR2 = tan(OrientR2)*cos(abs(OffSet2Tilt))/cos(abs(OffSet2Pan));

% Set up the equations of relationship
Eq11 = Cam3D1(1,1)*TB1 + Cam3D1(2,1);
Eq12 = Cam3D1(1,2)*TG1 + Cam3D1(2,2);
Eq13 = Cam3D1(1,3)*TR1 + Cam3D1(2,3);
Eq21 = Cam3D2(1,1)*TB2 + Cam3D2(2,1);
Eq22 = Cam3D2(1,2)*TG2 + Cam3D2(2,2);
Eq23 = Cam3D2(1,3)*TR2 + Cam3D2(2,3);

% First calculate the Yaw angle using the first equation from each
% camera. These two equations have only Yaw and Pitch as unknowns, so
% move the sP term to one side and the rest to the other side, then
% divide the two equations: only cY and sY are left, and the ratio
% sY/cY gives tan(Yaw).
CoeSP1 = diff(Eq11, sP);
CoeSP2 = diff(Eq21, sP);
Eq11A = (Eq11 - CoeSP1*sP)/cP;
Eq21A = (Eq21 - CoeSP2*sP)/cP;
Eq11B = simple(Eq11A);
Eq21B = simple(Eq21A);
Eq1C = Eq11B*CoeSP2 - Eq21B*CoeSP1;
CoeCY = diff(Eq1C, cY);
CoeSY = diff(Eq1C, sY);
YawR1 = eval(atan(-CoeCY/CoeSY));
if YawR1 > 0
    YawR2 = YawR1 - pi;
elseif YawR1 < 0
    YawR2 = YawR1 + pi;
end
% YawD1 = YawR1*180/pi
% YawD2 = YawR2*180/pi

% Then sub Yaw back into one of the above equations to find Pitch.
cY = cos(YawR1);
sY = sin(YawR1);
Eq11C = subs(Eq11);
cY = cos(YawR2);
sY = sin(YawR2);
Eq11D = subs(Eq11);
CoeCP1 = diff(Eq11C, cP);
CoeSP1 = diff(Eq11C, sP);
CoeCP2 = diff(Eq11D, cP);
CoeSP2 = diff(Eq11D, sP);
PitchR1 = eval(atan(-CoeCP1/CoeSP1));
if PitchR1 > 0
    PitchR2 = PitchR1 - pi;
elseif PitchR1 < 0
    PitchR2 = PitchR1 + pi;
end
PitchR3 = eval(atan(-CoeCP2/CoeSP2));
if PitchR3 > 0
    PitchR4 = PitchR3 - pi;
elseif PitchR3 < 0
    PitchR4 = PitchR3 + pi;
end
% PitchD1 = PitchR1*180/pi
% PitchD2 = PitchR2*180/pi
% PitchD3 = PitchR3*180/pi
% PitchD4 = PitchR4*180/pi

% Finally, knowing Yaw and Pitch, use one of the remaining 4 equations
% to find Roll.
for i = 1:4
    if i < 3
        cY = cos(YawR1);
        sY = sin(YawR1);
        YawD = YawR1*180/pi;
    else
        cY = cos(YawR2);
        sY = sin(YawR2);
        YawD = YawR2*180/pi;
    end
    if i == 1
        cP = cos(PitchR1);
        sP = sin(PitchR1);
        PitchD = PitchR1*180/pi;
    elseif i == 2
        cP = cos(PitchR2);
        sP = sin(PitchR2);
        PitchD = PitchR2*180/pi;
    elseif i == 3
        cP = cos(PitchR3);
        sP = sin(PitchR3);
        PitchD = PitchR3*180/pi;
    else
        cP = cos(PitchR4);
        sP = sin(PitchR4);
        PitchD = PitchR4*180/pi;
    end
    % cP = cos(PitchR);
    % sP = sin(PitchR);
    syms cR sR
    Eq12A = subs(Eq12);
    Eq22A = subs(Eq22);
    Eq13A = subs(Eq13);
    Eq23A = subs(Eq23);
    % Use each of the 4 equations to solve for Roll and pick the roll
    % angle which gives the minimum error
    CoeCR12 = diff(Eq12A, cR);
    CoeSR12 = diff(Eq12A, sR);
    CoeCR12A = eval(CoeCR12);
    CoeSR12A = eval(CoeSR12);
    CoeCR13 = diff(Eq13A, cR);
    CoeSR13 = diff(Eq13A, sR);
    CoeCR13A = eval(CoeCR13);
    CoeSR13A = eval(CoeSR13);
    CoeCR22 = diff(Eq22A, cR);
    CoeSR22 = diff(Eq22A, sR);
    CoeCR22A = eval(CoeCR22);
    CoeSR22A = eval(CoeSR22);
    CoeCR23 = diff(Eq23A, cR);
    CoeSR23 = diff(Eq23A, sR);
    CoeCR23A = eval(CoeCR23);
    CoeSR23A = eval(CoeSR23);
    RollR12 = atan(-CoeCR12A/CoeSR12A);
    RollR13 = atan(-CoeCR13A/CoeSR13A);
    RollR22 = atan(-CoeCR22A/CoeSR22A);
    RollR23 = atan(-CoeCR23A/CoeSR23A);
    cR = cos(RollR12);
    sR = sin(RollR12);
    Equats12 = subs(Eq11);
    %Equats12 = [eval(subs(Eq11));
eval(subs(Eql2)); eval(subs(Eql3)); eval(subs(Eq21)); eval(subs(Eq22)); eval(subs(Eq23))] Equatsl2 = [subs(Eqll); subs(Eql2); subs(Eql3); subs(Eq21); subs(Eq22); subs(Eq23)]; Suml2 = sum(abs(Equatsl2)); cR = cos(RollR13); sR = sin(RollR13); Equatsl3 = [subs(Eql 1); subs(Eql2); subs(Eql3); subs(Eq21); subs(Eq22); subs(Eq23)]; %Equatsl3 = [eval(subs(Eqll)); eval(subs(Eql2)); eval(subs(Eql3)); eval(subs(Eq21)); eval(subs(Eq22)); eval(subs(Eq23))] Suml3 = sum(abs(Equatsl3)); cR = cos(RollR22); sR = sin(RollR22); Equats22 = [subs(Eqll); subs(Eql2); subs(Eql3); subs(Eq21); subs(Eq22); subs(Eq23)]; %Equats22 = [eval(subs(Eql 1)); eval(subs(Eql2)); eval(subs(Eql3)); eval(subs(Eq21)); eval(subs(Eq22)); eval(subs(Eq23))] Sum22 = sum(abs(Equats22)); cR = cos(RollR23); sR = sin(RollR23); Equats23 = [subs(Eqll); subs(Eql2); subs(Eql3); subs(Eq21); subs(Eq22); subs(Eq23)]; %Equats23 = [eval(subs(Eqll)); eval(subs(Eql2)); eval(subs(Eql3)); eval(subs(Eq21)); eval(subs(Eq22)); eval(subs(Eq23))] Sum23 = sum(abs(Equats23)); AllSums = [Suml2, Suml3, Sum22, Sum23]; [rninSiim, b] = min( AllSums); ifb = 1 RollDl =RollR12*180/pi; 75 elseif b = 2 RollDl =RollRl 3*180/pi; elseif b = 3 RollDl =RollR22*l 80/pi; elseif b = 4 RollDl =RollR23*l 80/pi; end if RollDl >0 RollD2= RollDl - 180; elseif RollDl <0 RollD2 = RollDl + 180; end Orient(2*i-l,:) = [YawD, PitchD, RollDl]; Orient(2*i,:) = [YawD, PitchD, RollD2]; end % Add a function here which takes the entire Orient matrix and other % required information and find out which set of Orientation Angles is the % one we are looking for Orient = TrueSolution(ModCamlLoc, ModCam2Loc, clickerLocation, Orient, UVinfol, UVinfo2); %Orient = [YawD, PitchD, RollD]; Appendix D: Camera Placement M A T L A B Programs D.1: Focus and Field of View Optimization Program % This function assumes that the camera used is with zoom lens. 
% So by adjusting focal length, we adjust the actual D
function whatever = sensor2(CamPosition, IndPts, CamPara)
NumOfPts = size(IndPts);  % Get the number of indication points
Xv = CamPosition(3,1);
Yv = CamPosition(4,1);
Zv = CamPosition(5,1);
%CamPara(1) = focal length
ap = 8;  %CamPara(2);  % aperture
ps = 0.018555;  %CamPara(3);  % pixel size
Pmin = ps*480;  %CamPara(4);  % minimum image plane dimension
% CamPara(5) = pixel diagonal.
% The following statements give the minimum distance from the camera
% to the target point to achieve focus of all indication points. The
% returned distance is in the name of Dmid.
syms Dmid
% Compute the x, y, z changes using the half point from the camera location
% to the target point location.
dX = Xv - IndPts(1,1);
dY = Yv - IndPts(1,2);
dZ = Zv - IndPts(1,3);
for i = 1:NumOfPts(1,1)
    % Calculate the distance from the current camera location to all the
    % indication points, the target point being point number 1
    D(i,1) = ((Xv-IndPts(i,1))^2+(Yv-IndPts(i,2))^2+(Zv-IndPts(i,3))^2)^0.5;
    if i > 1
        % Calculate the distance between indication points and the target
        % point
        P(i-1) = ((IndPts(1,1)-IndPts(i,1))^2+(IndPts(1,2)-IndPts(i,2))^2+(IndPts(1,3)-IndPts(i,3))^2)^0.5;
    end
end
% Find BigD as the distance from the target to the camera; MaxD is the
% furthest point to the camera and MinD is the closest point to the camera
BigD = D(1);
[MaxD, k] = max(D);
[MinD, j] = min(D);
% Using the modified Cowan's depth of field equation, compute the focal
% length that satisfies either MaxD or MinD. The minimum of the two is
% the maximum focal length we can go to and still satisfy focus constraints.
fstar(1) = ps*MaxD*BigD/(MaxD*ap+MaxD*ps-BigD*ap);
fstar(2) = -MinD*ps*BigD/(Mi
% Calculate the max distance from the target to any of the indication
% points. Form a sphere with that distance as the radius. Then the required
% field of view angle is computed as the equation is given by Cowan.
SphereR = max(P);
alp = asin(SphereR/BigD);
% Now we need to find the maximum focal length that will create a field of
% view angle that is no smaller than the required field of view. The
% equation is also given by Cowan.
fFoV = 1/(2*tan(alp)/Pmin + 1/BigD);
% Now we have the maximum focal length to satisfy field of view and
% focus; the minimum of the two is the maximum of the overall requirement.
whatever1 = [min(fstar); fFoV];
whatever2 = min(whatever1);
% Next, calculate the orientation of the camera so it centres at the
% target point
whatever3 = CamOri(CamPosition, IndPts);
% Finally combine focal length, focus distance, and camera orientation for
% the user to adjust the camera.
whatever = [whatever3, whatever2, BigD];

D.2: Camera Orientation Computation Program for ROI view

% This function computes the orientation the camera needs to be in, in order
% to centre the target on its image plane. The function takes the camera
% location and the indication point information as inputs and returns 2
% angles, pan and tilt.
function CentreTarget = CamOri(camloc, IndPts)
x1 = [camloc(1,1); camloc(2,1)];
%syms PHI theta cP sP
% First compute the vector linking the camera and the target point
CamViewDirection = [IndPts(1,1)-camloc(3,1); IndPts(1,2)-camloc(4,1); IndPts(1,3)-camloc(5,1)];
% Calculate the length of the vector
LengthOfVector = sqrt(sum(CamViewDirection.^2));
% Find the unit vector representing the above link vector
UnitCamViewDirec = -CamViewDirection/LengthOfVector;
% Now we want to match the camera viewing direction to the unit vector
% we've just found, using the two rotations
% R_phi = [cos(PHI) -sin(PHI) 0; sin(PHI) cos(PHI) 0; 0 0 1];  %0 0 0 1];
% R_the = [1 0 0; 0 cos(theta) -sin(theta); 0 sin(theta) cos(theta)];  % 0 0 0 1];
% UnitVector = R_phi*R_the*[0; 0; 1];  %UnitCamViewDirec;
% The third element of the matrix is cos(theta), which should equal the z
% component of the unit vector.
% But since we use acos, which only gives 0 to pi,
% we have to select the right solution depending on the original
% orientation angle
theta = acos(UnitCamViewDirec(3,1)) - pi;
cP = UnitCamViewDirec(2,1)/sin(theta);
sP = -UnitCamViewDirec(1,1)/sin(theta);
PHI = atan2(sP, cP);
CentreTarget = [PHI, theta];

D.3: Occlusion Avoiding Main Program

% This function takes an image from each camera and asks the user to select
% if there is any occlusion. Depending on the type of occlusion, the
% function will execute a different process to avoid occlusion and at the
% same time keep the angles between cameras within satisfying limits.
% The occlusion is defined by 3 points selected by the user: the centre,
% one minor axis and one major axis point of the occlusion.
% If the occlusion is not crossing the entire image, the program finds the
% direction, either along the minor axis or the major axis, in which the
% camera moves shortest to avoid occlusion. If the occlusion is crossing
% the entire image, the camera only moves along the minor axis.
function newloc = NBVS(image1, image2, camloc1, camloc2, allPts, camPara1, camPara2)
% First read the images and show them on the screen; the user makes the
% obstacle selections
Occlude1 = OccluSelect(image1);
Occlude2 = OccluSelect(image2);
ic = [320, 240];
if Occlude1 == 0
    % if no obstacle is selected, stay at the same location
    %newloc1 = camloc1;
    DXc1 = 0;
    DYc1 = 0;
    D1 = 0;
    % 1vertical = 0;
end
if Occlude2 == 0
    %newloc2 = camloc2;
    DXc2 = 0;
    DYc2 = 0;
    D2 = 0;
    % 2vertical = 0;
end
% If occlusion is selected
if Occlude1 ~= 0
    % First compute the depth of the obstacle; we assume that no objects
    % are wider than 100 mm around the work cell so we use that as a
    % threshold
    D1 = sqrt((camloc1(3,1)-allPts(1,1))^2+(camloc1(4,1)-allPts(1,2))^2+(camloc1(5,1)-allPts(1,3))^2);
    d1 = camPara1(1)*D1/(D1-camPara1(1));
    OTrue = 100;
    Ouv1 = sqrt((Occlude1(1,1)-Occlude1(2,1))^2+(Occlude1(1,2)-Occlude1(2,2))^2);
    Zoc1 = (d1*OTrue)/Ouv1;
    % Compute the viewing cone radius at the depth Zoc.
    RatZ1 = (camPara1(4)/(2*d1))*Zoc1;
    % Compare the distance from the centre of the image to all 4 axis
    % points. Whichever gives the shortest distance will be the point of
    % reference and the axis of the moving direction.
    minorD0 = sqrt((Occlude1(2,1)-320)^2+(Occlude1(2,2)-240)^2);
    minorD1 = sqrt((Occlude1(3,1)-320)^2+(Occlude1(3,2)-240)^2);
    if minorD1 > minorD0
        minorD = minorD0;
        minorP = Occlude1(2,:);
    else
        minorD = minorD1;
        minorP = Occlude1(3,:);
    end
    majorD0 = sqrt((Occlude1(4,1)-320)^2+(Occlude1(4,2)-240)^2);
    majorD1 = sqrt((Occlude1(5,1)-320)^2+(Occlude1(5,2)-240)^2);
    if majorD1 > majorD0
        majorD = majorD0;
        majorP = Occlude1(4,:);
    else
        majorD = majorD1;
        majorP = Occlude1(5,:);
    end
    VerticePt = [majorP(1,1)+minorP(1,1)-Occlude1(1,1), majorP(1,2)+minorP(1,2)-Occlude1(1,2)];
    if majorD < minorD
        % use majorP and VerticePt
        % use the dot product to measure the angle of MV and CV. If
        % positive, find the normal, else use VerticePt
        MV = majorP - VerticePt;
        CV = Occlude1(1,:) - VerticePt;
        anglebet = dot(MV, CV);
        if anglebet > 0
            % find the normal and its intersection as the point of
            % reference
            PtRef = twovect(ic, majorP, VerticePt);
        else
            PtRef = VerticePt;
        end
    else
        % use minorP and VerticePt. Then same as above.
        MV = minorP - VerticePt;
        CV = Occlude1(1,:) - VerticePt;
        anglebet = dot(MV, CV);
        if anglebet > 0
            % find the normal and its intersection as the point of
            % reference
            PtRef = twovect(ic, minorP, VerticePt);
        else
            PtRef = VerticePt;
        end
    end
    % The direction of the move is along the vector connecting PtRef and
    % the image centre.
    SShift = ic - PtRef;
    SSlope = atan2(-SShift(1,2), SShift(1,1));
    DSShift = sqrt(SShift(1,1)^2+SShift(1,2)^2);
    RImage = sqrt(320^2+240^2);
    UVCircleR1 = RImage - DSShift;
    % Now the shift in the xy directions, still in uv units, is the
    % following:
    UShift1 = cos(SSlope)*UVCircleR1;
    VShift1 = -sin(SSlope)*UVCircleR1;
    % Convert the uv coordinates into real distance. Note the sign change
    % for the y direction because the camera y is pointing down instead of
    % the normal up direction
    DXc1 = UShift1*Zoc1*D1/(d1*(D1-d1));
    DYc1 = VShift1*Zoc1*D1/(d1*(D1-d1));
end
% Modified up to here...
if Occlude2 ~= 0
    % First compute the depth of the obstacle; we assume that no objects
    % are wider than 100 mm around the work cell so we use that as a
    % threshold
    D2 = sqrt((camloc2(3,1)-allPts(1,1))^2+(camloc2(4,1)-allPts(1,2))^2+(camloc2(5,1)-allPts(1,3))^2);
    d2 = camPara2(1)*D2/(D2-camPara2(1));
    OTrue = 200;
    Ouv2 = sqrt((Occlude2(1,1)-Occlude2(2,1))^2+(Occlude2(1,2)-Occlude2(2,2))^2);
    Zoc2 = (d2*OTrue)/Ouv2;
    % Compute the viewing cone radius at the depth Zoc.
    RatZ2 = (camPara2(4)/(2*d2))*Zoc2;
    % Compare the distance from the centre of the image to all 4 axis
    % points. Whichever gives the shortest distance will be the point of
    % reference and the axis of the moving direction.
    minorD0 = sqrt((Occlude2(2,1)-320)^2+(Occlude2(2,2)-240)^2);
    minorD1 = sqrt((Occlude2(3,1)-320)^2+(Occlude2(3,2)-240)^2);
    if minorD1 > minorD0
        minorD = minorD0;
        minorP = Occlude2(2,:);
    else
        minorD = minorD1;
        minorP = Occlude2(3,:);
    end
    majorD0 = sqrt((Occlude2(4,1)-320)^2+(Occlude2(4,2)-240)^2);
    majorD1 = sqrt((Occlude2(5,1)-320)^2+(Occlude2(5,2)-240)^2);
    if majorD1 > majorD0
        majorD = majorD0;
        majorP = Occlude2(4,:);
    else
        majorD = majorD1;
        majorP = Occlude2(5,:);
    end
    VerticePt = [majorP(1,1)+minorP(1,1)-Occlude2(1,1), majorP(1,2)+minorP(1,2)-Occlude2(1,2)];
    if majorD < minorD
        % use majorP and VerticePt
        % use the dot product to measure the angle of MV and CV. If
        % positive, find the normal, else use VerticePt
        MV = majorP - VerticePt;
        CV = Occlude2(1,:) - VerticePt;
        anglebet = dot(MV, CV);
        if anglebet > 0
            % find the normal and its intersection as the point of
            % reference
            PtRef = twovect(ic, majorP, VerticePt);
        else
            PtRef = VerticePt;
        end
    else
        % use minorP and VerticePt. Then same as above.
        MV = minorP - VerticePt;
        CV = Occlude2(1,:) - VerticePt;
        anglebet = dot(MV, CV);
        if anglebet > 0
            % find the normal and its intersection as the point of
            % reference
            PtRef = twovect(ic, minorP, VerticePt);
        else
            PtRef = VerticePt;
        end
    end
    SShift = ic - PtRef;
    SSlope = atan2(-SShift(1,2), SShift(1,1));
    DSShift = sqrt(SShift(1,1)^2+SShift(1,2)^2);
    RImage = sqrt(320^2+240^2);
    UVCircleR2 = RImage - DSShift;
    % Now the shift in the xy directions, still in uv units, is the
    % following:
    UShift2 = -cos(SSlope)*UVCircleR2;
    VShift2 = sin(SSlope)*UVCircleR2;
    % Convert the uv coordinates into real distance. Note the sign change
    % for the y direction because the camera y is pointing down instead of
    % the normal up direction
    DXc2 = UShift2*Zoc2*D2/(d2*(D2-d2));
    DYc2 = -VShift2*Zoc2*D2/(d2*(D2-d2));
end
J = 0;
I = 0;
while J == 0
    OccCent = input('Does the occluding object cover the centre of the image?', 's');
    if lower(OccCent) == 'y'
        Move1 = [-DXc1; -DYc1]
    else
        Move1 = [DXc1; DYc1]
    end
    Move2 = [DXc2; DYc2]
    % Compute the movement in the robot world
    Translate1 = [1 0 0 camloc1(3,1); 0 1 0 camloc1(4,1); 0 0 1 camloc1(5,1); 0 0 0 1];
    RotateZ1 = [cos(camloc1(1,1)) -sin(camloc1(1,1)) 0 0; sin(camloc1(1,1)) cos(camloc1(1,1)) 0 0; 0 0 1 0; 0 0 0 1];
    RotateX1 = [1 0 0 0; 0 cos(camloc1(2,1)) -sin(camloc1(2,1)) 0; 0 sin(camloc1(2,1)) cos(camloc1(2,1)) 0; 0 0 0 1];
    Translate2 = [1 0 0 camloc2(3,1); 0 1 0 camloc2(4,1); 0 0 1 camloc2(5,1); 0 0 0 1];
    RotateZ2 = [cos(camloc2(1,1)) -sin(camloc2(1,1)) 0 0; sin(camloc2(1,1)) cos(camloc2(1,1)) 0 0; 0 0 1 0; 0 0 0 1];
    RotateX2 = [1 0 0 0; 0 cos(camloc2(2,1)) -sin(camloc2(2,1)) 0; 0 sin(camloc2(2,1)) cos(camloc2(2,1)) 0; 0 0 0 1];
    MoveInWorld1 = Translate1*RotateZ1*RotateX1*[Move1(1,1); Move1(2,1); 0; 1];
    ProposedLoc1 = [0; 0; MoveInWorld1(1); MoveInWorld1(2); MoveInWorld1(3); camloc1(6,1)];
    %ProposedLoc1 = camloc1 - ProposedMove1;
    NewCamOri1 = sensor2(ProposedLoc1, allPts, camPara1);
    NewCamLoc1 = [NewCamOri1(1,1); NewCamOri1(1,2); ProposedLoc1(3); ProposedLoc1(4); ProposedLoc1(5); ProposedLoc1(6); NewCamOri1(1,3); NewCamOri1(1,4)];
    MoveInWorld2 = Translate2*RotateZ2*RotateX2*[DXc2; DYc2; 0; 1];
    ProposedLoc2 = [0; 0; MoveInWorld2(1); MoveInWorld2(2); MoveInWorld2(3); camloc2(6,1)];
    %ProposedLoc2 = camloc2 - ProposedMove2;
    NewCamOri2 = sensor2(ProposedLoc2, allPts, camPara2);
    ProposedLoc2(1) = NewCamOri2(1,1);
    ProposedLoc2(2) = NewCamOri2(1,2);
    NewCamLoc2 = [ProposedLoc2; NewCamOri2(1,3); NewCamOri2(1,4)];
    if I ~= 0
        break;
    % need to test if the cameras are within the angle limit
    elseif D1 == 0 & D2 == 0
        %newloc = [NewCamLoc1, NewCamLoc2];
        break;
    else
        NextMove = TestLimit(camloc1, camloc2, Move1, Move2, NewCamLoc1, NewCamLoc2, D1, D2);
    end
    I = NextMove(2,1);
    if I == 0
        break;
    elseif I == 1
        DXc1 = NextMove(1,1);
    elseif I == 2
        DXc2 = NextMove(1,1);
    elseif I == -1
        UShift1 = UShift1 - 640;
        DXc1 = UShift1*Zoc1*D1/(d1*(D1-d1));
    elseif I == -2
        UShift2 = UShift2 - 640;
        DXc2 = UShift2*Zoc2*D2/(d2*(D2-d2));
    end
end
newloc = [NewCamLoc1, NewCamLoc2];

function interpt = twovect(IC, MP, VP)
MV = MP - VP;
y = (MV(1,1)*IC(1,1)*MV(1,2)+MV(1,1)*MP(1,2)*MV(1,1)-MV(1,1)*MP(1,1)*MV(1,2)+MV(1,2)^2*IC(1,2))/(MV(1,1)^2+MV(1,2)^2);
x = (MP(1,2)*MV(1,1)-MV(1,1)*y-MP(1,1)*MV(1,2))/(-MV(1,2));
interpt = [x, y];

D.4: Occlusion Selection Program

function obs = OccluSelect(image)
I = imread(image);
imshow(I)
count = 0;
select = 0;
% Ask the user if there is occlusion
occlude = input('Press Y to select the obstacle or N to exit, followed by Enter:', 's');
% if yes, ask the user to identify the occlusion
while lower(occlude) == 'y'
    count = count + 1;
    select = 1;
    Instruction = strvcat('Use the left mouse button to select:', 'First point define the centre of the object;', 'Second point define the minor axis of the object;', 'Third - the major axis of the obstacle closer to the centre of the image;', 'NOTE: To end, press Return')
    ready = input('When ready, press Y.', 's');
    % by clicking the mouse, the program registers the point information
    % which forms the occlusion region.
    [c, r, P] = impixel(I)
    % if the points are selected successfully, register the points;
    % if points are not selected well, ask the user to do it again.
    if length(c) ~= 3
        warning = 'You have not selected 3 points. Please start again.'
        nextview = camloc;
    elseif length(c) == 3
        if count == 1
            occlude1 = [c, r];
        elseif count > 1;
            temp = [c, r];
            occlude1 = [obs; temp];
        end
    end
    occlude = input('Do you want to select another obstacle, Y or N?', 's');
end
if select == 0
    obs = [0];
else
    centre0 = occlude1(1,:);
    minor0 = occlude1(2,:);
    major0 = occlude1(3,:);
    % first find the other minor and major axis points
    minor1 = [2*occlude1(1,1)-occlude1(2,1), 2*occlude1(1,2)-occlude1(2,2)];
    major1 = [2*occlude1(1,1)-occlude1(3,1), 2*occlude1(1,2)-occlude1(3,2)];
    obs = [centre0; minor0; minor1; major0; major1];
end

D.5: 3D Visibility Limit Testing Program

% This program tests if the proposed camera locations are within the
% required limits and returns the final camera location if it is within
% limit; if not, it computes where and which camera to move to get back
% within limits
function OkLoc = TestLimit(CamLoc1, CamLoc2, Move1, Move2, NewCamLoc1, NewCamLoc2, D1, D2)
% first check the angle that separates the two cameras
I = 0;
AngleSep = merge2(NewCamLoc1, NewCamLoc2);
DAngleSep1 = merge2(NewCamLoc1, CamLoc2);
DAngleSep2 = merge2(CamLoc1, NewCamLoc2);
OldAngleSep = merge2(CamLoc1, CamLoc2);
DSep1 = OldAngleSep - DAngleSep1;
DSep2 = OldAngleSep - DAngleSep2;
% if the limit is exceeded, then check the effects on each move and decide
% what to do.
% NOTE that only the x-direction move is needed here, since moving in the
% y-direction does not change the angle significantly.
if AngleSep <= 140 & AngleSep >= 40
    OkLoc = [0; I];
elseif AngleSep < 40  %& D1 ~= 0 & D2 ~= 0
    if DSep1 < 0 & DSep2 < 0
        if D1 > D2
            % Move camera 2
            I = 2;
            AngleNeeded = 40 - DAngleSep1;
            NewMove2 = tan(AngleNeeded)*D2*sign(Move2(1,1));
            OkLoc = [NewMove2; I];
        else
            % Move camera 1
            I = 1;
            AngleNeeded = 40 - DAngleSep2;
            NewMove1 = tan(AngleNeeded)*D1*sign(Move1(1,1));
            OkLoc = [NewMove1; I];
        end
    elseif DSep1 < 0 & DSep2 >= 0
        % Move camera 1
        I = 1;
        AngleNeeded = 40 - DAngleSep2;
        NewMove1 = tan(AngleNeeded)*D1*sign(Move1(1,1));
        OkLoc = [NewMove1; I];
    elseif DSep1 >= 0 & DSep2 < 0
        % Move camera 2
        I = 2;
        AngleNeeded = 40 - DAngleSep1;
        NewMove2 = tan(AngleNeeded)*D2*sign(Move2(1,1));
        OkLoc = [NewMove2; I];
    elseif DSep1 >= 0 & DSep2 >= 0
        if DSep1 == DSep2
            % Move the camera that is closer to the target away from the
            % other camera
            if D1 > D2
                % Move camera 2
                if DSep1 == 0
                    I = 2;
                    DCam = NewCamLoc1 - CamLoc2;
                    LDCam = sqrt(DCam(3,1)^2+DCam(4,1)^2+DCam(5,1)^2);
                    UDCam = [DCam(3,1)/LDCam; DCam(4,1)/LDCam; DCam(5,1)/LDCam]
                    XDirect = mergeX(NewCamLoc1, CamLoc2);
                    CamDot = dot(UDCam, XDirect(:,2));
                    if CamDot < 0
                        % Move in positive X
                        AngleNeeded = 40 - DAngleSep1;
                        NewMove2 = tan(AngleNeeded)*D2;
                        OkLoc = [NewMove2; I];
                    else
                        % Move in negative X
                        AngleNeeded = 40 - DAngleSep1;
                        NewMove2 = -tan(AngleNeeded)*D2;
                        OkLoc = [NewMove2; I];
                    end
                else
                    I = -2;
                    OkLoc = [0; I];  % move 2 in opposite
                end
            else
                % Move camera 1
                if DSep1 == 0
                    I = 1;
                    DCam = NewCamLoc2 - CamLoc1;
                    LDCam = sqrt(DCam(3,1)^2+DCam(4,1)^2+DCam(5,1)^2);
                    UDCam = [DCam(3,1)/LDCam; DCam(4,1)/LDCam; DCam(5,1)/LDCam]
                    XDirect = mergeX(NewCamLoc2, CamLoc1);
                    CamDot = dot(UDCam, XDirect(:,2));
                    if CamDot < 0
                        % Move in positive X
                        AngleNeeded = 40 - DAngleSep2;
                        NewMove1 = tan(AngleNeeded)*D1;
                        OkLoc = [NewMove1; I];
                    else
                        % Move in negative X
                        AngleNeeded = 40 - DAngleSep2;
                        NewMove1 = -tan(AngleNeeded)*D1;
                        OkLoc = [NewMove1; I];
                    end
                else
                    % move 1 in opposite
                    I = -1;
                    OkLoc = [0; I];
                end
            end
        elseif DSep1 > DSep2
            % Move camera one in the opposite direction
            I = -1;
            OkLoc = [0; I];
        else
            % Move camera two in the opposite direction
            I = -2;
            OkLoc = [0; I];
        end
    end
elseif AngleSep > 140  %& D1 ~= 0 & D2 ~= 0
    if DSep1 <= 0 & DSep2 <= 0
        if DSep1 == DSep2
            % Move the camera that is closer to the target towards the
            % other camera
            if D1 > D2
                % Move camera 2
                if DSep1 == 0
                    I = 2;
                    DCam = NewCamLoc1 - CamLoc2;
                    LDCam = sqrt(DCam(3,1)^2+DCam(4,1)^2+DCam(5,1)^2);
                    UDCam = [DCam(3,1)/LDCam; DCam(4,1)/LDCam; DCam(5,1)/LDCam]
                    XDirect = mergeX(NewCamLoc1, CamLoc2);
                    CamDot = dot(UDCam, XDirect(:,2));
                    if CamDot > 0
                        % Move in positive X
                        AngleNeeded = 140 - DAngleSep1;
                        NewMove2 = tan(AngleNeeded)*D2;
                        OkLoc = [NewMove2; I];
                    else
                        % Move in negative X
                        AngleNeeded = 140 - DAngleSep1;
                        NewMove2 = -tan(AngleNeeded)*D2;
                        OkLoc = [NewMove2; I];
                    end
                else
                    I = -2;
                    OkLoc = [0; I];  % move 2 in opposite
                end
            else
                % Move camera 1
                if DSep1 == 0
                    I = 1;
                    DCam = NewCamLoc2 - CamLoc1;
                    LDCam = sqrt(DCam(3,1)^2+DCam(4,1)^2+DCam(5,1)^2);
                    UDCam = [DCam(3,1)/LDCam; DCam(4,1)/LDCam; DCam(5,1)/LDCam]
                    XDirect = mergeX(NewCamLoc2, CamLoc1);
                    CamDot = dot(UDCam, XDirect(:,2));
                    if CamDot > 0
                        % Move in positive X
                        AngleNeeded = 140 - DAngleSep2;
                        NewMove1 = tan(AngleNeeded)*D1;
                        OkLoc = [NewMove1; I];
                    else
                        % Move in negative X
                        AngleNeeded = 140 - DAngleSep2;
                        NewMove1 = -tan(AngleNeeded)*D1;
                        OkLoc = [NewMove1; I];
                    end
                else
                    I = -1;  % move 1 in opposite
                    OkLoc = [0; I];
                end
            end
        elseif DSep1 > DSep2
            % Move camera two in the opposite direction
            I = -2;
            OkLoc = [0; I]
        else
            % Move camera one in the opposite direction
            I = -1;
            OkLoc = [0; I]
        end
    elseif DSep1 <= 0 & DSep2 > 0
        % Move camera two further in the same direction
        I = 2;
        AngleNeeded = 140 - DAngleSep1;
        NewMove2 = tan(AngleNeeded)*D2*sign(Move2(1,1));
        OkLoc = [NewMove2; I];
    elseif DSep1 > 0 & DSep2 <= 0
        % Move camera one further in the same direction
        I = 1;
        AngleNeeded = 140 - DAngleSep2;
        NewMove1 = tan(AngleNeeded)*D1*sign(Move1(1,1));
        OkLoc = [NewMove1; I];
    elseif DSep1 > 0 & DSep2 > 0
        if D1 > D2
            % Move camera 2
            I = 2;
            AngleNeeded = 140 - DAngleSep1;
            NewMove2 = tan(AngleNeeded)*D2*sign(Move2(1,1));
            OkLoc = [NewMove2; I];
        else
            % Move camera 1
            I = 1;
            AngleNeeded = 140 - DAngleSep2;
            NewMove1 = tan(AngleNeeded)*D1*sign(Move1(1,1));
            OkLoc = [NewMove1; I]
        end
    end
end

% This function finds the viewing directions of both cameras. It is used in
% TestLimit.m, which is a sub-function of NBVS.m
function viewsep = mergeX(camloc1, camloc2)
% First compute the viewing directional vector for camera 1 in the world
% frame
Translate1 = [1 0 0 camloc1(3,1); 0 1 0 camloc1(4,1); 0 0 1 camloc1(5,1); 0 0 0 1];
RotateZ1 = [cos(camloc1(1,1)) -sin(camloc1(1,1)) 0 0; sin(camloc1(1,1)) cos(camloc1(1,1)) 0 0; 0 0 1 0; 0 0 0 1];
RotateX1 = [1 0 0 0; 0 cos(camloc1(2,1)) -sin(camloc1(2,1)) 0; 0 sin(camloc1(2,1)) cos(camloc1(2,1)) 0; 0 0 0 1];
ViewDirection1 = RotateZ1*RotateX1*[0; 0; 1; 1];
XDirection1 = RotateZ1*RotateX1*[1; 0; 0; 1];
% Then calculate the viewing directional vector for camera 2 in the world
% frame
Translate2 = [1 0 0 camloc2(3,1); 0 1 0 camloc2(4,1); 0 0 1 camloc2(5,1); 0 0 0 1];
RotateZ2 = [cos(camloc2(1,1)) -sin(camloc2(1,1)) 0 0; sin(camloc2(1,1)) cos(camloc2(1,1)) 0 0; 0 0 1 0; 0 0 0 1];
RotateX2 = [1 0 0 0; 0 cos(camloc2(2,1)) -sin(camloc2(2,1)) 0; 0 sin(camloc2(2,1)) cos(camloc2(2,1)) 0; 0 0 0 1];
ViewDirection2 = RotateZ2*RotateX2*[0; 0; 1; 1];
XDirection2 = RotateZ2*RotateX2*[1; 0; 0; 1];
% use the dot product to find the angle formed by the two directional
% vectors
DirectionInWorld1 = [ViewDirection1(1); ViewDirection1(2); ViewDirection1(3)];
DirectionInWorld2 = [ViewDirection2(1); ViewDirection2(2); ViewDirection2(3)];
viewsep = [XDirection1, XDirection2]

Appendix E: Experimental Support Programs

E.1: MATLAB Program Comparing the Gradient Level of Images for Focus Testing

% This is a program which can take a series of images at different focus
% settings of a zoom lens and detect at which setting the image is
% most focused.
F = 32;  % define total focus range
deltaF = F*0.25;  % define the change in focus range
fc = F*0.5;  % start at the centre of the focus range
% create figure windows
scrsz = get(0, 'ScreenSize');
handles.fig(1) = figure('Position', [1 scrsz(4)/2 scrsz(3)/2 scrsz(4)/2]);
handles.fig(2) = figure('Position', [scrsz(4)/2 scrsz(4)/2 scrsz(3)/2 scrsz(4)/2]);
handles.fig(3) = figure('Position', [1 1 scrsz(3)/2 scrsz(4)/2]);
handles.fig(4) = figure('Position', [scrsz(4)/2 1 scrsz(3)/2 scrsz(4)/2]);
while deltaF >= 1
    % read images
    filename = sprintf('neonfocus%d.tif', fc);
    I1 = imread(filename);
    fr = fc + deltaF;  % define the focus location to the right
    % read images
    filename = sprintf('neonfocus%d.tif', fr);
    I2 = imread(filename);
    fl = fc - deltaF;  % define the focus location to the left
    % read images
    filename = sprintf('neonfocus%d.tif', fl);
    I3 = imread(filename);
    % if the image is colour, convert it from colour to black and white
    if length(size(I1)) == 3
        Ibw1 = rgb2gray(I1);
    else
        Ibw1 = I1;
    end
    if length(size(I2)) == 3
        Ibw2 = rgb2gray(I2);
    else
        Ibw2 = I2;
    end
    if length(size(I3)) == 3
        Ibw3 = rgb2gray(I3);
    else
        Ibw3 = I3;
    end
    %ImageSize1 = size(I1);
    I1a = im2double(Ibw1);  % convert data to double precision
    I2a = im2double(Ibw2);
    I3a = im2double(Ibw3);
    [FX1, FY1] = gradient(I1a);  % finding the gradient of each image
    [FX2, FY2] = gradient(I2a);
    [FX3, FY3] = gradient(I3a);
    GradSizeX1 = size(FX1);
    GradSizeY1 = size(FY1);
    % Calculate the magnitude of the gradient
    SQGradX1 = FX1.*FX1;
    SQGradY1 = FY1.*FY1;
    SUMSQ1 = SQGradX1 + SQGradY1;
    MagGrad1 = SUMSQ1.^0.5;
    % To measure focus, we want to measure from the centre of the images.
    onequal = size(MagGrad1)/4;  % measure the size of the centre of the image
    centreend = onequal*3;
    centreX = onequal(1,1);
    centreY = onequal(1,2);
    centreXe = centreend(1,1);
    centreYe = centreend(1,2);
    % find the max value of the magnitude of the gradient of the centre of
    % the image
    A1 = max(MagGrad1(centreX:centreY, centreXe:centreYe));
    B1 = max(A1);  % first find the row max, then find the column max
    SQGradX2 = FX2.*FX2;
    SQGradY2 = FY2.*FY2;
    SUMSQ2 = SQGradX2 + SQGradY2;
    MagGrad2 = SUMSQ2.^0.5;
    A2 = max(MagGrad2(centreX:centreY, centreXe:centreYe));
    B2 = max(A2);
    SQGradX3 = FX3.*FX3;
    SQGradY3 = FY3.*FY3;
    SUMSQ3 = SQGradX3 + SQGradY3;
    MagGrad3 = SUMSQ3.^0.5;
    A3 = max(MagGrad3(centreX:centreY, centreXe:centreYe));
    B3 = max(A3);
    % determine which image is most focused.
    G = [B1 B2 B3];
    [C, I] = max(G);
    % subplot(2,2,1), imshow(I1)
    % subplot(2,2,2), imshow(I2)
    % subplot(2,2,3), imshow(I3)
    % if still looking, change the centre of the focus according to which
    % image is more focused
    if I == 1
        % subplot(2,2,4), imshow(I1)
        fc
    elseif I == 2
        % subplot(2,2,4), imshow(I2)
        fc = fr
    elseif I == 3
        % subplot(2,2,4), imshow(I3)
        fc = fl
    end
    deltaF = 0.5*deltaF;  % halve the change of focus
    I1 = imresize(I1, 0.55);
    I2 = imresize(I2, 0.55);
    I3 = imresize(I3, 0.55);
    figure(handles.fig(1)); imshow(I1)
    figure(handles.fig(2)); imshow(I2)
    figure(handles.fig(3)); imshow(I3)
    % if the change of focus is too small, exit the program and display the
    % focused image
    if deltaF < 1
        if I == 1
            figure(handles.fig(4)); imshow(I1)
            fc
        elseif I == 2
            figure(handles.fig(4)); imshow(I2)
            fc = fr
        else
            figure(handles.fig(4)); imshow(I3)
            fc = fl
        end
        break
    end
end

E.2: Path Point Orientation Testing Program

% This program computes the orientation error of the known position with
% the computed position in terms of axis angles.
function AxA = OrientError(realY, realP, realR, testY, testP, testR)
syms Y P R
RPY = [cos(Y)*cos(P) -sin(Y)*cos(R)+cos(Y)*sin(P)*sin(R) sin(Y)*sin(R)+cos(Y)*sin(P)*cos(R);
       sin(Y)*cos(P) cos(Y)*cos(R)+sin(Y)*sin(P)*sin(R) -cos(Y)*sin(R)+sin(Y)*sin(P)*cos(R);
       -sin(P) cos(P)*sin(R) cos(P)*cos(R)];
Y = -testY*pi/180;
P = -testP*pi/180;
R = -testR*pi/180;
R1 = subs(RPY);
Y = -realY*pi/180;
P = -realP*pi/180;
R = -realR*pi/180;
R2 = subs(RPY);
D = R1*R2';
AxA = acos((trace(D)-1)/2)
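The axis-angle error metric in E.2 relies on the identity that for two rotation matrices R1 and R2, the angle of the relative rotation R1*R2' is acos((trace(R1*R2') - 1)/2). The following standalone sketch reproduces that computation numerically (in Python rather than MATLAB; the function names `rpy` and `orient_error_deg` and the test angles are illustrative, not from the thesis). The only departure from the MATLAB version is clamping the acos argument to [-1, 1] to guard against round-off.

```python
import math

def rpy(yaw_deg, pitch_deg, roll_deg):
    """Roll-pitch-yaw rotation matrix with the same element layout as the
    RPY matrix in OrientError (angles negated and converted to radians)."""
    Y = -math.radians(yaw_deg)
    P = -math.radians(pitch_deg)
    R = -math.radians(roll_deg)
    cY, sY = math.cos(Y), math.sin(Y)
    cP, sP = math.cos(P), math.sin(P)
    cR, sR = math.cos(R), math.sin(R)
    return [
        [cY*cP, -sY*cR + cY*sP*sR,  sY*sR + cY*sP*cR],
        [sY*cP,  cY*cR + sY*sP*sR, -cY*sR + sY*sP*cR],
        [-sP,    cP*sR,             cP*cR],
    ]

def orient_error_deg(real_angles, test_angles):
    """Axis-angle separation between two RPY orientations in degrees,
    i.e. acos((trace(R1*R2') - 1)/2) as in E.2."""
    R1 = rpy(*test_angles)
    R2 = rpy(*real_angles)
    # trace(R1 * R2^T) without forming the full product:
    # sum over i of (R1 R2^T)_ii = sum over i, j of R1[i][j]*R2[i][j]
    tr = sum(R1[i][j] * R2[i][j] for i in range(3) for j in range(3))
    # clamp against floating-point round-off pushing the argument
    # slightly outside [-1, 1]
    arg = max(-1.0, min(1.0, (tr - 1.0) / 2.0))
    return math.degrees(math.acos(arg))
```

As a sanity check, identical orientations give an error of (essentially) zero, and a difference in a single angle with the other two at zero reduces to a rotation about one axis, so the reported error equals that angular difference.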

