EMPIRICAL EVALUATION OF INFORMATION FOR ROBOTIC MANIPULATION TASKS

By

JANE MULLIGAN
B.C.S.H., Acadia University, 1984
M.Sc. (Computer Science), University of British Columbia, 1988

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
in
THE FACULTY OF GRADUATE STUDIES
(DEPARTMENT OF COMPUTER SCIENCE)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 1996
© I.J. Mulligan, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver, Canada

Date: August 27, 1996

Abstract

Rigorous analysis and evaluation of real implemented robotic systems for intelligent tasks are rarely performed. Such systems are often extremely complicated, depending not only on 'interesting' theoretical parameters and models, but on many assumptions and constants which may be set almost arbitrarily. The information required to achieve good task performance includes all of these acknowledged and unacknowledged parameters. In order to evaluate and compare systems which employ diverse components and methods, we need a framework and criteria by which to measure them. We demonstrate techniques for evaluating a system's performance as a function of the parameters and information it uses, and compare different task implementations on this basis.

We view all task implementations as particular parameterizations of the task goals they represent. Some parameters belong to fixed, pre-measured models; others are data collected or measured online by the system. Comparing systems then becomes the comparison of these parameterizations. There are three key questions we need to ask when determining task parameterizations: What do we need to measure (observability)? How well must we discriminate between measured values (precision)? And how accurately must our measurements reflect the true world state (accuracy)?
We present a performance-based framework for empirical evaluation and comparison of task information based on these three basic notions of observability, precision (discrimination) and accuracy. Factorial experiments give us a starting point for determining which measurements, and their interactions, define the task subspace. Sensitivity analysis determines optimal parameter values and the effect of uncertainty and variations in measurements on task performance. Finally, a cost/performance metric offers a quantification of the task complexity with respect to the parameters, and performance-based precision and accuracy requirements determined in the previous steps.

The experiments presented to demonstrate this methodology are based on a part orienting task implemented in two very different ways. One system uses a 'sensorless' model-based push-orienting method; the other uses a real-time stereo vision system to localize objects in the workspace for sensor-driven part orienting. The parameters used to represent manipulated parts for sensing versus model-based manipulation are similar, though not identical, sets, and encode the information critical to the task. Through detailed experiments we establish the statistically significant parameters and parameter interactions for each system, and apply sensitivity analysis to set optimal parameter values and explore the nature of interactions. Finally, the cost/performance metric gives us a means of counting the computation and sensing costs to achieve the observed system error rates.

This type of analysis is a necessary step to understanding integrated intelligent systems. It reveals aspects of system implementations which cannot easily be predicted in advance, and gives a clear picture of the information required and the strengths and weaknesses of the system.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgement

1 Exploring Information for Robotic Tasks
  1.1 Information for Robotic Tasks
    1.1.1 Sensing for Action
    1.1.2 Parameterization, Representation and Features
  1.2 Detailed Analysis of Two Information Modalities
    1.2.1 Part Orienting
    1.2.2 Framework for Performance-based Evaluation of Information
  1.3 Thesis Plan

2 Information for Robotic Tasks
  2.1 Measurement and Information
  2.2 Task Level Parameterizations
  2.3 State Spaces, Observability and Information for Tasks
    2.3.1 Task Based Perception
    2.3.2 Features and Information
  2.4 Evaluating Task Performance
    2.4.1 Complexity of Robotic Tasks
    2.4.2 Part Orienting
  2.5 Summary

3 Evaluating Task Parameterizations
  3.1 Identifying the Task Subspace
    3.1.1 Design of Experiments
  3.2 Precision and Accuracy
    3.2.1 Sensitivity Analysis
    3.2.2 Measuring Cost versus Performance
  3.3 Comparing Implementations
    3.3.1 Part Orienting Systems
  3.4 Framework Summary

4 Push Orienting
  4.1 Apparatus
  4.2 The Robot
    4.2.1 Straight Line Interpolated Motion
  4.3 Push Planner
  4.4 Robust Push Plans

5 Vision-based Part Orienting
  5.1 Introduction
  5.2 Stereo Vision
    5.2.1 Calibration
    5.2.2 Standard Transformations
    5.2.3 Edge-based Correlation
    5.2.4 DATACUBE Pipeline Implementation
  5.3 Part Localization
    5.3.1 Right Angle Part Localization
    5.3.2 "G" Localization
  5.4 Summary

6 Experimental Results
  6.1 System Task Parameterizations
    6.1.1 Test Parts
  6.2 Methods
    6.2.1 The 1/2 and 1/4 Fractional Factorial Designs
    6.2.2 Simulators
  6.3 Push Planner - Parameters and Outcomes
    6.3.1 Triangle Push Planner Results
    6.3.2 "L" Push Planner Results
    6.3.3 "G" Push Planner Results
  6.4 Vision and Manipulation - Parameters and Outcomes
    6.4.1 Triangle Vision-based Orienter Results
    6.4.2 "L" Vision-based Orienter Results
    6.4.3 "G" Vision-based Orienter Results

7 Analysis of Results
  7.1 Optimization of Information for Tasks
  7.2 Cost/performance Analysis
    7.2.1 The Formalism
    7.2.2 Vision Orienting
    7.2.3 Push Orienting
    7.2.4 Summary

8 Conclusion
  8.1 Information for Tasks
    8.1.1 A Methodology for Comparing and Evaluating Task Implementations
  8.2 Comparison of Task Implementations
  8.3 Future Work

Bibliography

Appendices

A Treatments and Outcomes - Push Planner
  A.1 Triangle
    A.1.1 Success Rate
    A.1.2 Planning Time
  A.2 "L" Part
    A.2.1 Success Rate
    A.2.2 Planning Time
  A.3 "G" Part
    A.3.1 Success Rate
    A.3.2 Planning Time

B Treatments and Outcomes - Vision Orienter
  B.1 Triangle
    B.1.1 Success Rate
    B.1.2 Extraction Time
  B.2 "L" Part
    B.2.1 Success Rate
    B.2.2 Extraction Time
  B.3 "G" Part
    B.3.1 Success Rate
    B.3.2 Extraction Time

List of Tables

3.1 ANOVA Table
3.2 Factorial coefficients
3.3 ANOVA for Factorial Experiments
6.1 Push Planner Parameters
6.2 Vision System Parameters
6.3 Push parameter levels
6.4 Triangle push planner success ANOVA
6.5 Triangle push planner time ANOVA
6.6 "L" push planner success ANOVA
6.7 "L" push planner time ANOVA
6.8 "G" push planner success ANOVA
6.9 "G" push planner time ANOVA
6.10 Vision parameter levels
6.11 Triangle vision success ANOVA
6.12 Triangle vision time ANOVA
6.13 "L" vision success ANOVA
6.14 "L" vision time ANOVA
6.15 "G" vision success ANOVA
6.16 "G" vision time ANOVA

List of Figures

1.1 Visual Part Orienter
1.2 Fence part feeder
2.1 Basic dynamic system diagram
2.2 Indiscriminable Sets Induced by Measurement
3.1 Task black box
3.2 Computing BC interaction
4.1 Equipment
4.2 Configuration Maps
4.3 Alignment error
5.1 Stereo Geometry
5.2 Max Video 200 Components
5.3 Continuous Datacube Pipes
5.4 Left Camera
5.5 Right Camera
5.6 Windowing histogram
5.7 Epipolar Approximation
5.8 Theta Map
5.9 Lines reprojected into image
5.10 Data and Line Fits
5.11 "G" Vision Model
6.1 Triangular part
6.2 L-shaped part
6.3 G-shaped part
6.4 Triangle Sensitivity Surface in D and L
6.5 Triangle Search for L with D=26
6.6 Triangle push plan success re: part variation
6.7 Extended "L" Sensitivity Surface in D and L
6.8 "L" push plan success re: part variation
6.9 "G" Sensitivity Surface in D and L
6.10 Extended "G" Sensitivity Surface in D and L
6.11 "G" push plan success re: part variation
6.12 Rho discretization versus success
6.13 Rho discretization versus time
6.14 Number of disparities versus success
6.15 Number of disparities versus time
6.16 Rho discretization for selected disparities
6.17 Rho discretization for selected disparities versus time
6.18 Gap/discretization for "G"
6.19 Optimizing gap/discretization for "G"
7.1 Histogram of vision system position error for "G"
7.2 Histogram of vision system orientation error for "G"
7.3 Histogram of push plan orientation error for "G"

Acknowledgement

I would like to thank Greg Grudic for his constant support and encouragement throughout my PhD, without his help it would have been impossible. Thanks also to Alex Kean who always had insightful comments and encouraging words to contribute. My gratitude to my supervisor Alan Mackworth for supporting me and smoothing the way through the latter, dark days of my degree. Thanks to Peter Lawrence, who got me into robotics in the first place, and to the rest of my supervisory committee who stuck with me even when my ideas seemed far fetched or ill defined. Finally thanks to the many friends whose discussion, suggestions and commiseration helped me through the process.

Chapter 1

Exploring Information for Robotic Tasks

Rigorous analysis and evaluation of real implemented robotic systems for intelligent tasks are rarely performed.
Such systems are often extremely complicated, depending not only on 'interesting' theoretical parameters and models, but on many assumptions and constants which may be set almost arbitrarily. The information required to achieve good task performance includes all of these acknowledged and unacknowledged parameters. For any task goal there are many potential robotic systems which could execute a plan to achieve it. These would employ a variety of actuators, information sources and computations. We would like to design task solutions which are optimal, but in order to evaluate and compare systems which employ diverse components and methods, we need a framework and criteria by which to measure them. In the absence of clear and detailed specifications for complex robotic tasks, determining what information is required will inevitably be a cyclical and experimental process. This thesis demonstrates an empirical, performance-based methodology for analysing and comparing the information used in implemented robotic manipulation tasks.

In the quest to build intelligent agents, we often study particular subsystems such as reasoning, sensing or motion planning without reference to how to integrate them for the particular tasks we wish to achieve. Often the result is that the agent's subsystems are particularly well tuned to perform in ways which are not relevant to the task at hand. This may be the result of having the elusive goal of building intelligence, making the target tasks ever more ambitious. The goals of intelligent agents are often complicated and difficult to specify, and therefore the idea of determining the sensors or 'instrumentation' which will satisfy their requirements is equally vague. The framework provided in this thesis gives specific empirical techniques for analyzing which measurements are critical and what precision and accuracy are required to achieve the desired level of task performance. This allows us to optimize our systems with respect to the task at hand.

1.1 Information for Robotic Tasks

In recent years researchers have begun to look at the problem of determining what information is required for robotic tasks. Donald [18] has proposed the idea of information invariants to formally address this question. He describes sensory systems as components embedded in the world with some level of calibration or co-designation. These systems can be permuted and compared to system specifications and to each other. He proposes that a measure of information invariants may imply a measure of hardness or difficulty, and hence a notion of lower bounds in robotics. Others have proposed "RISC" robotics, which would use simple, minimal sensors and actuators to achieve complex goals [14]. On the sensor side there is a new press for metric vision, which would apply more rigorous evaluations to the performance of high level vision systems.

Current practice with respect to sensor system design for intelligent agents, however, is often to attach a sophisticated existing system to the robot and hope it delivers the desired results. For example, given the most sophisticated vision system available, if such a system cannot return relevant information, it is not useful for furthering the task. Obviously sensors, and the time and computation required to extract relevant data from them, as well as the space and computation required to evaluate internal models, all have a significant cost.
Given the scarcity of resources, we need to examine exactly what information is required for a particular task in a particular environment. Ideally, we could identify the optimal or necessary and sufficient information for a robotic system to execute a task with acceptable reliability. Realistically, we can begin by studying the systems we have and analysing how the information they use, in the form of models or sensor readings, affects their performance.

For any agent, organic or electro-mechanical, the world consists only of what it can perceive or infer from its perceptions. We often think of robots as acting in the world we perceive, but in fact they live and act in the world of their own perceptions as generated by their suite of sensors. For example, robots such as Brooks's Attila are often touted as acting sensibly in a complex world, but their world consists only of "obstacle" or "not obstacle", warm moving object or not, and thus their projection of the world is not really very complex. These robots are successful only so long as this subspace, and their reactions to it, allow them to succeed in their tasks in the environment where they live. What is really interesting in reactive robots and in nature is how well such projections of the environment satisfy the execution of task goals.

Models and sensors used by robotic systems are only valuable to the extent that they contribute to the performance of the robot's task. We need to answer questions such as "Is the subspace of the world observable by the robot the same as the subspace of the task?" and "Can we transform the task into this subspace?" In fact, systems from planning to reactive control are computations on simplified models, and they all work as long as the model and computation are adequate to produce or approximate the desired results.

One of the defining characteristics of tasks for intelligent agents is their complexity. They are often composed of numerous subgoals, all conditioned on preceding events and the current world state. There are many ways to represent a robotic task. Probably the most common descriptions are the symbolic models of STRIPS-like planners, configuration space (C-SPACE) approaches, or methods based on the principles of control systems. Those in the reactive robotics camp have claimed that simple control loops are sufficient to generate intelligent robot behaviours [9]. Advocates of 'sensorless' manipulation claim that all you need is a good world model [42]. Broadly speaking, any representation must account for the information (sensed or modelled), actuators and computations (match, search, etc.) available to the system and the requirements of its task. There is as yet no accepted formal description of complicated intelligent tasks, particularly in light of the variety of actuators, sensors and computations available. In particular, the difficulty and characteristics of obtaining sensor data are not accounted for, except perhaps by a simple uncertainty region. Our current best tool for comparing and evaluating such systems is thus to compare and evaluate their performance.

We take our notion of information from the field of Measurement Science [64]. Measurement involves extracting information from objects or events in the physical world.
The raw numbers recorded by the measurement system are called data; they are converted to information via calibration. Calibration compares data with some agreed measurement standard to assign them common meaning or symbolism. For our purposes, then, information is the representation of physical phenomena via standardized symbolism. We will view sensors as measurement instruments; models are stored information which at some point was obtained by a measurement system. Knowledge, in turn, is some interpretation of a collection of information.

Figure 1.1: A possible stereo vision based part orienter.

Let us consider, for example, the simplified part manipulation system illustrated in figure 1.1, consisting of a robot manipulator arm and a pair of stereo cameras. The task goal is to pick up the part and place it in a known position and orientation. Disparities extracted from a stereo vision system are only data until we apply a calibrated camera model to extract depth points from them. They then become position information which, via model fitting or another localization technique, becomes the knowledge of part position required by the task. Calibration may be more or less precise depending on the number of (world, image) points used to generate the camera model, and the degree to which those points span the workspace. For this example the part orienting task is parameterized by the estimated part position p and the calibrated camera model C, which refers data to the robot's coordinate frame, as well as the aperture of the gripper g, which determines how much error in p can be tolerated. If the aperture is large in comparison to the part dimension, estimation of p can be less accurate while still allowing a successful grasp. The performance or success rate S of our orienting system is a function parameterized by these quantities, thus S = f(p, C, g).

Clearly this example is an extreme simplification in comparison to real robotic systems. Indeed, any real system will be far more complex than any model we can build of it. One of the goals of the framework presented in this thesis is to be able to analyse even very complicated systems, and determine how their many parameters act and interact to affect system performance.
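To make the data-to-information step concrete, the following minimal sketch converts a raw stereo disparity (data) into a workspace position estimate p (information) through an idealized rectified camera model. The focal length, baseline and principal point are invented placeholder values, not the calibration of the systems described in later chapters.

```python
import numpy as np

# Assumed placeholder calibration for an idealized rectified stereo pair.
FOCAL_PX = 800.0        # focal length in pixels
BASELINE_M = 0.12       # camera separation in metres
CX, CY = 320.0, 240.0   # principal point in pixels

def disparity_to_point(u, v, disparity):
    """Turn a raw pixel measurement (data) into a position estimate
    in the camera frame (information) via the calibrated model C."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = FOCAL_PX * BASELINE_M / disparity   # depth from triangulation
    x = (u - CX) * z / FOCAL_PX
    y = (v - CY) * z / FOCAL_PX
    return np.array([x, y, z])

# A coarser calibration, or a wider gripper aperture g, changes how much
# error in the resulting estimate p a grasp can tolerate.
p = disparity_to_point(u=350.0, v=260.0, disparity=16.0)
print(p)
```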
1.1.1 Sensing for Action

Overall there are three main requirements for sensing for robotic tasks: observability (we must know "what-to-sense" to make progress toward the goal); precision (our sensors must be able to detect the desired parameters at the required discrimination); and accuracy (our information must be calibrated or reconciled to some common frame such as the workspace of the actuators). Clearly for our vision-based part orienter example, the part position must be observable in order for the arm to manipulate it, position values must be returned with discrimination on the order of a centimetre, and the accuracy must be within a centimetre or two of the true part position. We conjecture that there is information inherent to a robotic problem; in other words, to successfully solve a robotic problem certain information is required to specific precision in order to adequately compute the result. Which information sources provide this fundamental information is a separate design issue.

Observability

What events in the world need to be sensed, modelled or measured to achieve a particular task goal? In modern linear state space control the concept of observability refers to the ability to identify any state of the system based on some finite sequence of future sensor readings [25]. This admits specific tests of the matrix formulation of the system to determine whether all states are observable. Of course for many systems we have no tidy set of differential equations, but clearly these systems must still be able to distinguish task critical states. The idea of the robot being limited by its sensors and models has, however, been addressed by work on task directed sensing [24], planning in recognizable sets [21, 43] and landmark-based navigation [17, 34, 35]. In this context the system parameters which must be observable, and hence are essential to task execution, suggest a minimal information set.

A related idea in Measurement Science is the problem boundary, or the limit of significant outside influences on the measurement system. No specific techniques are offered to determine this boundary except for cyclical refinement of the system model.

Precision

Not only what the agent sees but how clearly it sees is important to achieving a task. Obviously a system such as ROBODOC, which is intended to perform hip replacements [32], operates under much more stringent standards of precision than a system in a less critical task. Precision has three aspects: discriminability is the resolution of changes in a measured quantity which can be observed; repeatability is the difference between measured values of the same world value in the short term; and reproducibility is the difference between measured values of the same world value in the long term.

We will assume that tasks have an inherent precision and complexity. Generally a task requires certain physical differences between the goal state and the world state to be resolved to a level within some specified tolerance. Speed is also critical for sensing for action and is frequently traded off against other performance criteria such as discrimination.

Accuracy

The work described in this thesis began with an examination of hand-eye coordination for visually guided manipulation. In many instances such coordination takes the form of precise calibration between the camera coordinate frame and the manipulator workspace. We have also seen that in order for measured data to actually become meaningful information it must be referred to some common standard by means of calibration. Whether achieved by an adaptive learning scheme or by rigorous fitting of world values to sensor values, relating sensor readings to the real world is critical to the agent's ability to act in that world.

1.1.2 Parameterization, Representation and Features

Complex 'intelligent' robotic systems are often a patchwork of sensors and actuators spread across several architectures. If we want to view real systems in a unified way for purposes of comparison and evaluation, we need a framework in which to describe and analyse them. The study of physical systems typically cycles through three steps: determining minimal model parameterizations; predicting observations using the laws applying to the forward model; and solving for precise model parameters using observations and the inverse problem. For robotic systems the idea of minimal models is a relatively recent, but important, step.
Viewing these systems as parameterizations, with features, thresholds, constants, and constraints as the task level parameters of interest, gives us a unifying representation common to such diverse fields as economics, statistics, control, and inverse theory. It allows the use of well established methods for analysis and comparison.

In existing systems task level parameters are often referred to as "features". In the terminology of measurement such features will usually be measurements of type indirect (composed from known relationships between other measurements) or combined (found by solving systems of equations formed from measurements) [52].

For robotic systems there remains much to be done in terms of identifying such issues as how sensing, modelling, actuation, and computation are used and interchanged for particular tasks. Many implementations are possible for any task. If we compare them we may learn something fundamental about the task itself. For industrial applications it may be sufficient to create a framework where we can look at the costs and benefits of various task implementations and recommend the one which is optimal under specified measures. As a step toward these goals we present specific experimental methods for evaluating and comparing real task implementations, since a better understanding of the systems we have should give us insight into how such systems can be made optimal.

1.2 Detailed Analysis of Two Information Modalities

There is currently no standard empirical framework for evaluating and comparing robotic task implementations. Donald has introduced the premise that we can reduce or compare resources of one task implementation to another to study information invariants [18]. On a more practical level, we will demonstrate detailed experimental analyses of implementations of the same robotic task. The task we have chosen is the part orienting task popular in the literature on sensorless manipulation. Logically, if, as claimed, this work has successfully moved task information entirely to the domain of models, it may give us insight into the information actually in use. A part can be oriented by a number of different devices. Perhaps because of the low dimensionality (resolving only the angle of the part) and hence limited state space of the task, part orienting has become the focus of considerable study.

1.2.1 Part Orienting

Part orienting is a problem encountered in automatic assembly. In order to assemble a product in a factory, parts must generally be presented to a work cell in a known position and orientation. The importance of this task in industry is attested to by the variety of devices devised to execute it. A common industrial part manipulator is the vibratory bowl feeder. The bowl feeder contains a collection of randomly oriented parts which, as a result of the vibration, move up a track on the interior of the bowl. The physical properties of the parts cause them to be oriented in one of a number of stable resting positions. At the top of the track parts encounter a series of features in the bowl wall and track which cause them to be either reoriented or rejected and dropped back into the bowl [13].
Another part orienting strategy, formalized by Peshkin and Sanderson [51], is that of parts on a conveyor belt pushed past a series of specially designed fences of various orientations (see figure 1.2). As parts pass the last fence they lie in a single desired orientation. The passive fences with moving conveyor act in a manner similar to an active pusher operating on a statically supported object.

Figure 1.2: Parts travel along the belt and are "pushed" by the fences along the way to exit in a desired range of final orientations.

Another, more general, manipulation system which is capable of part orienting is a robot arm with one or more cameras, similar to our example system. A part is localized by the vision system so that the arm can directly orient it using a simple parameterized pick and place plan. Other proposed techniques include 'squeeze grasps', where a sequence of oriented squeezes by a robot gripper can be used to establish a particular orientation [27, 10]. Obviously the task goal is the same for each of these proposed implementations, but the nature of the sensing, models and manipulation differ significantly. Clearly they all entail physical or sensory localization of the part, and some degree of manipulation.

1.2.2 Framework for Performance-based Evaluation of Information

Our hypothesis is that different task implementations can be described as different feature-based parameterizations of the same problem. Analysis and comparison of these parameterizations tells us something fundamental about the information being used. More specifically, for any proposed task solution it is invaluable to have a clear analysis of system parameters and how they interact with each other and with task success. We have implemented two part orienting systems: a push orienter and a vision-based manipulation system. There are a number of techniques available in the literature which allow us to analyse the requirements of parameter observability, precision and accuracy which characterize task information. These include factorial experiments, sensitivity analysis, and analyses which suggest cost/performance metrics.

Factorial Experiments

Factorial experiments [33] are applied to systems where a number of parameters or factors may interact to affect system outcome. They are designed to give the researcher information on the effects and interactions of all factors simultaneously while limiting the number of experimental trials required. "Taguchi's method" is a version of such designs advocated for improving manufacturing processes [55]. Generally k factors are tested at each of l levels, yielding an l^k factorial experiment. The theory of experimental design also gives us analysis of variance techniques which allow us to determine whether effects observed as a result of setting parameters to various levels are statistically significant. Thus for a system which we hypothesize to have k possibly interacting parameters, we can run specifically designed experiments and use statistical tools to determine whether each factor or interaction has a significant effect on outcome. For us this approach helps answer the question of what to sense: we will devote our resources to sensing and optimizing those parameters with significant impact on task performance.

For our example system in figure 1.1, we could test all combinations of the three parameters p, C, and g at each of two levels. Levels can be qualitative as well as quantitative; for example, the levels of p might be two different techniques for extracting part position from image disparities in two cameras. Such a two-level factorial design would allow us to analyse, at least at a coarse level, the significance of each factor and interaction with respect to the observed variance in S. We could then devote our resources to the most significant factors or combinations of factors. Gaining a sense of the importance of factor interactions is useful when tradeoffs occur in complex systems, as they are not always easy to observe without this sort of principled experiment.
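The sketch below shows the arithmetic behind such a two-level factorial analysis for the three example parameters p, C and g: all eight treatment combinations are coded as plus or minus one, and main effects and two-factor interactions are estimated as contrasts of the outcomes. The outcome values are invented placeholders standing in for measured success rates.

```python
from itertools import product
import numpy as np

factors = ["p", "C", "g"]                             # extraction method, calibration, aperture
design = np.array(list(product([-1, 1], repeat=3)))   # the 8 coded treatment combinations

# Placeholder outcomes (e.g. success rate per treatment), invented for illustration.
y = np.array([0.55, 0.60, 0.70, 0.78, 0.52, 0.61, 0.72, 0.90])

def effect(columns):
    """Estimated effect of a factor or interaction: the contrast of outcomes
    weighted by the product of the coded levels of the factors involved."""
    contrast = np.prod(design[:, columns], axis=1)
    return 2.0 * np.mean(contrast * y)   # (mean at +1) minus (mean at -1)

for i, name in enumerate(factors):
    print(f"main effect {name}: {effect([i]):+.3f}")
for (i, a), (j, b) in [((0, "p"), (1, "C")), ((0, "p"), (2, "g")), ((1, "C"), (2, "g"))]:
    print(f"interaction {a}x{b}: {effect([i, j]):+.3f}")
```

In a real study the same contrasts feed an analysis of variance to decide which effects are statistically significant rather than noise.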
Sensitivity Analysis

Sensitivity analysis [31] determines how a model varies with its parameters. Analytical or empirical methods are used to track system outcomes as parameters are modified. This gives us further evidence for what information is critical to task performance. We apply sensitivity analysis to determine the precision and accuracy required to achieve desired system performance, and to reinforce our hypotheses about which system parameters are most significant.

Again referring to the example system, suppose we have determined that the technique designated the "high" level of p significantly outperforms the other method. Suppose also that we have observed that calibration and gripper aperture interact significantly. With this evidence we can simultaneously optimize the gripper aperture g as well as the degree of calibration C with respect to the preferred extraction method, using sensitivity analysis techniques.
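A minimal sketch of an empirical sensitivity sweep of this kind: one parameter (here the gripper aperture g) is varied over a range while the outcome of repeated trials is recorded. The run_trials function is a stand-in for the real experiment or simulator; its error model is an invented toy, not measured data.

```python
import numpy as np

def run_trials(aperture_mm, n_trials=50):
    """Placeholder for the real experiment or simulator: fraction of successful
    grasps at a given gripper aperture, under an invented toy error model."""
    rng = np.random.default_rng(0)
    position_error = np.abs(rng.normal(0.0, 8.0, n_trials))   # mm of error in p
    return np.mean(position_error < aperture_mm / 2.0)        # grasp succeeds if error fits

apertures = np.linspace(10.0, 60.0, 11)          # candidate parameter values to sweep
success = [run_trials(a) for a in apertures]

for a, s in zip(apertures, success):
    print(f"aperture {a:5.1f} mm -> success rate {s:.2f}")

best = apertures[int(np.argmax(success))]
print("best aperture under this toy model:", best, "mm")
```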
Cost/performance Measures

When proposing computer algorithms for task solutions it is usual to provide a statement of the computational cost or complexity of the method. We want to provide a similar statement of cost versus performance for the robotic task implementations we describe. A promising candidate is information-based complexity theory (ε-complexity), a part of analytical or continuous complexity theory; it attempts to determine the intrinsic difficulty of approximating continuous problems, given only expensive, partial, inaccurate information [49]. It determines the computational cost required to achieve a specified uncertainty level ε. This provides us with a convenient metric for calibration and accuracy characteristics of a sensori-computational system. Factorial experiments and sensitivity analysis tell us which parameters are of interest and what precision and accuracy are required; the notation and abstractions of ε-complexity can then be used to suggest the complexity of these requirements.

ε-complexity assigns the usual complexity values to the computational parts of an approximation system, but adds a constant c for every sensor evaluation required. For our example the calibration data requires a priori sensor evaluations as well as computational model fitting, while the online estimation of p requires obtaining disparities as well as some search process to localize the part based on them. Empirically or analytically we can determine the error ε achieved by the system. If n_C is the number of sensor values for C and n_p the number for p, then we add n_p * c + n_C * c to the combinatoric complexity of the system to obtain the ε-complexity for the determined ε.

Evaluation

The above techniques form the basis of our framework for performance-based analysis of robotic task information. We want to answer the questions "what should we sense?", "how precisely?" and "with what degree of accuracy?" in order to establish the three basic characteristics of task information. We conclude that intelligent robotic systems are parameterized by measurements which are used online or stored as models. Factorial experiments offer one way of determining what parameters must be observable. Sensitivity analysis can be used to verify the importance of particular parameters and determine the precision and accuracy required to achieve desired task performance. Finally, ε-complexity allows us to quantify the difficulty of achieving these performance levels, and compare different implementations of the same task.

For the two real part orienting systems we have analysed in detail, we began by determining a comprehensive parameter set for each system (11 parameters for the push planner, 9 for the vision and manipulation system). The methodology described above was applied based on these parameterizations for each of 3 parts (a triangle, an "L" shape and a "G" shape). Perhaps the most surprising results were the differences between the sets of parameters which proved significant for each part. Even for the same orienting system, the different parts demonstrated different sets of significant parameters and interactions. For the push planner the analysis clearly indicated the increasing complexity of the parts with respect to push orienting. For the vision system the analysis turned up important interactions among model parameters, which sensitivity analysis then allowed us to explore specifically.

Clearly this methodology reveals aspects of task implementations which are not easy to predict in advance, as well as demonstrating a system's strengths and weaknesses. The result is a detailed picture of the information requirements of the particular part orienting task studied, and the way in which information is encoded in various parts of the implemented systems. Rigorous testing and analysis of intelligent robotic systems, to determine key system parameters and their effect on performance, is critical to providing and understanding robust task solutions. It provides a necessary first step toward developing a quantitative science of robotics.

1.3 Thesis Plan

Chapter 2 reviews material on task directed sensing, system parameterizations, feature-based models, and system performance evaluation techniques. Chapter 3 presents the basic evaluation methodology for exploring task information for a particular task parameterization, including factorial experiments, sensitivity analysis and information-based complexity. Chapter 4 describes the equipment used for our empirical tests and the details of the push orienter. Chapter 5 describes the fast calibrated stereo system used for our vision-based part orienter. Chapter 6 contains the empirical results of our system evaluations. Chapter 7 analyzes these observations and chapter 8 summarizes the conclusions.

Chapter 2

Information for Robotic Tasks

What characterizes the information used in robotic tasks? Clearly the information available to any system is limited to what can be measured, either a priori in the form of models, or online by its sensors. To clarify this notion of information from measurements, section 2.1 introduces definitions from measurement science. We then propose a standard view of intelligent robotic systems as parameterizations in a physical parameter space of models and data. In section 2.3 we introduce our basic premise that there are three aspects to information for robotic tasks: observability, precision and accuracy.
Section 2.3.1 reviews work in task directed sensing, including planning in recognizable sets, sensor planning and information invariants. Frequently the measurements used in sensory and robotic systems are described as "features". Section 2.3.2 reviews the types of features used in biological, sensor and robotic systems. Finally, some formal methods for evaluating the information and performance of robotic systems are discussed.

2.1 Measurement and Information

When we refer to information for tasks we will be referring to information in the sense used in Measurement Science. To make our usage more precise we include the definitions below, following Sydenham et al.'s "Introduction to Measurement Science" [64]. Measurement only occurs when a measurement system interacts with the physical world. Properties of physical objects or events to be measured are assumed to have a 'true value' which can never actually be known, because there is no perfect instrument to determine it.

Definition 2.1 measurement: extracting information from objects or events in the physical world.

A signal passes from the physical world through the measuring system and is recorded in a numerical form as 'data'.

Definition 2.2 calibration: compares data with some agreed measurement standard to assign them common meaning or symbolism.

Calibration allows data to be referred to standard units so that their meaning can be used and shared, effectively converting it to information.

Definition 2.3 information: the representation of physical phenomena via standardized symbolism.

'Knowledge', on the other hand, is some interpretation of an information set. This notion of information is distinct from that used in the field of information theory. Here we are dealing with measured properties of the physical world compared to a standard unit; in information theory the negative entropy value called information is basically a quantification of the expected reduction in uncertainty obtained from an observed signal [54].

In designing a measurement system we must first determine the properties of interest, or 'referents'. In order to implement such a system, referents must be instantiated by 'measurands', or properties that can actually be measured by existing equipment. For example, we cannot directly measure the referent 'quality', but we can measure properties of an object which determine our perception of quality. Measuring instruments have various properties used to describe the measurements they make.

Definition 2.4 discrimination: the magnitude of changes in a measured quantity which can be observed by the instrument.

Definition 2.5 repeatability: the difference between measured values of the same world value in the short term.

Definition 2.6 reproducibility: the difference between measured values of the same world value over the long term.

These three properties determine the precision of the measurement device, although precision is not generally assigned some overall numeric value.

Definition 2.7 accuracy: the difference between the true value of a measured quantity and the measurement returned by the measuring device.

Accuracy of an instrument is determined by comparison of the instrument's measurements with declared reference values, or calibration. Measurements can further be classified by whether they are direct, indirect or combined [52]. Direct measurements are read directly from measuring instruments. Indirect measurements assume the quantity to be measured is a function of several arguments which in turn can be measured. Combined and simultaneous measurements are similar in that they determine the quantity of interest by solving a system of equations where the coefficients and terms are measured values. Simultaneous measurements usually involve quantities of different kinds, and the equations solved reflect relationships in nature. Combined measurements involve quantities of the same kind, and equations based on arbitrary relationships between them.
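One simple way to turn these definitions into numbers is sketched below: repeatability is taken as the short-term spread within a run of readings, reproducibility as the spread of run means over the long term, and accuracy as the offset from a declared reference value. The readings are invented for illustration.

```python
import numpy as np

reference = 100.0   # declared reference value (the calibration standard), arbitrary units

# Invented readings: several short-term runs taken on different days.
runs = [
    np.array([100.2, 100.1, 100.3, 100.2]),   # day 1
    np.array([ 99.7,  99.8,  99.6,  99.7]),   # day 2
    np.array([100.4, 100.5, 100.3, 100.4]),   # day 3
]

repeatability   = np.mean([r.std(ddof=1) for r in runs])        # short-term spread
reproducibility = np.std([r.mean() for r in runs], ddof=1)      # long-term spread of run means
accuracy_error  = abs(np.concatenate(runs).mean() - reference)  # offset from the standard

print(f"repeatability   ~ {repeatability:.3f}")
print(f"reproducibility ~ {reproducibility:.3f}")
print(f"accuracy error  ~ {accuracy_error:.3f}")
```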
2.2 Task Level Parameterizations

Tarantola [66] describes the investigation of physical systems as a cyclical process roughly divided into three steps: parameterization, prediction via the forward problem, and system identification via the inverse problem. Parameterization attempts to describe the system with a minimal set of model parameters. In studying the forward problem, the rules or physical laws which govern the system are used to predict values for observable parameters or measurements. For the inverse problem a collection of measurements is used to determine the true model parameters. If we view the investigation of robotic tasks in a similar light, the search for minimal fundamental descriptions is a relatively new phenomenon [14, 22, 18]. In the past, individual robotic systems for various tasks have been developed in a relatively ad hoc manner, and little effort has been made to compare and evaluate what underlies them.

For physical systems the model parameterization M is not necessarily unique; it merely places a coordinate frame on the abstract underlying model space. Any point m = {m_1, ..., m_n} in M represents an instantiation of the parameters, or a model. The set of observable parameters chosen is not unique either. We can imagine a space of all possible sensor readings for all possible instruments, where a particular choice of sensors and measurements defines a data space D of observable parameters, d = {d_1, ..., d_m}. If we extend the physical system to include the sensors used, we can speak of the physical parameter space P = M x D.

In this thesis we view robotic tasks as characterized by this parameter space. Although elements of such a parameterization are not made explicit for most intelligent robotic systems, this is a general and potentially useful way of describing them. Depending on the level of detail we want to model, parameters could range from low level system values such as camera aperture to high level constructs such as object position. In particular, however, we are interested in the task level parameters which determine system performance, as suggested by the work on task level learning by Aboaf et al. [2]. In the case of sensing for robotic systems we seek to infer some task related aspect of world state based on our model and observed sensor measurements:

    m̂ = f(p̂),   p̂ ∈ P, m̂ ∈ M        (2.1)

Our observation is that the task level parameters used by most implemented systems are generally referred to as features. Parameterizations in the form of feature based representations vary widely, and we have little sense of the minimality or optimality achieved by existing systems. Generally, however, the operation f involves a search through M for the best explanation of d. A task can then be described as a trajectory through the state/model space inferred from feature observations. This perspective is not new with respect to control theory, but it is a useful way to think of abstracted task level systems. In the case of our example from chapter 1, a set of image disparities I and the calibrated camera model C are used to determine the best estimated part position p to explain the observed disparity values. In other words, for C, I ∈ P, determine the best p ∈ M, where M is the set of possible part states.
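Read as a recipe, eq. 2.1 is a search problem: choose the model in M that best explains the observed data. The sketch below does this for a one-dimensional model parameter with an assumed linear forward model and invented readings; it illustrates the search itself, not the localization method used later in the thesis.

```python
import numpy as np

def forward(m, calibration_scale=2.0):
    """Forward model: predicted measurement for model parameter m.
    The linear form and scale factor are placeholder assumptions."""
    return calibration_scale * m

observed = np.array([4.1, 3.9, 4.2])      # d: invented sensor readings
candidates = np.linspace(0.0, 5.0, 501)   # a discretized model space M

def misfit(m):
    # Squared error between predicted and observed measurements.
    return np.sum((forward(m) - observed) ** 2)

m_hat = min(candidates, key=misfit)       # the best explanation of the data
print("estimated model parameter:", round(float(m_hat), 3))
```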
Figure 2.1: Basic dynamic system diagram.

2.3 State Spaces, Observability and Information for Tasks

When designing a robotic system to solve some task we have to decide what must be modelled, sensed or inferred to execute the task. Ideas for determining "what to sense" range from well known systems theory notions of observability of the state space, to high level planners which run wholly without reference to sensor observations. Obviously what the system can "see" limits what it can "do", and what tasks the system is required to perform should therefore drive what it is designed to perceive. In other words, is the subspace generated by the agent's perceptions adequate for the task?

The most precise notion of "what to sense" is that of observability from linear systems theory. It is defined for a system such as that illustrated in figure 2.1. In the discrete case the system is described by the equations:

    x(n+1) = Φ x(n) + Γ u(n) + Γ_1 w(n)
    y(n)   = H x(n) + J u(n)

where x(n) is the system state at time n, u is the control input and w is a disturbance. Φ is the system matrix, Γ is the control input matrix, Γ_1 is the plant noise matrix, H is the output matrix and J is the plant direct transmission matrix. The information required for control tasks is defined by the structure of the model, which allows formal definitions of observability and reconstruction. The theory offers explicit tests to determine whether the current system state is observable (and the dual property of whether the system is controllable), meaning it can be determined by measuring some future output. Ignoring control and disturbances, we can see that

    y(0)   = H x(0)
    y(1)   = H x(1) = H Φ x(0)
    y(2)   = H x(2) = H Φ x(1) = H Φ^2 x(0)
    ...
    y(n-1) = H Φ^(n-1) x(0)

or, stacking the outputs,

    [ y(0)  ]   [ H         ]
    [ y(1)  ] = [ H Φ       ] x(0)
    [  ...  ]   [  ...      ]
    [ y(n-1)]   [ H Φ^(n-1) ]

The stacked matrix O = [H; HΦ; ...; HΦ^(n-1)] is called the observability matrix. For the system to be observable, O must be nonsingular [25]. Reconstruction is the problem of predicting the state based on previous measurements [15].
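As a concrete version of this test, the sketch below builds the observability matrix for a small made-up (Φ, H) pair and checks that it has full rank, the general form of the nonsingularity condition.

```python
import numpy as np

# Made-up discrete-time system: state transition Phi and output matrix H.
Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # only the first state is measured directly

n = Phi.shape[0]
O = np.vstack([H @ np.linalg.matrix_power(Phi, k) for k in range(n)])

observable = np.linalg.matrix_rank(O) == n
print("observability matrix:\n", O)
print("system state observable from outputs:", observable)
```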
Observability of system state captures the notion of being able to detect events relevant to the task, as well as referring them to the correct frame of reference, as implied by identifiability of system state. We will use the term observability to suggest this idea of being able to identify critical system states from the measurements available.

Figure 2.2 illustrates the idea that no measurement system is infinitely accurate: some set of world points will be mapped to each measurement value. At a higher level, a mobile robot in an office environment may not be able to distinguish one office door or corridor from another because they "look the same". These are sometimes referred to as perceptual equivalence classes [17], although generally real sensors will not induce a true partitioning on the space of world values. This idea of sets of world points indiscriminable to the system relates to the ideas of discrimination and accuracy in measurement. In general there is also a relationship between the dynamism of the task and environment and the complexity of the observable space required by the agent. This can be seen as a requirement for greater discrimination in time.

Figure 2.2: The process of measurement, and/or storing values digitally with finite precision, induces (possibly overlapping) indiscriminable sets [f_i] of world points which are all mapped to the single measurement f_i.

A basic premise of this thesis is that information for robotic tasks has three fundamental characteristics: we must determine what to sense, or the observable parameters; and we must achieve a specific level of precision and a certain level of accuracy to achieve the desired level of system performance.

2.3.1 Task Based Perception

The issue of what to sense has become an interesting counterpoint to previous work in computational sensing which attempted to reconstruct the world from sensor values. As more scientists work with robots which have to act in a dynamic environment, the idea of task directed sensing as a carefully allocated resource becomes critical. This in turn brings us back to issues of minimality and optimality for robotic systems. There are three key aspects to sensing for action: 1) determining what information or measurement is required for the robotic system to progress; 2) detectability: can your sensor get the required information at the required precision and required timeliness?; and 3) calibration: the reconciliation of reference frames for sensors and actuators. The first of these three relates to the notion of parameterizing the task model space, since deciding what-to-sense essentially determines what task related aspects of the world must be inferred to perform a task. Detectability affects the data space parameterization, and calibration establishes rules for f of eq. 2.1, which relate observations to world state.

The most common and successful integrated sensing and action systems are of course control systems. Unfortunately these successes are typically limited to finite, linear (or linearized) systems. For complex tasks it would be difficult or impossible to derive a useful dynamical model to which control theory might be applied. It is interesting to note that in control theory the system and environment are treated as a whole in the form of a single model composed of differential equations. Reactive systems have embraced the control model with the claim that it will be sufficient to combine simple control loops in complex environments to achieve complex behaviour [9]. Current wisdom, however, suggests that more complex systems will require more retained state and higher level prediction and planning. When such representations are added to the reactive agent, the notion of integrated sensing and action becomes less well defined. In early reactive systems it was not clear what drove selection of sensory processes, or how they were calibrated.

Firby points out that sensing is a scarce resource in robotic systems and should be applied judiciously to relevant aspects of the task at hand [24].
He notes that if you use sensing at run time to distinguish among several world states, you will have to have an alternative plan for each possible state, creating a combinatorial explosion. He sees actuators as acting on the world and sensors as acting on the world model. He does not offer specific rules for what task related observations may be pertinent; these decisions are left to the planner. He introduces the notion that sensed values become obsolete with time. Essentially his planner senses to fill in values of which it is not certain, in order to select which plans to apply.

Bajcsy's work on active perception [5] also pointed out the need to direct one's sensing effort toward a particular perceptual goal. Generally, active perception is the study of modelling and control for perception. Specifically, it emphasizes intelligent data acquisition using feedback of errors and parameters from the current scene or percept. It requires a good understanding of the sensory devices, the algorithms and the goal of perception.

Recently the idea of the virtual sensor has become popular. A virtual sensor describes what we would like to be able to observe during task execution, rather than the real sensor readings available [34, 17], and thus answers the question of what to sense. Virtual sensor readings are constructed from the available real sensor values. This idea has been characterized in more detail in Erdmann's action-based sensor design [21, 22]. Erdmann recommends determining what abstract information is required for the task, then implementing a physical sensor to provide it. His method for selecting the necessary abstract sensors is to first design an action sequence to achieve the task, then define a progress measure for distance to completion. Progress regions in the state space are then defined as regions where particular actions make progress. The abstract sensor then observes the progress region containing the state of the system, and hence recognizes the associated action which will make the system progress toward the goal. "Minimal information" is associated with the progress regions of the action plans which reach the goal quickest.
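A toy sketch of the action-based sensor idea: a one-dimensional state space is divided into progress regions, and the abstract sensor reports only which region contains the current state, which is enough to select an action that makes progress toward the goal. The regions and actions are invented for illustration and are not Erdmann's construction in detail.

```python
# Toy 1-D illustration of an abstract (action-based) sensor: the sensor need only
# report which progress region the state lies in, not the state itself.
GOAL = 0.0

# (test, action) pairs: each region is where its action reduces distance to the goal.
PROGRESS_REGIONS = [
    (lambda x: x > 0.5,       "move_left"),
    (lambda x: x < -0.5,      "move_right"),
    (lambda x: abs(x) <= 0.5, "fine_centering"),
]

def abstract_sensor(state):
    """Return the index of the progress region containing the state."""
    for i, (test, _) in enumerate(PROGRESS_REGIONS):
        if test(state):
            return i
    raise ValueError("state outside modelled regions")

def select_action(state):
    return PROGRESS_REGIONS[abstract_sensor(state)][1]

for x in (2.3, -1.7, 0.2):
    print(f"state {x:+.1f} -> region {abstract_sensor(x)} -> action {select_action(x)}")
```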
It actively looks for viewpoints from which obser-  Chapter 2. Information for Robotic Tasks  23  vations will maximize the accuracy of its internal model. The method seeks to improve its representation by making additional observations. It uses a gaze planning strategy which selects observations to reduce model uncertainty, in other words to improve regions where predictions are least reliable. This is a promising method for exploring an environment, but for more specific tasks what are the key uncertainties which must be reduced by sensing? Sensor planning for scene reconstruction seems mainly to deal with the problem of what to sense for a task requiring a dense world model, e.g. obstacle avoidance may require the most information along a planned path. The sensor planning methods used for detecting features of objects deal mainly with designing sensors that provide the detectability required by the task. They require that the features be visible, in focus and projected at a given size in the image. Generally sensor configurations can be planned offline since objects are often assumed to arrive in known position. Various approaches are possible including generate-and-test search in a discretized space of sensor configurations, or synthesis methods where task constraints are solved analytically for valid parameter values. The large number of parameters necessary to fix optical and geometric configurations of sensors and illuminators, can cause these search problems to become huge, requiring various assumptions to make them viable. The notion of planning in recognizable sets rather than reachable sets is an important one for task directed sensing. Obviously if our knowledge of the world comes from our sensors, our ability to plan in areas we cannot recognize or distinguish may be useless. This idea is echoed in the ideas of Perceptual Equivalence Classes which lump together parts of the world which "look the same" i.e. cannot be distinguished by the sensors available [17, 39] . These theoretical ideas are demonstrated by Mataric's robot which learns its world representation by accumulating sensor readings classed as right walls, left walls, corridors and irregular [43]. These features are used to construct landmarks in a graph representing the environment. Once constructed, the graph can be used to navigate in and recognize mapped regions of the world. Planning in recognizable sets versus the virtual sensor method offer dual approaches to task directed sensing, where one designs plans based on sensor driven representations, and the other designs sensors  Chapter  2.  Information  for Robotic  Tasks  24  based on plan requirements. This reflects the tradeoff between detectability and whatto-sense since the ability of our sensors limits what we sense. This tradeoff has yet to be studied or fully understood. Landmark-based navigation is an interesting demonstration of task directed sensing, where relatively sparse "beacons" distinguish regions of the workspace allowing the robot to localize itself in some internal map. Lazanas and Latombe formalize robot motion planning with landmarks and propose an algorithm which is complete and of polynomial order [35]. They define landmarks as one or more physical features of the workspace which the robot can identify with perfect accuracy within some limited field of influence. This allows perfect control and sensing in the neighbourhood of landmarks. The method's obstacle-free environment and perfect landmark identification are strong assumptions. 
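The flavour of such sensor-defined maps can be sketched as follows; the (left, right) range readings and classification thresholds are invented and are not Mataric's actual features:

    # Sketch of landmark construction from classed sensor readings: runs of
    # readings with the same class are merged into one landmark node, and
    # consecutive landmarks are linked to form a graph used later for
    # recognizing mapped regions.  The classifier below is a stand-in.
    def classify(reading):
        left, right = reading              # hypothetical (left, right) ranges
        if left < 0.5 and right < 0.5:
            return "corridor"
        if left < 0.5:
            return "left-wall"
        if right < 0.5:
            return "right-wall"
        return "irregular"

    def build_landmark_graph(readings):
        nodes, edges = [], []
        for r in readings:
            label = classify(r)
            if not nodes or nodes[-1] != label:     # merge repeated classes
                if nodes:
                    edges.append((len(nodes) - 1, len(nodes)))
                nodes.append(label)
        return nodes, edges

    print(build_landmark_graph([(0.3, 2.0), (0.4, 1.8), (0.3, 0.4), (2.0, 0.3)]))

Lazanas and Latombe's landmark-based planner takes this reliance on distinguished workspace regions further.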
They make the interesting claim that engineering the environment by constructing landmarks allows tractability in planning. It is unclear what has been traded for planning tractability however, since we have no notion of the complexity of engineering or calibration of the environment. Evidently some of the planning and representational complexity has been encoded in the environment. Information Invariants Donald's work on information invariants attempts to formally address information for robotic tasks and touches on a number of issues in robotic task complexity [18]. Although still somewhat speculative, his goal is to provide a system for comparison of sensoricomputational systems, transformation of one system into another, determining whether a system specification can be satisfied by a particular sensor system, and formal reduction of one system to another. He proposes that a measure of information invariants may imply a measure of hardness or difficulty and hence a notion of lower bounds in robotics. He begins by discussing a manipulation task by two cooperating robots and describes an instance of communication being traded off with sensing. This rather anecdotal comparison prompts him to assign the sensor operations a value equal to the number of bits communicated. Similarly, following tasks require modifications in the path of the leader,  Chapter  2.  Information  for Robotic  2 5  Tasks  when the follower uses only sensory information versus communication. The added travel distance can also be compared to bits communicated. Comparing different parameterizations of the same task to examine fundamental task information is part of the motivation for the comparison framework presented in this thesis. In Donald's framework sensor systems are represented by graphs with components such as transmitters and receivers forming the vertex set, and connections among them as edges. Sensors are said to be  situated  if their vertices are embedded by a mapping into  a configuration space. The fixing of external state or calibration, is an important issue as seen in planning with landmarks. Donald attempts to quantify  calibration  complexity  by the number of components of a sensor system whose relative positions are fixed. He proposes ordering sensor systems on the amount of information gleaned from calibration or external state. The transformations and notation Donald uses are quite formal, but it is unclear that the embedding of a sensor graph in an environment is an adequate description of the power or information it represents. Calibration is certainty important, but there are more aspects to sensors than the relative location of components. 2.3.2  Features and Information  The term "feature" is pervasive in many areas of science and particularly in sensing and robotic manipulation, but it takes on somewhat different meanings in its various uses. Features always summarize and encode key physical properties for the problem at hand. Our premise is that features, as they exist in the literature, are measurements or combinations of measurements acting as task level parameters. In biological systems we read of Treisman's preattentive features which are recorded preattentively in parallel and thus pop-out more noticeably than others [69]. Lederman and Klatzky [36] argue that touch (haptics) can be effectively used for perception and recognition, in the form of purposive hand movements.  
The notion of perception as  part of an action system is echoed in work on 'perception-action systems' hypothesized to be holdovers of evolution. Jeannerod proposed visuomotor channels where a visual  Chapter 2. Information for Robotic Tasks  26  stimulus activates a motor program [30]. In multimovement systems the value of sensory information varies with the motor task being performed and the sensory and motor modalities in use [1].  All of these suggest a close relationship between percept and  action. Clearly the features extracted are those which have meaningful covariance with the need to act [38], and hence reflect the task-based parameterization we suggest. In computational sensing features have more often been chosen to meet some intermediate goal of a general purpose sensor system than the outcome of an overall robotic system, although of course these are not exclusive. Early in the history of computational vision, pattern recognition classified images using a number of measurements made on the image or event, this vector of measurements or features denned a point in a feature space. Image patterns belonging to the same class were assumed to cluster in the same region of feature space, while distinct classes would be separated [45]. In the case of locating 3-D objects in 2-D images simple measurements tend to be dependent on lighting and on the object's pose, so classification without reconstructing object properties will not work well [29]. Selecting or constructing suitable features on which to perform classification is the key to pattern recognition. Features generally provide a reduction in dimensionality of the problem, and if they are well chosen the information which is discarded is irrelevant to the problem to be solved. Principal component analysis was used to transform the bases of the measurements into a space where combinations of features provided better discrimination among classes. Recently this notion has resurfaced in modal (as in mechanical modes of a system) correspondence and recognition techniques. Turk's eigenfaces [71] treat whole N x N images of human faces as huge vectors of length N and basically use principal 2  component analysis to transform images into points in a feature space. Another modal approach is Shapiro and Brady's feature correspondence method [59]. They determine correspondences between a pair of images while retaining a shape metric by computing the eigenvectors of symmetric matrices Hi, which contain the Gaussian weighted distances between features within each image i. Matrices of the eigenvectors for each image are compared to determine which features correspond.  Chapter  2.  Information  for Robotic  Tasks  27  Recently the notion of selecting features which act as indices into a set of models or hypothesized image events has gained popularity [62]. Ideally each feature would generate one (correct) hypothesis as to the contents of the image, much in the way feature vectors were meant to classify images. In our parameterization model this would make f(p) an indexing operation which would give us a (hopefully small) set of possible task states m;. From a more theoretical point of view Richards and Jepson present definitions for robust visual features which include both likelihood and context in robust statistical inference from images [53]. 
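Shapiro and Brady's construction is compact enough to sketch. The fragment below builds the Gaussian weighted proximity matrix for each point set and pairs features whose modal coordinates (rows of the eigenvector matrices) agree; the sign correction and mode truncation the full method requires are omitted:

    # Rough sketch of modal feature correspondence: H_i holds Gaussian weighted
    # distances between features within image i, and features are matched by
    # comparing rows of the eigenvector matrices.  Sign correction and mode
    # truncation, needed in practice, are left out of this illustration.
    import numpy as np

    def modal_coordinates(points, sigma=1.0):
        pts = np.asarray(points, dtype=float)
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
        H = np.exp(-d2 / (2.0 * sigma ** 2))       # symmetric proximity matrix
        _, vecs = np.linalg.eigh(H)                # eigenvectors of H
        return vecs                                # row j = modal coords of feature j

    def correspondences(points1, points2, sigma=1.0):
        v1 = modal_coordinates(points1, sigma)
        v2 = modal_coordinates(points2, sigma)
        return [int(np.argmin(((v2 - row) ** 2).sum(axis=1))) for row in v1]

    print(correspondences([(0, 0), (1, 0), (0, 2)],
                          [(0.1, 0.0), (1.1, 0.1), (0.0, 2.1)]))

Richards and Jepson ask the prior question of which features are worth extracting at all.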
They base their arguments on the idea that good features reflect non-accidental configurations [37], which are typical of the world and yet informative, again preferring features which generate relatively few but likely hypotheses about the world state. Feddema et al. describe selection criteria for image features to be used directly as the feedback signal in a resolved motion rate control scheme which allows an eye-in-hand robot to track a workpiece as it moves [23]. There are two aspects to feature selection: image recognition and control, so a set of quantitative measures of various extraction and control issues is proposed. Feature extraction measures include: uniqueness, noise resistance, expense and feature set completeness. Control measures include: observability, controllability and control's sensitivity to noise. A similar system is described by Skaar et al. based on the premise that some tasks can be achieved by producing particular relationships among visual cues in camera space, thereby effecting the task in physical space [60]. kinematics  The goal is to learn  camera  space  such that a manipulator can be driven just as if real physical space kinematics  were known. To this end the relationship between joint angles and visual cues must be established, methods of predicting the motion of cues in the image, and methods for driving the cues to required configurations are constructed. Robotics literature has its own history of features, which includes some sensory events which index world state as with Eberman's contact features [20]. Contact features are essentially peaks in a temporal sequence of force measurements representing impacts with the environment. Given a model of our motion and where we expect to be, likely world  Chapter 2. Information for Robotic Tasks  28  positions for the impact can be determined. Systems employing force sensors within control loops to generate passive behaviour such as compliance, can still be referred to as sensorless. Kinematic features also play an important role in fine motion activities such as pushing [41]. Kinematic pairs define how objects can move relative to each other. The motion of a pushed object is also determined by the coefficient of friction between the object and the pusher, and the centre of mass of the object. All of these values parameterize the state transitions in the pushing system, and hence allow us to infer world state after a push is executed. Pushing operations underly much of the work on sensorless manipulation [27, 51]. Brost [11] and Caine [13] use a construct called a "contact superset" to enumerate all possible kinematic interactions between a pair of polygons in contact. Caine uses the contact superset with friction and support relations to design features for vibratory bowl feeders. These features are essentially shapes installed in the side of the bowl which cause correctly oriented parts to pass and incorrectly oriented parts to fall back into the bin. What does the work described above tell us about features, and hence parameterizations, for robotic tasks? We know that in biological systems as well as many robotic systems the information and representations used depend on the task to be performed, whether that is navigation, manipulation or survival in a hostile environment. The features we require reduce the volume of information to be processed, while capturing something fundamental to acting in the environment. 
For the robot programmer we need to determine what features are useful for which types of tasks, and potentially what are the minimal set of features and minimal precision of measurement to consider to achieve this task.  2.4 2.4.1  Evaluating Task Performance Complexity of Robotic Tasks  A number of researchers have made an effort to analyse the quintessential aspects of robotic tasks. From a computational complexity point of view several aspects of robotics  Chapter 2. Information for Robotic Tasks  29  tasks have been explored. Natarajan and others [48, 61] have attempted to measure the complexity of assembly tasks and workpieces in  or independent motions. Navi-  hands  gational issues are addressed in the context of graph searching, by determining which augmentations to standard automata such as pebbles, counters or additional automata, allow them to search mazes [6]. Others have studied the complexity of configuration space fine motion planning. Brafman et al. [8] have studied sound and complete termination conditions with uncertainty in control and sensing. Andeen [3] proposes a "Science of Assembly" which measures the difficulty, complexity, uncertainty and variance of the task. Complexity is a generalized displacement determined by the number of robot primitives required to perform a task. Difficulty entails the weight and size of the parts and the forces required to mate them, and is measured by the time required to perform the task. As previously described, for the job of designing and describing information sources, recent work by Donald [18] seeks to study  information  invariants  or information necessary  to perform a task, as a measure of task difficulty. He develops methods to compare and reduce one sensor system to another, when both generate the necessary information. Mackworth [40] proposes a set of  adequacy  criteria  for  visual  knowledge  representation  which outlines some important requirements for sensor based information sources. There have also recently been calls to implement robotic tasks with only simple sensors and actuators rather than the vision systems and dexterous hands of the past [14], although the method for distributing complex tasks over simple sensor and actuator components appears to be lacking. In a 1984 paper Sanderson proposed an information theoretic measure for assembly tasks he called  parts  entropy  [57]. The measure is based on uncertainty in part position  and orientation and reflects the view that assembly is the convergence of parts as they become more and more precisely oriented until they reach mating tolerances. The basic premise is that as sensing and manipulation actions are applied to the parts of an assembly, the set of possible states for each part is reduced. For some probability distribution on states, the entropy of the parts is correspondingly reduced. For assembly problems  Chapter 2. Information for Robotic Tasks  30  involving several parts the joint entropy of the set of parts is used. The goal of the task is assumed to be to decrease the total entropy for the set. Sanderson proposes that the changing entropy for each step of the task be plotted against time to compare the entropy reduction of various task strategies. This idea, although neglected for some time has recently gained popularity [14], however part entropy speaks only to the precision of knowledge about a part's state. It does not relate information to success in achieving task goals.  
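Parts entropy is easy to state concretely. The sketch below computes the entropy of an invented distribution over a part's resting orientations before and after a manipulation step; it is the reduction of this quantity over the steps of a task that Sanderson proposes plotting:

    # Sketch of Sanderson's parts entropy for a single part: the entropy of a
    # discrete distribution over its possible orientations.  The probabilities
    # below are invented; a successful manipulation step concentrates the
    # distribution and so lowers the entropy.
    import math

    def parts_entropy(orientation_probs):
        return -sum(p * math.log2(p) for p in orientation_probs if p > 0.0)

    before = [0.25, 0.25, 0.25, 0.25]   # four equally likely orientations
    after = [0.85, 0.05, 0.05, 0.05]    # after a squeeze, one dominates
    print(parts_entropy(before), parts_entropy(after))   # 2.0 -> about 0.85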
2.4.2  Part Orienting  Given Donald's premise that we can reduce or compare resources of one task implementation to another to study information invariants, we will examine and compare successful implementations of a particular task. The task we have chosen is the part orienting task popular in the literature on sensorless manipulation. Logically, if, as claimed, this work has successfully moved task information entirely to the domain of models [42], it may give us insight into the information actually in use. A part can be oriented by a number of different devices. Perhaps because of the low dimensionality (resolving only the angle of the part) and hence limited state space of the task, part orienting has become the focus of considerable study. Goldberg and Mason [27] describe what they call Bayesian Grasping, using a frictionless gripper. They develop a diameter function for each 2-D object, which plots the diameter of the object against its orientation relative to the gripper. When squeezed from any initial orientation the object will tend to rotate to a local minimum in its diameter function. The object orientations consistent with these minima will have some predictable probability after a squeeze has been executed. Obviously these squeeze operations reduce the uncertainty regarding object orientation, and a sequence of oriented squeezes can be used to place the object in a desired orientation. Sequences of squeezes are computed by starting with an initial grasp for an unknown initial orientation, and searching a tree of possible sequences from the set of possible actions, until sufficient probability for some orientation is reached. The authors propose  Chapter 2. Information for Robotic Tasks  31  ranking various plans based on cost functions such as distance from goal or time (steps) required. Brost [10] also studied the squeeze grasp but addressed the stability of push operations over the operation space as well as including uncertainties in object position, friction and gripper orientation. He developed a sequence of "maps" indicating the behaviour of the part in various regions of the parameter space. Another part orienting strategy formalized by Peshkin and Sanderson [51] is that of parts on a conveyor belt pushed past a series of specially designed fences of various orientations and lengths (see Chapter 1, figure 1.2).  The passive fences with moving  conveyor act in a manner similar to an active pusher operating on a statically supported object. As parts pass the last fence they lie in a single desired orientation. A common industrial part manipulator is the vibratory bowl feeder. The bowl feeder contains a collection of randomly oriented parts which as a result of the vibration, move up a track on the interior of the bowl. The physical properties of the parts cause them to be oriented in one of a number of stable resting positions. At the top of the track parts encounter a series of "features" in the bowl wall and track which cause them to be either reoriented or rejected and dropped back into the bowl [13]. The interactions of these features with parts has been formalized in configuration space by Caine [13] in the form of contact supersets, which capture all interactions between the corner and surface features of the part and the bowl. By superimposing support and friction information on these configuration space constructs, Caine can determine bowl features which allow certain part orientations to pass by, while others are rejected. 
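Returning to the squeeze grasp, the mechanics the Bayesian grasping planner relies on can be sketched directly: the diameter function of a polygon gives the jaw separation at each squeeze angle, and a frictionless squeeze rotates the part downhill to a local minimum of that function. The rectangle and the one degree step below are purely illustrative:

    # Sketch of the diameter function and a single squeeze: the part rotates to
    # the nearest local minimum of its diameter function.  The part and the
    # discretization are invented for illustration.
    import math

    def diameter(vertices, theta):
        """Jaw separation when squeezing along direction theta."""
        d = (math.cos(theta), math.sin(theta))
        proj = [vx * d[0] + vy * d[1] for vx, vy in vertices]
        return max(proj) - min(proj)

    def squeeze(vertices, theta, step=math.radians(1.0)):
        """Rotate toward the nearest local minimum of the diameter function."""
        while True:
            down = min((theta - step, theta + step),
                       key=lambda t: diameter(vertices, t))
            if diameter(vertices, down) >= diameter(vertices, theta):
                return theta % math.pi              # local minimum reached
            theta = down

    rectangle = [(-2, -1), (2, -1), (2, 1), (-2, 1)]
    print(math.degrees(squeeze(rectangle, math.radians(30))))   # about 90 degrees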
Another, more general, manipulation system which is capable of part orienting is a robot arm with one or more cameras. A part is localized by the vision system so that the arm can directly orient it using simple parameterized pick and place plan. Obviously the task goal is the same for each of these proposed implementations, but the nature of the sensing, models and manipulation differ significantly.  Clearly they  all entail physical or sensory localization of the part, and some degree of manipulation.  Chapter 2. Information for Robotic Tasks  32  Donald's work suggests we can learn something fundamental about the task itself through comparing implementations. His reductions, however, tend to apply to sensors that return only a few bits of information. We need methods to make meaningful comparisons of disparate systems with potentially rich sensor information.  2.5  Summary  Information for robotic tasks takes the form of measurements stored as models or made online. Tasks are parameterized by the information or measurements available to the system. The three basic characteristics of information for robotic tasks are: which parameters must be observed, how precisely they must be measured, and how accurately they must be measured. This idea has been demonstrated by systems from control theory through sensor planning and task planning in the recognizable sets. Researchers have also attempted to capture the effects of measurement precision with descriptions of perceptual equivalence classes. In this chapter we have described the wide variety of features or task level measurements used in biology, sensing and robotics, and efforts by authors to characterize good or task related features. We have described a few of the formal methods for describing robotic tasks including information invariants and parts entropy. None seems precisely suited to formalizing task information in the paradigm we have laid out in this chapter. One way to look for an appropriate model is to experiment with implemented tasks and find methods to explicate their information requirements. In the next chapter we will present such an empirical framework for discerning task information. The implemented task is the part orienting task we have described. The advantage of this simple manipulation task is that it can be implemented in very different ways. This gives us the opportunity to compare our evaluations of the information used in different implementations and thus examine the information required by the underlying or ideal task.  Chapter 3 Evaluating Task Parameterizations  Any task implementation is merely an approximation, since we do not know the ideal underlying task function, and because we construct implementations based on values sampled from the physical world. We want to identify the information space of a robotic task implementation in order to optimize the resources we devote to measurement and modelling. To achieve this we propose a framework where we first discover which parameters are statistically significant to task performance, and then determine the precision and accuracy required in those parameters to minimize cost while achieving task performance goals. The view of robotic systems we have argued for in previous chapters is one where task implementations are parameterized by measurements. These measurements are either performed  a priori  and stored in the form of world or object models, or acquired  as the task progresses. 
In what follows we will include other parameters which affect measurement quality, such as image window size in our stereo camera implementation, since it is the task information which interests us; these are parameters of the measurement process rather than measurements per se. Figure 3.1 depicts this general view of a complex task implementation treated as a black box driven by input parameters to produce task outcomes.

Figure 3.1: The details of the many components of a task implementation may be difficult and time consuming to analyse. We treat each task implementation as a black box into which we feed a number of measurements and other parameters to generate the performance level, or task outcome.

This looks like a system identification problem, but we are less interested in the details of the internal workings of the system than in how the quality of the parameters affects its performance. From an empiricist's point of view we have a number of tools to investigate such systems, and it is just such an experimental exploration which we describe here. The task outcomes we will use are actually binomial probabilities of success, estimated in the usual way, given an evaluation function S returning 1 for success and 0 for failure for the system Φ(f). The probability of success (p_s) over n trials is estimated as follows:

\hat{p}_s = \frac{1}{n} \sum_{i=1}^{n} S_i .   (3.1)

We will also, in some cases, use a 'distance to goal' value where it is illustrative of how the system is failing the task requirements. In many of our experiments we are also concerned with the standard deviation of our outcome, σ_p, which is known to be [44, p.297]

\sigma_p^2 = \frac{pq}{n} = \frac{p(1-p)}{n}

(for the binomial distribution q = 1 − p). In general we will not know the true value of p, and so will estimate σ_p using

\hat{\sigma}_p = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} .   (3.2)

We have defined the three critical aspects of task information: observability, precision and calibration/accuracy. In the following sections we will introduce a framework combining various techniques to analyse and determine how these properties affect task performance, and therefore how we can tune our measurement processes to improve task performance.

3.1  Identifying the Task Subspace

When we refer to observability we mean the ability to observe or infer the system states of interest from available measurements. Of course we would like to find a minimal parameter set for any system, but without knowing the system model we cannot directly determine the minimal parameterization. In fact intelligent robotic tasks are not often subjected to analyses aimed at minimization or optimization for a particular implementation. We must begin with some empirical exploration of the response of the system to variations in its parameters. A number of methods are presented in the following sections, including factorial experiments, sensitivity analysis and ε-complexity. In the experiments in chapter 6 we will demonstrate the value of each of these approaches. As we will see, all of these methods perform a type of sensitivity analysis, varying parameters under differing criteria and evaluating performance in a number of ways. Later in this chapter we propose an evaluation process which is a hybrid of these methods and which is specifically designed for our stated goals.
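Such an exploration treats the implementation exactly as in figure 3.1: something we can only sample. A minimal sketch, in which run_task stands in for an instrumented trial of a real system and its dependence on window size is invented:

    # Black-box exploration sketch: estimate the success probability (eq. 3.1)
    # and its standard deviation (eq. 3.2) at each parameter setting we try.
    # run_task is a hypothetical stand-in for a real instrumented trial.
    import math, random

    def run_task(window_size):
        return 1 if random.random() < min(1.0, window_size / 40.0) else 0

    def estimate_success(parameter, trials=50):
        outcomes = [run_task(parameter) for _ in range(trials)]
        p_hat = sum(outcomes) / trials                          # equation 3.1
        sigma_hat = math.sqrt(p_hat * (1.0 - p_hat) / trials)   # equation 3.2
        return p_hat, sigma_hat

    for window in (10, 20, 40):
        print(window, estimate_success(window))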
3.1.1  Design of Experiments

Statistical methods for the design of experiments are intended to counteract variability in observations and reduce experimental error. Generally we have one or more 'factors', or independently controlled experimental variables, which can be set to various intensities or levels. Factors may be 'qualitative', such as different edgel grouping methods, or 'quantitative', such as the value of σ used by a particular edge detector. A particular combination of factor levels is referred to as a 'treatment'. The outcomes of the system under study are called 'responses' or observations [44].

Experiments can be structured in various ways, determined by the selection of the objects which are to be measured under various experimental conditions, or 'experimental units', and the way treatments are applied to them. A completely randomized design involves selecting independent random samples for comparison from a set of p populations to which differing treatments have been applied. Noise created by nuisance variables not under our control can be nullified by making the experimental units as homogeneous as possible. Randomized block designs for experiments with p treatments consist of b blocks, each containing p experimental units to which the treatments are applied in a randomly permuted order. Other block designs include the Latin Square, which allows blocking in two directions by requiring a p × p array of experimental units. Each row (or column) has a distinct permutation of treatments from any other row (or column). The general formulation for such experiments for p treatments and b blocks is

Y_{ij} = \mu + \tau_i + \beta_j + \epsilon_{ij},   i = 1, 2, ..., p,  j = 1, 2, ..., b.   (3.3)

In other words the observation Y_{ij} is composed of its underlying mean μ plus the treatment effect τ_i plus the blocking effect β_j plus a normally distributed random error ε_{ij}. The null hypothesis for such experiments is that all τ_i are zero and all β_j are zero, implying that the mean for all treatment populations is the same. We can then use analysis of variance techniques to accept or reject these hypotheses [56].

Analysis of Variance

The experimental designs we will use depend on the fundamental methods of analysis of variance (ANOVA). These are used to determine which are the important factors, and how they affect the system outcomes and interact with each other. The variance of a sample is the average of the squared deviations of the individual measurements about the sample mean ȳ, or

s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (y_i - \bar{y})^2

for the sample y_1, y_2, ..., y_n. Analysis of variance divides the total sum of squared deviations, Total SS, into partitions attributed to the sums of squares for blocks (SSB), treatments (SST) and error (SSE):

Total SS = SSB + SST + SSE = \sum_{i=1}^{b} \sum_{j=1}^{p} (y_{ij} - \bar{y})^2 = \sum_{i=1}^{b} \sum_{j=1}^{p} y_{ij}^2 - CF

CF = \frac{(\text{total of all observations})^2}{n} = \frac{\left( \sum_{i=1}^{b} \sum_{j=1}^{p} y_{ij} \right)^2}{n}

SSB = p \sum_{i=1}^{b} (\bar{B}_i - \bar{y})^2 = \frac{\sum_{i=1}^{b} B_i^2}{p} - CF

SST = b \sum_{j=1}^{p} (\bar{T}_j - \bar{y})^2 = \frac{\sum_{j=1}^{p} T_j^2}{b} - CF

SSE = Total SS - SSB - SST
where CF is the correction factor for the mean, B_i and B̄_i are the total and average for block i, and T_j and T̄_j are the total and average for treatment j. If we assume, as above, that blocking and treatments have no effect on the response, then each part can be divided by an appropriate constant to generate an unbiased estimator s_e^2 of the variance of the experimental error. These 'sums of squares' (SS), divided by their degrees of freedom (see table 3.1 for the usual degrees of freedom (dof)), are referred to as 'mean squares' (MS), and are the estimates of σ_e^2 under the null hypothesis that the means of all block or all treatment populations are the same:

s_e^2 = MSE = \frac{SSE}{n - b - p + 1},   MST = \frac{SST}{p - 1},   MSB = \frac{SSB}{b - 1}.   (3.4)

We can think of this two way classification into blocks and treatments as a two factor experiment. The F test is normally used to compare variances s_1^2 and s_2^2, where the F value is defined as

F = \frac{s_1^2}{s_2^2}.

Since MST and MSB are only estimators of σ_e^2 given that all blocks or treatments have the same mean, these estimators will grow large if this is not the case. Thus we can test whether the treatment or block means are all equal, using the F test to reject unreasonably large values at type I error probability α. F values can be selected from standard tables according to the desired α and the numerator and denominator degrees of freedom:

F = \frac{MST}{MSE},   H_0: \mu_1 = \mu_2 = ... = \mu_p, for p treatments

F = \frac{MSB}{MSE},   H_0: \mu_1 = \mu_2 = ... = \mu_b, for b blocks

reject H_0 when F > F_\alpha.

Note that the F test with 1 dof in the numerator and n dof in the denominator is equivalent to the t test at n dof, although F_\alpha = t_\alpha^2 because of the structure of the test statistics.

Source       dof            Sum of Squares   Mean Squares        F
Blocks       b - 1          SSB              MSB = SSB/(b-1)     MSB/MSE
Treatments   p - 1          SST              MST = SST/(p-1)     MST/MSE
Error        n - b - p + 1  SSE              MSE
Total        n - 1          Total SS

Table 3.1: Standard analysis of variance table.

ANOVA data are generally tabulated in the form illustrated in table 3.1. These methods are well known and thus their description here is only cursory; for more detailed explanations the reader is directed to any introductory statistics text [44].

Factorial Experiments

Factorial experiments apply treatments in a structured manner to extract as much information about the effects and interactions of factors as possible. In the general case, for a complete factorial design we select some number of levels l_i for each of k factors and test all possible combinations of the levels of the various factors. Such an arrangement is called an l_1 × l_2 × ... × l_k factorial design. If l_1 = l_2 = ... = l_k = l then the design is referred to as an l^k symmetrical factorial experiment. Commonly 2^k experiments are used, where each of the k factors is tested at two levels; these are the most easily explained and we shall describe them in what follows [7, 63].

We are interested in testing the effect of each factor individually, or its main effect, and the interactions among factors, or interaction effects. In 2^k experiments each of the 2^k − 1 effects has one degree of freedom. These effects are associated with mutually orthogonal contrasts among the treatment responses. A 'contrast' is a combination of treatment yields, where the coefficients are ±1 and sum to zero. The main effect of a factor is computed from a set of observed responses as follows:

Main Effect = (average response while the factor is high) − (average response while the factor is low).

Various indexing schemes for determining how to combine factors to compute the desired contrasts exist. The usual method is to associate each treatment combination
with a binary string of length k, where a zero in the ith position implies the ith factor is at its low level in the current treatment and a one implies it is high. The main effect for any factor i, then, will be the average of all responses with a one in the ith index position minus the average of those with a zero in that position. For a 2^k experiment we have the set of trial indices formed by the k digit binary numbers from 0 to 2^k − 1. All observations are used to compute all effects. We can tabulate ±1 coefficients for these responses according to how they are combined to compute main and interaction effects. Coefficients for computing interaction effects can be generated by elementwise multiplication of the coefficients for the main effects of the factors involved in the interaction [16]. For example, for a 2^3 experiment for factors A, B and C, we can construct the table of coefficients shown in table 3.2.

response    1    c    b   bc    a   ac   ab  abc
A          -1   -1   -1   -1    1    1    1    1
B          -1   -1    1    1   -1   -1    1    1
C          -1    1   -1    1   -1    1   -1    1
AB          1    1   -1   -1   -1   -1    1    1
AC          1   -1    1   -1   -1    1   -1    1
BC          1   -1   -1    1    1   -1   -1    1
ABC        -1    1    1   -1    1   -1   -1    1

Table 3.2: Coefficients for computing effects for a 2^3 factorial experiment.

The treatment combinations are frequently indicated by sequences such as a, ab, bc, etc., where the lower case factor label appears only when the treatment includes the high level of that factor. If the factor is at its low level its label is absent. The 000 case where all factors are low is labelled 1. Upper case letters are used to label the main and interaction effects estimated from these experimental responses. Using table 3.2 we can compute the main effect for A,

A = \frac{1}{4}(a + ac + ab + abc - bc - b - c - 1),

or the interaction effect BC,

BC = \frac{1}{4}(1 - c - b + bc + a - ac - ab + abc).

The expressions are multiplied by 1/4 because the average while high (terms with positive coefficients) and the average while low (terms with negative coefficients) each contain four terms. For the interaction effect we can intuitively think of the effect of C while B is high minus the effect of C while B is low, as illustrated in the tree structure in figure 3.2. Carrying through the signs at the nodes we can see the expression is the same as that above.

Figure 3.2: The interaction effect BC is found by computing the effect of C (C_hi − C_lo) while B is high, minus the effect of C while B is low.

One mechanical way of generating the expression for the combination of observations for an effect X is to expand

X = \frac{1}{2^{k-1}} (a \pm 1)(b \pm 1)(c \pm 1) \cdots

where a '−' appears in a term if that factor is participating in the effect X and a '+' appears otherwise. Thus:

BC = \frac{1}{4}(a + 1)(b - 1)(c - 1) = \frac{1}{4}(abc + bc - ac - c - ab - b + a + 1).

Again the expression is the same. As with previously described experiments, we begin with the null hypothesis that the various factors and interactions between them have no effect on observed responses. Referring to equation 3.3, the treatments τ_i are the various treatments prescribed by the factor levels; for example A_hi versus A_lo is one partition of the observed samples. The sums of squares for the contrasts used to estimate effects have a χ² distribution with 1 dof (degrees of freedom are equal to one less than the number of factor levels, i.e.
equal to the number of comparisons available) for the 2 case, we therefore can construct an ANOVA k  table for our complete factorial example. Using the methods described previously we can determine whether our experiments provide sufficient evidence to reject the null hypothesis. Instead of just two factors, blocks and treatments, we now have all the factors and interactions of interest resulting from our applied treatment conditions [56, 55]. Every 'effect' described above is actually the difference between average observations for two treatment levels spanning the entire set of experimental observations. Just as blocks and treatments are different groupings of the same samples, the observations for the various treatment levels are combined differently for each factor and interaction, according to the coefficients described above.  As we saw in equation 3.4, MST  = ^f^-, where t  is the number of degrees of freedom for treatment, thus in table 3.3 each effect has MST =  = SST. Naturally, for larger experiments involving more factors we may  not wish to consider all the factors and interactions in our ANOVA, particularly when we see they have low significance with respect to the F test. To 'pool' a factor we simply combine its sum of squares with SSE, adding its degree of freedom to the error degrees of freedom. Thus to pool a factor A, we update MSE = (SSE + SSA)/(ed j + a<f/); of 0  0  course we must update our F values accordingly since each has MSE as its denominator. In this way we can identify which parameters, and interactions among parameters, make significant contributions to system outcomes. Fractional factorial experiments take advantage of the redundancy available in the set of observations to allow the experimenter to make fewer runs in the initial stages of his investigation. This is particularly important when the number of factors tested is large and hence the number of trials required by a complete design is huge. In this technique a subset of the factor/level combinations is used. Generally the selection of the subset is done such that higher order interactions are confounded or aliased to main effects or low order interactions, under the assumption that interactions among many factors are less likely to be significant [7, 16]. For example if we have 5 factors (A, B,C, D, E) the  Chapter  3.  Evaluating  Source Blocks Replicates Treatments A B C AB AC BC ABC  Error Total  Task  dof r  Sum of Squares  r  1 1 1 1 1 1 1  42  Parameterizations  Mean Squares  SSR  SSR  MSR  r  MSE  SS  SS  A  A  MSE SS  SS  R  B  MSE SS  SS  C  MSE S S  C  2 - 1=7 3  SSAB  ss  B  A C  MSE  SSRC  SSBC  MSE SS ,c  SSABC 3  A  MSE  SSAC  r(2 - 1) = 7r r2 - 1 = 8r - 1  F  AF  MSE  SSE  MSE  3  Table 3.3: An example of the ANOVA table constructed to analyse factor effects in a factorial experiment. total combination effect  ABCDE  is not likely to be significant compared to individual  factors or smaller combinations. The interactions which are confounded in determining the fraction of trials, are referred to as the identity group of interactions. The resolution of a fractional factorial design is the number R of factors involved in the smallest interaction in this identity group [16]. For example for factors  ABCDE  / = -ABC  we can select an identity relationship = -ADE  =  +BCDE  to generate a | | = | fractional factorial experiment. 
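The contrast arithmetic behind these designs is compact enough to sketch. The toy fragment below computes main and interaction effects for a complete 2^3 design by multiplying ±1 coefficients per factor, exactly as in table 3.2; the response values are invented:

    # Sketch of 2^k effect estimation: each effect is a contrast of the eight
    # responses divided by 2^(k-1).  Treatment labels follow the a, ab, bc
    # convention ("1" = all factors low); the responses are invented.
    from itertools import product

    FACTORS = "abc"

    def effect(responses, which):
        """Estimate a main or interaction effect, e.g. which = "a" or "bc"."""
        total = 0.0
        for bits in product((0, 1), repeat=len(FACTORS)):
            label = "".join(f for f, b in zip(FACTORS, bits) if b) or "1"
            sign = 1
            for f in which:                 # product of +/-1 coefficients
                sign *= 1 if bits[FACTORS.index(f)] else -1
            total += sign * responses[label]
        return total / 2 ** (len(FACTORS) - 1)

    runs = {"1": 60, "a": 72, "b": 61, "ab": 73,
            "c": 52, "ac": 65, "bc": 54, "abc": 67}
    print(effect(runs, "a"), effect(runs, "c"), effect(runs, "bc"))   # 12.5 -7.0 0.5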
We can extend the idea of multiplying the signed coefficients of main effects to get interaction effects, to multiplying interactions (substituting 1 for any squared value) to determine confounding effects. For example in the identity above, the set of treatments selected for the experiment are those occurring with negative coefficients in the effect computation ABC and negative coefficients in ADE. Multiplying these two we get the positive coefficients from the interaction BCDE.  Given this particular | of the full set of treatments, any effect or interaction  X  cannot be distinguished from its generalized interaction (multiplication) with any member of the identity relationship. In other words if you eliminate some of the treatment  Chapter 3. Evaluating Task Parameterizations  43  outcomes on which effect computations are based some effects will inevitably become indistinguishable. For any choice of the identity relationship we can generate the resulting confounding relationships via this process of computing generalized interactions with the identity [33].  Response Surfaces Response surface methods attempt to model interactions among multiple inputs and multiple outputs. First a number of sample trials are made possibly using a factorial design. The local surface slope is estimated using a linear model, then steepest ascent (descent) is used to locate a maximal region. Second order techniques are then applied to model the maximal region. The method proceeds sequentially using information gained at each step to inform the next [7]. Response surfaces allow us to examine regions of interest in the space, since robotic tasks are localized only to regions where the robot happens to be. This type of optimization is helpful in setting the normal or preferred operating conditions for a system.  3.2  Precision and Accuracy  Having established the parameters or factors of interest via factorial experiments, we would now like to determine just how sensitive the system is to these values, and in particular what level of precision and calibration are required to achieve the desired system performance (success rate). Sensitivity analysis provides us with insight into how the precision and accuracy in system information affect performance, for the parameters of interest.  As indicated in the definitions of section 2.1 precision entails notions of  discrimination and repeatability, whereas accuracy is the difference between world and measured values as determined by calibration.  3.2.1  Sensitivity Analysis  Sensitivity analysis is a well known and widely used method for determining how a model varies with its parameters [31]. The most sensitive parameters are those with greatest  Chapter 3. Evaluating Task Parameterizations  44  effect on the model's outcome and it is critical to identify them. It offers a quantitative means of determining the significance of some measurement used by the system, and hence gives a type of quantitative value of information. If an analytical description is not available, or the system is extremely complicated or lacks continuous derivatives, then empirical sensitivity methods are applied. This is the case with most of the complex robotic systems we are interested in. The empirical approach to sensitivity analysis generally involves varying parameters and observing changes in system behavior. The direct observation method simply sets different parameter values for each run of the system. 
Numerous observations can be used to estimate partial derivatives about a Normal Operating Point (NOP), generating a "response surface" for the system relative to the parameters varied. Parameters with known or assumed dependencies could be varied simultaneously to explore their relationship. We are interested in testing the system's sensitivity to the precision and accuracy of its measurements. Let us assume we have a measurement function N(f), and that we can vary its discrimination and observe the effects on system outcome. We can further vary its accuracy and repeatability. If we have in mind a desired level of system performance, we can then determine the acceptable bounds on these precision and accuracy characteristics for the systems under analysis.

The sinusoidal variation of parameters is a more sophisticated empirical technique which addresses the combinatorial explosion in the number of required evaluations for systems with many parameters and many parameter interactions. The general principle is that if parameters are varied sinusoidally, at distinct frequencies, during a single run of the system, the sensitivity and interactions among many parameters can be determined all at once. The method requires first running the system at the NOP, then with sinusoidally modulated parameters. The power spectra of these two output sequences are used to form a spectral signal to noise ratio at each frequency [58]. Peaks in this set of ratios occur at the frequencies at which the different parameters were modulated, as well as at frequencies related to products among parameters, or interactions. Depending on the choice of modulation frequencies relative to system behaviour, the magnitude of these response peaks helps estimate the importance of the parameter or interaction. Sinusoidal variation can provide a starting point for direct observation experiments.

3.2.2  Measuring Cost versus Performance

Sensitivity analysis gives us an estimate of the error ε acceptable to the system in order to perform at required levels. Information-based complexity theory suggests some tools to determine the computational cost of achieving the specified uncertainty level ε. We will employ the notation and abstraction of ε-complexity in the form of a measure on the cost of our implemented systems to achieve this required error level.

ε-complexity is specified for real, normed, linear spaces F and G, so the {success, fail} outcomes we have used until this point are not really appropriate. We will instead use outcome states, such as part location, for this analysis. The problem is posed for an ideal task implementation S which maps our input state parameters to goal locations G:

S : F → G.

Again we have the measurement operator N : F → ℝ^n which gives us the values representing f ∈ F, but because they are of finite dimension, N(f) actually divides F into equivalence classes. We want to determine how the equivalence classes generated by the discrimination of the measurement process map to the outcome space. This is the radius of information, r(N), which describes the effect of the information operator on the uncertainty in approximating S. If [f] is the set of problem elements from F which N maps to the same equivalence class as f, then r(N, f) is the radius of S([f]) in G. r(N) is defined as sup_{f ∈ F} r(N, f), giving a lower bound on how well any approximation for S using N can hope to perform.
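A toy calculation makes the definition concrete. Below, S is taken to be the identity on a scalar state and N a simple quantizer, both stand-ins rather than the operators of our task; sampling states, grouping them by their measurement and taking the radius of S over each indiscriminable set approximates r(N):

    # Toy estimate of the radius of information: group sampled world states by
    # their (quantized) measurement and take the largest radius of S over an
    # equivalence class.  S and N here are illustrative stand-ins.
    def S(f):
        return f                            # "ideal task": report the state

    def N(f, resolution=0.5):
        return round(f / resolution) * resolution

    def radius_of_information(samples, resolution=0.5):
        classes = {}
        for f in samples:
            classes.setdefault(N(f, resolution), []).append(S(f))
        return max((max(v) - min(v)) / 2.0 for v in classes.values())

    samples = [i / 100.0 for i in range(0, 500)]
    print(radius_of_information(samples))   # approaches resolution / 2 = 0.25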
We can also use r to look at the value of information in an information theoretic sense related to the notion of mutual information,

V(N) = \log_2 \left( \frac{r^{ave}(0)}{r^{ave}(N)} \right),

where r^{ave}(0) is the average radius for the case where no information is available [49]. This can be compared to our preceding sensitivity analysis.

In the case of evaluating and comparing task implementations we have some implementation Φ : N(f) → G which approximates S. Algorithms can be evaluated under various criteria for optimality. In this framework the worst case error for Φ is defined as

e(Φ, N) = \sup_{f \in F} \| S(f) - Φ(N(f)) \|.

Φ is said to be optimal for S and N when e(Φ, N) = r(N). Average case analysis is also possible, where a probability measure on F determines how samples are selected for the approximation; one then wants to minimize the expectation with respect to this probability measure, E(‖S(f) − Φ(N(f))‖).

We can assign an ε-complexity value only if we look at the details of the computations involved; thus we will no longer view the systems as black boxes. A model of computation is required in a complexity framework to determine the available operations and their associated costs. For example, Packel and Traub [49] use a model where infinitely precise real numbers are the elements, the information operations N(f) have a dominant cost of c units, and arithmetical and comparison operations in G are exact and have a cost of 1 unit. Using such a model the computational cost for a particular Φ can be computed. For a specified error ε, the minimal worst case cost of computing an approximation for S is

comp(ε) = \inf \{ cost(Φ, N) : e(Φ, N) \le ε \}.

A pair (N, Φ) which satisfies this complexity is optimal for ε. The ε-complexity ideas of including the cost of information operations and explicitly accounting for inaccuracy and noise in measurements are very appealing for evaluating robotic systems. Like other complexity formalisms it gives us a single uniform counting scheme, but in this case for information as well as computation. Thus ε-complexity suggests a metric for comparing disparate robotic implementations of the same task with respect to measurement accuracy and computation.

3.3  Comparing Implementations

The framework we present here allows us to analyse and tune the information used in implemented tasks, and with cost metrics similar to ε-complexity we can actually compare different implementations of the same task. This allows us to examine fundamental task information.

3.3.1  Part Orienting Systems

We will demonstrate our comparisons on two implemented part orienting systems. One is a sensorless push-orienter modelled on Peshkin's system [51]. The second is a stereo vision based system which localizes the part in three-space, then manipulates it based on the coordinates provided by the vision system. Both systems are summarized briefly below and described in detail in chapters 4 and 5.

Push Orienting

Peshkin [51] describes a planning method for generating sequences of manipulations to transform a part from an unknown initial orientation to a desired final orientation. His work is based on the design of fence feeders, but can equally well apply to a robot arm holding a fence and applying similar pushing operations; we have implemented a version of the latter. The planner is based on a construct called the configuration map, a matrix which groups discretized initial orientations according to the final orientations which result after a push at a particular fence angle α.
Final orientations are determined by the corner of the part contacting, then rotating to align with the fence. Rotation direction is determined by the friction between part and fence, and the centre of mass of the part [41]. For some discretization of the full range of fence angles, these maps are computed. Planning is achieved by a search of the tree of all push sequences with pruning.  Visual Part Orienting A second system for part orienting uses a stereo pair of cameras, calibrated to a robot arm's workspace to localize the parts to be positioned. A high speed pipeline processor computes the zero crossings of XJ G for regions of both images. The calibration mapping 2  allows selection of these subwindows and a fixed range of disparities based on the robot's workspace region of interest. Calibration also allows computing the epipolar transformation used to warp the right image into the left for the correlation match algorithm. The resulting disparities are passed to a transputer network where edge pixels of similar  Chapter 3. Evaluating Task Parameterizations  48  orientation and depth are grouped and fitted to a model. The major components or axes of the part will tend to give strong world edge responses, relative component positions matched to the edge positions give us an estimate of part position. Simple arm sequences parameterized by the computed location are executed to pick and place the part in a desired final location.  3.4  Framework Summary  The methods described in this chapter form a framework that can be applied in an ordered way to analyse and understand the information requirements for a task implementation. To begin, we hypothesize all of the parameters which could possibly affect the performance of our system. This may initially be a large number; for our part orienting task we begin with 11 parameters for the push system and 9 for the vision system. These will not be of equal importance to system performance and since we want a minimal parameter set, we begin with a two level factorial experiment to identify the most important parameters and interactions. With as many as 11 parameters we will in fact use a fractional factorial experiment, confounding interactions involving large numbers of parameters. A useful property of these designs is they not only tell us which parameters are significant, but which are insignificant. It can be very useful to eliminate from consideration a parameter to which the system is insensitive. The next step is to ensure our systems are in their optimal operating range for these key parameters. To this end we begin to search in the range of treatment values or combinations which have shown promising outcomes in our initial experiments. By looking at the sensitivity of system performance to variations in the values, precision and accuracy of significant parameters, we can set these parameters appropriately. Sensitivity analysis also reinforces our experimentally determined parameter set—since the most sensitive parameters are generally the most critical. Our evaluations of precision and accuracy allow us to determine an e value for desired system performance. Finally cost/performance metrics allow us to quantify the difficulty of achieving this e and therefore the required performance level.  Chapter  3.  Evaluating  Task  Parameterizations  49  The steps of this framework can be summarized as follows: 1. Hypothesize relevant state parameters. 2. Identify significant system state variables via experimentation, using factorial design. 3. 
Use sensitivity analysis and/or response surfaces to determine the precision and accuracy required in each parameter to achieve desired system performance, possibly hypothesizing other relevant parameters (return to 1). 4. Use cost/performance metrics to quantify precision and accuracy costs. This type of empirical investigation gives a clear picture of the subspace of the task implementation and its precision and accuracy characteristics. On the basis of this analysis we can compare different complex task implementations.  Chapter 4 Push Orienting  In this chapter we describe the robotic equipment used and the first of our part orienting systems. Push planners compute sequences of fence angles used to orient parts, based on the physical properties of the part and fence and their interactions under quasi-static dynamics assumptions. The push planner we have implemented is based on the 1988 work of Peshkin and Sanderson [51, 50]. Push orienting was selected because, as a "sensorless" method it is in sharp contrast to the vision-based part orienter described in the next chapter. Very recently more direct methods for computing fence angles for fence feeders have been proposed [74], but the system described here, although slower,, is nonetheless effective.  4.1  Apparatus  The equipment used to demonstrate sensorless and sensor-based manipulation strategies consists of a CRS A-460 6-dof robot arm with kinematics and servo computations performed by a pair of INMOS transputers. The Kinematics transputer computes forward and inverse kinematics for the robot. The Servo transputer communicates with robot controllers to generate motion. These processors are in turn connected to a larger network of four transputers which run straight line interpolated motion routines, and in the sensor-based case compute the location of the part to be manipulated. The vision system consists of a pair of Sony CCD cameras connected to the DIGICOLOR board of the DATACUBE pipeline image processing system. The image pair is passed from the DIGICOLOR to a MaxVideo-200 board to compute correlation stereo disparities, which are in turn passed across a British Aerospace data bridge to the same network of transputers. This configuration is illustrated in figure 4.1. Programs on the transputers  50  Chapter 4. Push Orienting  Figure  4.1:  51  T h e c o n t r o l l e r a n d i n v e r s e k i n e m a t i c s for t h e C R S A 4 6 0 r o b o t r e s i d e  o n a p a i r o f d e d i c a t e d I N M O S T r a n s p u t e r s . T h e stereo p a i r o f c a m e r a s are a t t a c h e d t o t h e D A T A C U B E P i p e l i n e P r o c e s s o r r u n n i n g a c o r r e l a t i o n stereo a l g o r i t h m . T h e D A T A C U B E M a x V i d e o - 2 0 0 d u m p s any image displayed to the D I G I C O L O R b o a r d t o one of t h e n e t w o r k o f t r a n s p u t e r s v i a a B r i t i s h A e r o s p a c e t r a n s p u t e r store.  frame-  T h e t r a n s p u t e r n e t w o r k c o m p l e t e s s e g m e n t a t i o n a n d l o c a l i z a t i o n of p a r t s  a n d guides the robot i n m a n i p u l a t i n g t h e m .  a n d D A T A C U B E h a r d w a r e are i n i t i a t e d f r o m a S U N w o r k s t a t i o n . T h e e n t i r e s y s t e m is calibrated by m o v i n g the a r m through a cube of w o r l d positions a n d d e t e r m i n i n g the image location of a circle on the robot gripper.  
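A calibration of this general kind can be sketched as the standard homogeneous least squares (DLT) fit of a 3 x 4 projection matrix to world and image point pairs. The fragment below is an illustration of the idea rather than the routine actually used, and the synthetic points stand in for the measured gripper positions:

    # Sketch of projection-matrix calibration from world/image point pairs
    # (standard DLT).  P_true and the cube of world points are synthetic
    # stand-ins for the measured data.
    import numpy as np

    def calibrate(world_pts, image_pts):
        rows = []
        for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 4)          # projection matrix, up to scale

    def project(P, point):
        x = P @ np.append(point, 1.0)
        return x[:2] / x[2]

    world = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    P_true = np.array([[800.0, 0, 320, 0], [0, 800, 240, 0], [0, 0, 1, 5.0]])
    image = [project(P_true, np.array(w)) for w in world]
    P_est = calibrate(world, image)
    print(np.round(project(P_est, np.array([0.5, 0.5, 0.5])), 2))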
The resulting set of world and image point pairs are used to calibrate both cameras to the robot's workspace.

4.2 The Robot

The CRS A-460 robot is PUMA-like and has 6 degrees of freedom. It is a standard light industrial manipulator produced by the CRS Plus company of Burlington, Ontario. The transputer-based inverse kinematics provided by CRS, however, has a number of idiosyncrasies, including rather ungraceful behaviour near singularities. Other robot limitations include wrist rotation limits of plus or minus 180 degrees (rather than +/- 360), which cause frequent wrist flips in trajectories which attempt extended straight line motion. The robot has been suspended base uppermost from an aluminum frame upon which lights and cameras are mounted. The work surface is 105 cm below the robot base.

4.2.1 Straight Line Interpolated Motion

Routines for straight line interpolated motion are required for the pushing operations and were also used in generating vision-based pick and place motions. The inverse kinematics routines provided by CRS were used to generate trajectories, despite rather mysterious behaviour in regard to changes in arm configuration. For this reason trajectories are position interpolated rather than using velocities. The classic method for generating smooth motions picks knot points along the trajectory and computes joint positions at these locations [67, 26]. Trajectories in joint space are then approximated by a spline so that they can be sampled at a high rate by the controller. The spline values of mid points between knots are then compared to the desired world position determined by the world trajectory. If the approximated position is too far from the desired trajectory a new knot point is added and the spline is recomputed. Joint velocities were constrained to be zero at the start and end of the motion. Sudden wrist flips resulting from joint limits, and not warned of by the inverse kinematics, appear as discontinuities in the joint trajectories, allowing the system to detect them and reposition the wrist to make the straight line motion possible.

4.3 Push Planner

The implemented planner for push orienting parts is essentially a version of Peshkin and Sanderson's work on fence feeders, applied to a robot actively pushing the part with a fence [51]. The planner is based on a construct called the configuration map M_α, a matrix which groups discretized initial part orientations according to the final orientations which result after a push at a fence angle α. This idea is illustrated in figure 4.2. A part contacted by a pusher at a corner will rotate into a stable alignment with the fence. Thus a range of input orientations converge to a single (or in the discretized case a small range of) outcome orientation as suggested in the figure.

Figure 4.2: A sequence of two configuration maps reduces the possible outcome orientations to two.
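A configuration map can be thought of as a lookup table from discretized initial orientation to final orientation. The sketch below shows one minimal way to tabulate it; push_outcome is a stand-in for the quasi-static push model (friction, centre of mass and contacted corner), not the planner's actual routine.

```python
import numpy as np

def configuration_map(push_outcome, fence_angle, n_bins=360):
    """Tabulate M_alpha: for every initial orientation (1-degree bins) record the
    final orientation after a push at fence_angle."""
    m = np.zeros(n_bins, dtype=int)
    for theta0 in range(n_bins):
        m[theta0] = int(round(push_outcome(theta0, fence_angle))) % n_bins
    return m

def outcome_bands(m):
    """Group initial orientations by the outcome band they map to, i.e. the input
    intervals I_j feeding each outcome band F_j."""
    bands = {}
    for theta0, theta_f in enumerate(m):
        bands.setdefault(int(theta_f), []).append(theta0)
    return bands
```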
Which outcome occurs is the result of which corner feature is contacted, and of the rotation direction, which is determined by the friction between part and fence and the position of the centre of mass of the part [41]. One of the parameters tested in our experiments is the discretization for fence angles. The available fence angles determine the possible outcome angles θ_f for each push, and hence the outcome bands F. Fence angles range over 180 deg at intervals determined by the discretization factor. Initial part orientations are tested at 1 deg subintervals of the 360 deg range to generate the configuration maps. A range of input orientations which maps to an outcome F_j is labelled I_j. These intervals will be referred to only by their indices in the representation below.

Consider a sequence of two pushes, one at angle α the next at angle γ, written M_γ M_α. As illustrated in figure 4.2 the outputs of M_α become the inputs of M_γ. This product is summarized by a code set

^{γ,α}C_j = { k : I_j^γ ∩ F_k^α ≠ ∅ }.

In other words the outputs k of the first map become the inputs of the second map and are in turn mapped to output interval j. A particular code set C_j indicates which inputs k are mapped to outcome interval j. To determine the outcome of sequences of pushes, products of pairwise code sets can be computed. To compute a product ^{β,γ,α}C Peshkin and Sanderson use

^{β,γ,α}C_j = ∪_{k ∈ ^{β,γ}C_j} ^{γ,α}C_k.

For example, the 2 push sequence illustrated in figure 4.2 for the angle bracket part gives rise to code sets ^{126,136}C_j whose entries, {223}, {350} and {133}, record which input intervals reach each surviving outcome interval. To compose a third push at angle 178 deg we use the pairwise code set for the new fence angle and compute its product with ^{126,136}C as above, yielding composite code sets ^{178,126,136}C_j with entries such as {53, 323}, {133, 350} and {223}.

Planning is achieved by considering the tree of all push sequences, generated by combining code sets. The push orienter alternates pushing from one side then the other, hence the outcome of a push is 180 deg offset when considering the next push to apply; essentially the viewpoint of the push changes. Code sets for odd and even pushes are computed and as the tree is constructed each layer alternates direction. Peshkin proposes two pruning techniques. The first is based on the fact that if a solution exists from a node via some fence, it will also exist for a steeper fence with common code sets, thus the less steep arc can be pruned. The implemented planner generates code sets from steepest to shallowest angle, and compares code sets as they are generated, eliminating shallower fences with identical code sets. Second, if a fence angle recurs in the search path leading to a node with more possible outcome orientations than its first occurrence, the branch can be pruned, since any solution will be reached sooner from the earlier node.

Our implementation requires a file containing the coordinates of part corners (or the corners of the part's convex hull) centred at the part centre of mass, and the coefficient of friction between the part and the fence. Depending on the part and the discretization in fence angles used the planner will generate numerous feasible push plans.
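The code-set bookkeeping above reduces naturally to set operations. The sketch below is a schematic rendering of the construction, with code sets represented as plain Python dictionaries mapping outcome intervals to sets of input intervals; it is not the planner's own data structure.

```python
def code_set_product(outer, inner):
    """Compose code sets for a longer push sequence:
    product C_j = union over k in outer[j] of inner[k]."""
    product = {}
    for j, ks in outer.items():
        merged = set()
        for k in ks:
            merged |= inner.get(k, set())
        if merged:
            product[j] = merged
    return product

def orients_part(code_sets):
    """A push sequence orients the part when a single outcome interval
    collects all surviving input orientations."""
    nonempty = [j for j, ks in code_sets.items() if ks]
    return len(nonempty) == 1
```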
In our experiments we look at various criteria for determining the "best" plan, including optimization for minimum number of pushes, minimum push distance and overall "shallowness" of the α's (to avoid the part sliding off the fence).

4.4 Robust Push Plans

Figure 4.3 illustrates a danger in depending on the rules of rotation direction when there is uncertainty in the locations of vertices, the centre of mass, or the coefficient of friction. Instead of the transition from clockwise to counter-clockwise rotation occurring at a known part orientation, it may occur anywhere in the range γ. A problem we have encountered with Peshkin and Sanderson's push planner is that it does not distinguish transitions which are "robust" in the sense that the planned transitions after the first push are not near these singular transition points. Brost has examined the inclusion of uncertainties in grasp planning [10], and has identified properties of stable squeeze grasps which are very similar to push operations. To combat this problem we have added one-to-many mappings in our code sets to account for uncertainties.

Figure 4.3: For the exact and perturbed parts on the left the push outcome will differ only in marginal cases, such as that illustrated on the right. For the part orientations θ from the point v_R = v_CM, where the exact part reaches the transition from counter-clockwise to clockwise rotation, through the range γ, the perturbed part will continue to rotate counter-clockwise.

In the case of no uncertainty we can solve for the part angle θ_s where the transition from clockwise (cw) to counterclockwise (ccw) rotation occurs. The current contact feature f_i = (f_ix, f_iy) and the centre of mass CM = (CM_x, CM_y) will be rotated by the unknown θ_s so that the vector from f_i to CM aligns with the mid (deciding) vector for the configuration. Let the angle of the mid vector be β. The rotated points are

f_i^{θ_s} = (f_ix cos θ_s - f_iy sin θ_s, f_ix sin θ_s + f_iy cos θ_s),
CM^{θ_s} = (CM_x cos θ_s - CM_y sin θ_s, CM_x sin θ_s + CM_y cos θ_s),

so requiring the vector from f_i^{θ_s} to CM^{θ_s} to lie at angle β gives

tan β = [ (CM_x - f_ix) sin θ_s + (CM_y - f_iy) cos θ_s ] / [ (CM_x - f_ix) cos θ_s - (CM_y - f_iy) sin θ_s ].

Cross-multiplying and collecting the sin θ_s and cos θ_s terms,

cos θ_s [ tan β (CM_x - f_ix) - (CM_y - f_iy) ] = sin θ_s [ (CM_x - f_ix) + tan β (CM_y - f_iy) ],

so that

tan θ_s = ( tan β CM_x - tan β f_ix - CM_y + f_iy ) / ( tan β CM_y - tan β f_iy + CM_x - f_ix ).

We can compute θ_s for each vertex as follows: first determine which of the three vectors deciding rotation sense is between the other two. The transition from clockwise (cw) to counterclockwise (ccw) rotation occurs when this mid vector v_mid is aligned with the vector v_CM joining the vertex f_i (of the rotated part) and the centre of mass CM. Since v_mid is fixed by the fence angle, push direction and coefficient of friction, we can find the θ_s that results in this alignment.
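The closed form above translates directly into a small helper. A minimal sketch, assuming f_i and cm are 2-D coordinates in the part frame and beta is the mid-vector angle in radians; atan2 keeps the quadrant, and which branch is physically relevant still depends on the configuration.

```python
import math

def transition_angle(f_i, cm, beta):
    """Part rotation theta_s at which the rotated vertex-to-centre-of-mass vector
    aligns with the deciding 'mid' vector at angle beta (the cw/ccw transition)."""
    dx = cm[0] - f_i[0]
    dy = cm[1] - f_i[1]
    t = math.tan(beta)
    # tan(theta_s) = (t*dx - dy) / (dx + t*dy), as derived above
    return math.atan2(t * dx - dy, dx + t * dy)
```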
We can use uncertainty bounds for Centre of Mass, part vertices and friction, to bound the range of 0's in which 6 may fall. To resolve the problem that part orientations in S  this uncertainty region will map to two possible outcomes, we simply add each of the discretized thetas in this range to the code sets for both outcomes.  As the planner  searches for a plan sequence which maps to a single output part orientation, these extra mappings will be resolved like any other. This method for counteracting uncertainty was  Chapter 4. Push Orienting  58  also tested as an experimental option with limited success. Adding possible outcomes can make finding a plan with a unique outcome difficult or impossible. In other words by subtracting away all the uncertain transitions we may not be left with a viable plan.  Chapter 5 Vision-based Part Orienting  5.1  Introduction  Vision for robots must provide 1) efficient use of time and other resources 2) timely, relevant information on which the agent can act, and 3) sufficient accuracy in localization or matching observations to a map or model to perform a task reliably. To achieve these characteristics, systems must frequently exploit prior calibration, restricted processing windows and efficient specialized image processing hardware. In this chapter we describe in detail some of the steps required to make a calibrated stereo vision system work with the speed and reliability required for a robot arm manipulating simple parts. Most of the material in this chapter has been previously published in [46] and [47]. The system described here is representative of visual part orienting. It uses a stereo pair of cameras, calibrated to a robot arm's workspace to localize the parts to be positioned. A high speed pipeline processor computes the zero crossings of \J G for regions of 2  both images. The calibration mapping allows selection of these subwindows and a fixed range of disparities based on the robot's workspace region of interest. Calibration also allows rectification of the right image into the left according to the epipolar transformation using an image warping device, for use in the correlation match algorithm. The resulting disparities are passed to a transputer network where edge pixels of similar orientation and depth are grouped and fitted to a model. Simple arm sequences parameterized by the computed location are executed to pick and place the part in a desired final location. These essential ideas of calibration, windowing, rectification by warp, correlation stereo and simple localization by line fitting form the basis of a viable and fast stereo vision system for manipulation. The idea of transforming a stereo pair of images so that the corresponding epipolar  59  Chapter 5. Vision-based Part Orienting  60  lines are aligned with the x axis has been proposed as a means of reducing the search required to determine correspondences [4, 28]. Essentially searching along the epipolar lines reduces the search to one dimension. Rather than aligning the epipolar lines with the x axis we warp only the right image, so that for each disparity along the epipolar line, the right image pixels are moved into correspondence with those of the left image. To achieve this transformation at sufficiently high rates we have devised a biquadratic approximation, which can be used by an image warping engine to perform the rectification at frame rates. 
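To make the role of the per-disparity warp concrete, the following sketch shows the matching scheme it enables: shift the right edge image (np.roll here is only a stand-in for the true epipolar warp), AND the edge images, sum local support, and keep the best-scoring disparity per edge pixel. This illustrates the winner-take-all idea, not the pipeline implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wta_disparities(left_edges, right_edges, disparities, support=8):
    """Winner-take-all edge correlation over a fixed set of candidate disparities.
    Edge images are boolean arrays; rectification is assumed to have aligned
    corresponding epipolar lines with the image rows."""
    best_score = np.zeros(left_edges.shape, dtype=float)
    best_disp = np.full(left_edges.shape, -1, dtype=int)
    for d in disparities:
        shifted = np.roll(right_edges, -d, axis=1)       # stand-in for the epipolar warp
        match = (left_edges & shifted).astype(float)     # AND of the two edge images
        score = uniform_filter(match, size=support, mode="constant")  # local excitation
        better = (score > best_score) & left_edges
        best_disp[better] = d
        best_score[better] = score[better]
    return best_disp
```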
The notion of 'restricting attention' or selecting only a relevant subwindow of a standard 512 x 480 image is another key aspect of getting the information the robot needs for the job. Our agent is afixedmanipulator and therefore we can similarlyfixour region of interest to the environs of the work surface. Knowing the range of distances likely to be of interest we can also fix the range of disparities we check, further reducing computation time. The vision system localizes parts based on the disparities extracted by the high speed pipeline. There are two types of parts addressed here: right angle parts, including the shelf angle bracket used as the example in this chapter and the triangle and "L" parts used in the detailed experiments of chapter 6; and the curved "G" part. The right angle parts are localized by grouping edgels via a local Hough transform, then applying a 3-D line fit to determine the components or "arms" of the part to be manipulated. The two arm components of the part are associated by applying two constraints: that they must be approximately orthogonal in 3-space and separated at most by a distance proportional to the size of the part. For the "G" the localization method is based on its oval shape with a gap on one side. These are crude but sufficient methods for obtaining part location to within the tolerances defined by the task.  Chapter  5.2 5.2.1  5.  Vision-based  Part  Orienting  61  Stereo Vision Calibration  In order to relate the coordinate frame of a robot to that of its cameras, some level of calibration is useful. Since the cameras used are fixed to the frame suspending the robot we chose to perform a rather precise calibration. To that end a 2.5cm diameter white dot was affixed to the robot's gripper, and used as the target for generating pairs of world and image locations for both of the stereo pair of cameras. To generate calibration data the robot arm is sent to a series of world locations which form a 3D block of points. For each arm position a blob finder determines the image location for each camera and the DIGICOLOR frame grabber. The world location is determined by the commanded location as executed by the inverse kinematics of the robot arm. Having acquired a set of over 100 (world,image) point pairs we run a version of Tsai's camera calibration algorithm [70] (implemented for the LCI by R. Barman). The blobfinderused to determine the image locations of the gripper dot was intended to run without user intervention. For each new plane of arm positions it flood fills the whole image looking for intensity transitions of sufficient magnitude, and hence "white" regions whose sizes fall within the blob thresholds. As more positions are accumulated the program restricts its search region according to where other blobs in the same row and column have been found. When a likely blob has been found, its immediate region is histogrammed and the modes determined to find the optimal threshold for discriminating blob from surround. The centre of mass of the blob or dot image location is computed accordingly. For each camera the available implementation of Tsai's algorithm returns the centre of the frame buffer relative to the lens  (C ,C ),  plane  S,  (dx,dy),  the horizontal scale factor  x  x  y  the interpixel distance in the image  the first order radial distortion  ki,  and the  camera's true focal length / . 
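As a reminder of how those intrinsic parameters are used, here is a small pixel-to-image-plane conversion in the spirit of Tsai's model. It is a sketch under the usual conventions (first-order radial distortion only); the implementation used here may differ slightly in signs or scalings.

```python
def pixel_to_image_plane(xf, yf, Cx, Cy, dx, dy, sx, kappa1):
    """Convert a frame-buffer pixel (xf, yf) to undistorted image-plane
    coordinates using the image centre (Cx, Cy), inter-pixel distances (dx, dy),
    horizontal scale factor sx and first-order radial distortion kappa1."""
    xd = dx * (xf - Cx) / sx          # distorted sensor coordinates
    yd = dy * (yf - Cy)
    r2 = xd * xd + yd * yd
    return xd * (1.0 + kappa1 * r2), yd * (1.0 + kappa1 * r2)
```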
These internal camera parameters allow us to transform pixels to image plane coordinates and vice versa, and to project points in the camera coordinate frame into the image. The external parameters calculated include roll, pitch and yaw and the translation vector for the camera in the robot's coordinate frame. Thus we have transformations from the right and left cameras to the robot frame, and can compute the transformation from one camera to the other. The root mean squared error of the calibration is approximately 0.5 pixel, depending on the camera positions and the region of world points used. The size of the error is dominated by the poor kinematic calibration of the robot itself. The robot has high precision in repeatability, but tests indicate that, depending on the robot configuration, the position resulting from a commanded world (x, y, z) position may vary by 1cm. Since we are interested in generating part locations for manipulation by the robot, the fact that robot inaccuracies are absorbed into the calibrated camera model is not a problem. We want to calibrate relative to the robot so that even though in an absolute sense the position is slightly in error, the position is correct in terms of commanded robot positions.

Figure 5.1: Geometry of a stereo pair with arbitrary relative camera positions.

5.2.2 Standard Transformations

Much has been written about depth from stereo images, but we will touch only briefly on the basics of the method implemented for sensor based manipulation. For a more complete exposition on the subject see Horn [29]. Having calibrated both cameras to the manipulator workspace we can proceed by assuming that the homogeneous transformation matrices from the right camera frame to the left, R_rl, and left to right, R_lr, are known. Corresponding points in the right and left images will occur along epipolar lines. An epipolar plane contains the lens centres L_l and L_r of the left and right cameras. An epipolar line occurs at the intersection of an epipolar plane with one of the image planes. For a particular stereo camera configuration all the epipolar lines in an image will originate at the projection of the other camera's lens centre.

A world point P will be imaged on epipolar lines defined by a plane containing P, L_r and L_l. In the left image a point projected at P_l could be any point on a ray from L_l through the actual world point P to the vanishing point. When we project this ray to the right coordinate frame we have a line from the projection of the left camera origin L_l' to the projection of the vanishing point V_l. Thus in the right image the corresponding P_r will appear on the epipolar line bounded by the projections of L_l (L_l') and of the vanishing point V_l of the ray from L_l to P (V_l'). This arrangement is illustrated in figure 5.1. These epipolar lines constrain where we search for correspondences in one image given an image feature in the other image.

Once a corresponding pair of image points (P_l, P_r) is located on the appropriate epipolar line, the depth of the world point they depict can be computed. For a camera c we know the ratios

x_c / z_c = u_c / f_c   and   y_c / z_c = v_c / f_c

from standard projective geometry. Given the transformation R_rl from right to left and (x_r, y_r, z_r)^T we also know the relationships

x_l = r_{0,0} x_r + r_{0,1} y_r + r_{0,2} z_r + r_{0,3},
y_l = r_{1,0} x_r + r_{1,1} y_r + r_{1,2} z_r + r_{1,3},
z_l = r_{2,0} x_r + r_{2,1} y_r + r_{2,2} z_r + r_{2,3},

where the r_{i,j} are the elements of R_rl. We know f_l and f_r from calibration, so we use
the first and third equations to solve for z_r, substituting:

C_1 = r_{0,0} u_r / f_r + r_{0,1} v_r / f_r + r_{0,2},   C_2 = r_{0,3},
C_3 = u_l / f_l,
C_4 = r_{2,0} u_r / f_r + r_{2,1} v_r / f_r + r_{2,2},   C_5 = r_{2,3}.

The result is the following solution for z_r:

C_1 z_r + C_2 = C_3 z_l,   z_l = C_4 z_r + C_5,
(C_1 - C_3 C_4) z_r + (C_2 - C_3 C_5) = 0,
z_r = (C_3 C_5 - C_2) / (C_1 - C_3 C_4).

5.2.3 Edge-based Correlation

Bülthoff, Little and Poggio propose a three stage parallel method for computing optical flow [12]. First a matching phase compares (ANDs) two images at pixel offsets which represent various point velocities. The matched image features may be edges or raw or filtered intensities. For each offset match an excitatory neighbourhood is summed, on the assumption that if the motion field is locally uniform for the velocity indicated by the offset, a large total excitation will result. A winner take all strategy selects the offset for each feature which generates the maximum excitation. A similar approach can be taken to compute correspondences for stereo matching. The images are now a pair in space not time, and the offsets, or disparities, of interest are those along the epipolar lines described above. In the interest of reducing data and therefore processing time, the implemented stereo algorithm uses the "edges" or zero crossings of ∇²G as features, giving a rather sparse set of correspondences.

5.2.4 DATACUBE Pipeline Implementation

The implemented vision system uses a pipeline consisting of a Digicolor colour frame grabber and MaxVideo 200 (MV200) image processing system. The MV200 includes a crossbar and 6 frame memories, as well as analog and digital grabbing and display functions, and convolution and arithmetic operations. In addition a MiniWarper is installed on the MV200, allowing polynomial and arbitrary warps of images. Most operations in the pipeline can be performed at 30Hz for a 512x480 pixel image. Computations and algorithms are not particularly flexible because they are specifically mapped to the hardware elements. Each frame memory can transmit and receive one image in each direction per pass interval, so the mapping of intermediate images to memories is important. Naturally the lookup tables (LUTs), convolver and arithmetic unit provided can only be allocated to one operation on images per time cycle as well. The components of the MV200 and the memory allocation used by the implemented stereo algorithm are illustrated in figure 5.2.

Figure 5.2: Relevant components of the MV200. The mapping of intermediate images to memory is indicated.

In DATACUBE parlance allocated image regions in memory are called surfaces; operations moving images from a transmitting source image to a receiving destination image are referred to as pipes. Two pipes are set up to run continuously at 30Hz. The grabbing pipeline continuously acquires image pairs from the cameras and stores them in the indicated surfaces on MEM1 and MEM2.
The display pipe continuously outputs the surface on MEM5 composed of four display windows containing the left image, the disparity image, the left edge image and warped right edge image. These connections are illustrated in figure 5.3. The computation of stereo disparities via the MV200 is described in a schematic form below. The operations under each pass take place simultaneously. After the image is acquired the algorithm is performed on smaller image subwindows extracted based on the size and location of the workspace region of interest. Windows smaller than the NTSC standard 512x480 pixels can be processed at correspondingly higher rates. Usually eight disparities are tested for each image pair although our experiments tested larger numbers of disparities. The disparity value represents a distance along the epipolar line for a particular pixel in the left image, thus the right image is warped based on a different transformation for each disparity.  Chapter  5.  Vision-based  Part  Orienting  right image workspace subwindow  Digicolor  tTTr-  left image workspace subwindow  left img  disparity map  left edge  warped riant edge  Digicolor  ool  Figure 5.3: Stereo image acquisition and display of the subdivided result window from MEM05 run continuously.  CORRELATION STEREO A L G O R I T H M P A S S 1: disparity-map A left-edge-image  disparity-map  right-img-subwind ® y G  orig-right-edge  P A S S 2: disparity-map orig-right-edge  =?• 11 11  0 < pix  < 4  else  display [dispar-map ]  =>• edge  1  => 0  J  =>- orig-right-edge  P A S S 3: left-edge-img  display [left-edge  left-img-subwind ® \J G  left-edge-img  const [0  disparity-map  orig-right-edge  MiniWarper [MEM ]  2  P A S S 4: left-edge-img ®  11 11  0<  pix else  <  4  edge  0  left-edge-img  left-img-subwind  display [left-image  const[0 ]  max-score  Chapter 5. Vision-based Part Orienting  68  FOR (cur-dispar = min-dispar, max-dispar) PASS 5: Mini Warper [MEM ]  Mini Warper [MEM  PASS 6: Mini Warper [MEM ]  warped-right-edge  warped-right-edge  display [right-edge ]  PASS 7: =>•  left-img-subwind  display [left-image ]  11...1 ... 11...1  left-edge-img A warped-right-edge  =>- match-score  PASS 8: MAX match-score , max-score  max-score dispar-map  max-score  =^  dispar-map  MAX= max-score  const [ cur-dispar ] else =>-  end FOR  )  dispar-map  PASS1 ANDs the disparity map and the left edge image, leaving the disparity map with values only at edge locations. At the same time the right image subwindow is passed through the convolver where it is filtered with V ^ > 2  a  constant value (15) is added to  the results to suppress values wandering near zero. A zero or one determined by the sign of the convolved values is stored in the right edge surface. PASS2 passes the current disparity image to the display window and convolves this sign surface with a 2 x 2 array of l's. If the outcome is between 1 and 3 a sign change, or zero crossing is deemed to have occurred. The lookup table in MEM2 is set to return the edge constant (0x71) in this range, and zero otherwise. PASS3 sends the previously computed left edge image to the appropriate display window, computes the sign of \/ G for the left image subwindow, initializes the disparity 2  surface to a constant value, and passes the right edge image to the memory surface inside the MiniWarper. PASS4 computes the zero crossing edges for the left image via  Chapter 5. Vision-based Part Orienting  69  convolution and lookup in the same manner as the right image. 
The left image subwindow is passed to the display surface and the surface containing the maximum score values for each image location is initialized to a constant. P A S S 5 - P A S S 8 are performed for each of a range of disparity values. PASS5 is actually a sequence of short passes where subwindows in the right edge image stored in the MiniWarper are warped according to disparity dependent coefficients (see section 5.2.4 below). P A S S 6 sends the transformed right edge image to the warped right edge surface. At the same time the previous warped right edge is sent to the display surface. P A S S 7 displays the left image and performs the match and score computation. Matching is done via the lookup table AB-LUT for which the 3 nibbles of its 12 bit address can have separate sources. The low nibble of the left edge image feeds the LUT low nibble, a constant zero feeds the second nibble and low nibble of the warped right edge feeds the LUT high nibble. The LUT is set to zero except address 257 (0x101) which indicates a match. The output of the LUT is piped to the convolver where it is convolved with an 8 x 8 matrix of l's. The resulting score or excitation value is transmitted to the match score surface. PASS8 uses the arithmetic unit (AU) to perform two tasks simultaneously. The Nonlinear part of the AU uses ALUs to compute and output the pixelwise maximum of the current match score and the previous maximum score. The new maximum is then compared to the old maximum for each pixel producing a binary equal/not equal result which is transmitted to the Linear part of the AU. The Linear part outputs either the value of the previous disparity map or the current constant disparity value depending on the comparator result. The updated maximum scores and disparity map are stored on the appropriate surfaces. The program cycles through these passes at a rate of about 4Hz (depending on the number of disparities searched) until terminated.  Chapter  5.  Vision-based  Part  70  Orienting  Figure 5.4: a) the left image, b) the XJ G zero crossing (edge) image. 2  c. Figure 5.5: a) Right image, b) the \J G zero crossing (edge) image, c) the right edge image warped to match the left image 2  Windows and Disparities In order to obtain useful results from this algorithm we would like to choose the fixed range of disparities used, to reflect likely matches in the manipulator workspace. To determine these disparities we specify a region at the work surface by its corner locations. Currently we use a 25x60x10 cm region to generate the workspace windows and appropriate disparities. The location and dimensions are specified by creating a file of coordinates and are thus easily modifiable. We iterate through points in this region on a grid of about |cm granularity, projecting them according to the left and right camera models. The disparities or differences between X coordinates in the left and right image, are computed and histogrammed. From the histogram we observe an approximately normal distribution of observed disparities. We therefore choose a range of disparities ± 1 standard deviation about the mean observed disparity. The searched disparities are then at discrete points in this range determined by the number of match iterations performed  Chapter 5. 
(8+).

Figure 5.6: Histogram of points in the robot workspace projected into the image frame.

At the same time the disparities are tested, pixels in maps of the right and left camera images are incremented each time a workspace point is projected to them. Afterward a second procedure selects the workspace image window by iteratively removing the border row or column with the lowest number of hits from each map, until five percent of the hits have been removed. The remaining windows contain 95% of the projected points from the modelled workspace. The position and size of these windows determines the subwindows extracted from each full original camera image (see fig. 5.2).

Warping Regions

As described above, we want to match a point in the left image (u_l, v_l) to its corresponding point in the right image (u_r, v_r). We know that (u_r, v_r) will fall on the epipolar line in the right image defined by the projection of L_l (L_l') and the vanishing point V_l (V_l'). L_l' can be precomputed given our calibrated camera models by projecting the origin of the left camera into the image plane of the right camera. For each (u_l, v_l) we can compute V_l' = (x_v, y_v) by

x_v = f_r (r_{0,0} u_l + r_{0,1} v_l + r_{0,2} f_l) / (r_{2,0} u_l + r_{2,1} v_l + r_{2,2} f_l),
y_v = f_r (r_{1,0} u_l + r_{1,1} v_l + r_{1,2} f_l) / (r_{2,0} u_l + r_{2,1} v_l + r_{2,2} f_l),

where R_lr = {r_{i,j}} is the rotation matrix from the left to the right camera frame. We can also compute the pixel coordinates (u_V, v_V) of V_l' using the internal camera parameters computed during calibration. We can now compute the equation of the epipolar line in the image,

m_e = (v_V - v_L) / (u_V - u_L),   b_e = v_L - m_e u_L.

The DATACUBE MiniWarper contains a memory store, an address generator and an interpolating multiply and accumulate (IMAC) device. A transmit gateway produces a stream of target pixel locations, the address generator then computes the addresses of the source pixels to be moved to these locations. The IMAC performs nearest neighbour interpolation on neighbourhoods transferred by the transmit gateway. The warping operation is specified by a polynomial equation, where the values of u_t and v_t are the target address and the results u_s = f(u_t, v_t) and v_s = g(u_t, v_t) are the source address from which the final value at (u_t, v_t) is to be obtained. The MiniWarper allows biquadratic polynomials of the form

u_s = a_{2,0} u_t^2 + a_{0,2} v_t^2 + a_{1,1} u_t v_t + a_{1,0} u_t + a_{0,1} v_t + a_{0,0},
v_s = b_{2,0} u_t^2 + b_{0,2} v_t^2 + b_{1,1} u_t v_t + b_{1,0} u_t + b_{0,1} v_t + b_{0,0}.   (5.1)

Different coefficients can be specified for subwindows of the warped image. As we search the range of disparities d = (u_r - u_l) for points corresponding to pixels in the left image, we move along the epipolar line in the right image. We would like to transform the right image for each disparity so that the left and warped right edge images can be simply ANDed as indicated in PASS7 above. This requires that for location (u_l, v_l) we move a corresponding pixel (u_r, v_r) from along the epipolar line in the right image to the address (u_l, v_l). Knowing m_e and b_e, for some disparity d we determine the address of the source pixel in the right image by:

u_r = u_l + d,
v_r = (u_l + d) m_e + b_e.
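Putting those pieces together, a sketch of the per-pixel source-address computation might look as follows; it simply encodes the expressions above, with the projected left lens centre and vanishing point supplied in pixel coordinates.

```python
def epipolar_source(u_l, v_l, d, u_L, v_L, u_V, v_V):
    """Source pixel in the unwarped right image for left pixel (u_l, v_l) and
    candidate disparity d.  (u_L, v_L) is the projection of the left lens centre
    into the right image; (u_V, v_V) is the projected vanishing point of the ray
    through (u_l, v_l), so v_l enters only through that precomputed point.
    Together they fix the epipolar line v = m_e*u + b_e."""
    m_e = (v_V - v_L) / (u_V - u_L)
    b_e = v_L - m_e * u_L
    u_r = u_l + d
    return u_r, m_e * u_r + b_e
```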
Vision-based Part Orienting  73  We cannot exactly reproduce the epipolar line transformation with the set of biquadratic coefficients, we therefore approximate m and b . Because the warper's source e  e  pixel address computation is restricted to the basis {1, u, u , uv, v, v } and the m term 2  2  e  must be multiplied by ui, m is approximated using the linear basis {l,u,u}. The ape  proximation proceeds as follows: the current image window is sampled and m and b e  e  are computed at each sample point. Singular value decomposition and back substitution are then performed to approximate the m surface using the reduced basis, and the b e  e  surface using the biquadratic basis. The error is summed over all pixels in the window and if it is too high, the window is subdivided and the process is repeated ,on the new subwindows. Iteration terminates when the error is sufficiently small (99% of the image points have less than one pixel error in the estimated v, \v — v\ < 1, or the maximum limit on approximation regions is reached). A typical epipolar transform (v  =  r  f(ui,vi)),  its approximation and the difference surface between them are illustrated infigure5.7. For each subregion the approximated coefficients c- for m and Qj for b are used to 8  compute the source pixel address  r  = ci + c ui +  m  2  e  b  e  = gi + g2U\ + g uf  = (ci + c ui + c vi)(ui 2  + gv 5  = (gi + ^d) + (ci + c d + g )u 2  t  4  3  3  + (c  t  x  -f gz)u  2  t  3  + g )uiV[  + (c  2  2  6  t  + c dv + g + g u + g uf  t  g vf,  + g uiv  2  3  +  e  2  2  t  + d) + b  e  = cid + ciu\ + c dui + c uf + c u vi 2  h  + d) + gi + g ui -f g u  3  2  3  4  = m (ui  r  cv  + g um  3  v  e  for disparity d:  (u ,v ) r  e  4  3  + gv 5  t  + g vf 6  + g uivi + g vi + g vf 4  5  + (c d + g )v 3  5  t  6  +  g vf 6  For each disparity and region then we can compute a set of warper coefficients to be used in equation 5.1: «o,o — (g + Cid),  b ,o = d,  x  0  c d + g ),  «1,0  = (ci +  «2,0  = (C2  «1,1  = (C3 +  «0,1  =  (c d + g ),  bo,i  «0,2  =  96,  bo,  3  2  2  +53), ^4), 5  bi,o b ,o 2  bi,i  2  = = = = =  1.0, 0.0, 0.0, 0.0, 0.0.  P A S S 5 represents multiple warp image subwindows being transformed according to  their computed coefficients via individual passes. One difficulty with the warper is that if the source address is outside the image  Figure 5.7: a) True v = f(ui,vi), b) Approximated v = f (ui,V[) difference between the two surfaces. r  r  a  c) the  window, no information is available to map to the target pixel. This is particularly evident with the epipolar line transformation because the lines tend to converge at L\. However, since we are not using the full camera image, the additional image values are available. We attempt to ensure dense data in the image window of interest by extending the right edge image and warp image until 99% of the source pixels for the target pixels in the workspace window are included, or the warp window has been increased by 40% over the original workspace window.  5.3  Part Localization  The set of parts to be oriented by our system includes right angle parts (a triangular part, an "L" shaped part and the angle bracket) and a "G" shaped part. For the former we  Chapter 5. Vision-based Part Orienting  75  group pixels into edges in the image, then fit a 3-D line to the depth values corresponding to the grouped pixels. A perpendicular pair of such lines define the part model. 
For the "G" a simpler technique is employed, taking advantage of its oval shape with a gap on one side.

5.3.1 Right Angle Part Localization

Disparity images from the MV200 are written to a British Aerospace transputer framestore. The system which computes depth values and object locations is distributed on a small group of transputers, which are connected to the transputers which control the robot arm. The object models and methods for localization are quite simple and deliberately so. The goal is to tease out what task information is being used, so the simpler the models and assumptions the better.

Grouping

We use image grouping to guide grouping of 3-D points. The localization process begins by examining the image neighbourhood of each disparity-marked edge pixel using a local Hough transform [19]. The standard Hough transform applied to edge grouping computes a value ρ = u cos θ + v sin θ for every edge pixel (u, v), for some discretization of θ ∈ [0, π] and ρ. Peaks in the (ρ, θ) histogram indicate many edgels voting for the lines defined by these parameters. The variant we have applied uses a precomputed neighbourhood template of θ values with respect to the central edge pixel, illustrated in figure 5.8. This allows each other edge pixel in the neighbourhood to vote for a θ defined by its pairing with the centre pixel. For each edgel this template is applied, a histogram constructed and the peak θ is chosen. The appropriate ρ is computed and the global (ρ, θ) histogram is incremented. Each edgel is associated with only one point in the parameter space. We achieved better results with this method versus a true Hough transform, because it avoided repeated passes over the edge pixel set and computing ρ values for unlikely θ's at each point.

Figure 5.8: A map of the line θ values suggested by edgels in the corresponding positions relative to the current (central) edge pixel.

Local grouping appears to generate better discrimination because points which may have appropriate (ρ, θ) values will not be assigned to a line that locally "looks" inappropriate. When all edgels have been examined, peaks in a global θ histogram determine which ρ histogram to search. As edgels conforming to these peak values are selected, they are further grouped based on their image distance from the current mass of points for the line. These distant points are split off as separate lines with the same parameters. We can also include edgels which fall in the parameter space neighbourhood of the peak (ρ, θ). The discretization of ρ and θ determines the power of the system to discriminate among lines which may lie close together in the image.

Matching

These groups of edge pixels and their 3D locations are shipped off to another transputer where a least squares fit algorithm computes the parameters for a straight line through them in 3-space.
Some pixels are eliminated because their world locations would place them below the robot's work surface. The 3D lines resulting from a typical fitting process (ordered by number of points) have  Chapter  5.  Vision-based  Part  Orienting  77  been reprojected by the left camera model and are shown in figure 5.9 superimposed on the disparity edges.  Figure 5.9: Projections of 3-D lines fitted to edge data. The confidence ordering assigned to these lines is based on the number of pixel/world values used to approximate them. To separate the shelf angle brackets shown in the example images, or any right angle part, we apply two constraints to all pairs of lines: first is their 3-D distance too large to be part of the same bracket; and second are they orthogonal in 3-space. The outcome of this entire grouping, fitting and pairing process is illustrated in figure 5.10. Whether a left or right edge or some mixture of points is represented by a line, a bracket arm can still be grasped because the robot's gripper opens to a width of 5cm and the angle bracket arms are less than 1cm wide. Line 1 in figure 5.9 demonstrates such a mixture of points. By choosing the average x and y values as a grasp point, the robot can still successfully grasp the angle bracket. The entire process requires 5-7 seconds on the transputers, increasing with the number of line segments that must be fitted and tested. On an ULTRASPARC architecture similar computations require much less than 1 sec, but transmitting commands to the robot becomes difficult. Naturally this model is very simple and can be fooled by more complicated parts or scenarios, the point is however that for this task, on these parts, this simple part model with accurate depth data is sufficient.  78  Chapter 5. Vision-based Part Orienting Orthogonal lines located in 3-space Line 0 points o Line 0 fit — Line 1 points + Line 1 fit  Figure 5.10: The graph shows a pair of lines, satisfying the orthogonality constraint, computed by fitting the illustrated 3-D edge points.  5.3.2  " G " Localization  Again disparity images from the MV200 are written to a British Aerospace transputer framestore, but in the case of localization for the "G" part, much less effort is devoted to grouping and fitting. A depth location is computed for each disparity marked edge pixel and their centroid is computed. Points too distant to be part of the same "G" are discarded. A histogram is then computed of the orientation from 0 — 27T of each depth point with respect to the centroid. A fixed "gap window" of width Q bins is then slid across the histogram in search of the range of angles with fewest hits in the histogram. This range is then assumed to be the gap in the "G" (illustrated in figure 5.11), the centroid is assumed to be the centre of mass for the part.  Chapter 5. Vision-based Part Orienting  Figure 5.11: The "G" part is localized by histogramming the orientation of depth points relative to their centroid. A valley or minimum in this histogram should indicate the location of the gap in the "G" shape. The orientation at which this minimum occurs is assumed to correspond to the model angle in the figure and the centroid corresponds to the centre of mass. These two values determine the part's world position.  Chapter 5. Vision-based Part Orienting  80  Knowing the fixed relationship between the gap and the centre of mass we now know the world location of the "G" part. This method does not incorporate much knowledge of the part, and is rather 2-D in nature. 
As long as the gap is a visible feature however, it performs well as a localization scheme.  5.4  Summary  In this chapter we have discussed some detailed issues in implementing a calibrated stereo vision system for manipulation. The system described makes extensive use of a precise calibration among the robot arm and stereo pair of cameras. Knowledge of this relationship allows constraints to be imposed on the region of interest in the images and the disparity values of interest for the robot workspace; it also provides the epipolar transformation. The latter is used to generate a piecewise biquadratic approximation for the epipolar transform, which can be used by a special purpose image warper to rectify the right image for correlation matching with the left, to produce correspondences at frame rates. As part of a larger image processing pipeline, edge pixel correspondences can be generated from image pairs at 4Hz. Generally calibrated systems involve a tradeoff where match/calibration computations are pushed to the offline side of the information extraction process. The localization processes we have described for our right angle ("L" and triangle) and "G" shaped parts are relatively simple. Our idea was to build simple localization systems, sufficient to the orienting task rather than complex general purpose vision systems. One side effect is that if your model is not very precise or detailed, then the system can tolerate an equivalent amount of imprecision in the actual part. The result is a system which is insensitive to small variations in part features.  Chapter 6 Experimental Results  In this chapter we detail our experimental results. An analysis of these results is given in the next chapter. The goals of the following experimental exploration of our part orienting systems, set out in chapter 3, are as follows: 1) evaluate system performance with regard to the feature parameterizations represented by the implemented part orienting systems; 2) perform fractional factorial experiments to determine the relative importance of parameters and the degree of attention they deserve; and 3) apply sensitivity analysis to determine the tolerable levels of accuracy for the systems. In chapter 7 we apply a cost/performance metric to evaluate these empirical findings under a uniform cost measure.  6.1  System Task Parameterizations  The premise of this investigation is that we have one task and two implementations representing different parameterizations of the same underlying physical 'computation'. The features or 'measurements' used define the parameterization. Anyone who builds systems knows that they contain many parameters and thresholds which must be chosen by the designer: sometimes based on theory, sometimes via intuition and limited testing. All of these design choices affect the performance of the system. Below we enumerate the actual parameterizations of the part orienting task represented by the push planner and vision based systems in the preceding chapters.  Push Orienting Eleven parameters or factors were selected as representative for the push planning systems. These are described in brief in table 6.1. The parameters F, C and V indicate the accuracy of the measured values of the friction coefficient, centre of mass and vertices 81  Chapter 6. 
respectively.

Label  Parameter               Description
F      friction                accuracy of measured friction between fence and part
C      C of M                  accuracy of the relative position of the part's centre of mass
V      vertices {v_1,...,v_n}  accuracy of the set of relative positions of the vertices of the part's convex hull
D      discretization          the discretization factor δ_α of the set of possible fence angles
L      α limits                limit on the range of fence angles searched
S      steepness               limits on fence steepness (qualitative)
P      push distance           limits on push distance
N      plan length             preference for plans with fewer steps
A      friction uncertainty    uncertainty compensation for the friction measurement; avoid transitions which measurement errors make uncertain
B      C of M uncertainty      uncertainty compensation for the centre of mass measurement; avoid transitions which measurement errors make uncertain
K      vertex uncertainty      uncertainty compensation for the vertex measurements; avoid transitions which measurement errors make uncertain

Table 6.1: Parameters for the push planners.

D is the discretization factor for the fence angles and hence the branching factor for the tree search performed by the push planner, i.e. π/δ_α fence angles are considered. In our coordinate system fence angles near 0 or π are essentially end on to the part and are too steep to allow viable pushes. We therefore set a limit L on the set of fence angles and search only from L to π - L.

Our experience with executing push plans has shown that steeper fence angles are less reliable because, with limited fence length and push distance, parts tend to "fall off" the end of the fence or wind up under the fence because they were left out of its range by the preceding fence angle. As a result we have added a constraint S to the push planner which prefers plans with shallower fence angles. This is achieved by evaluating the average and steepest fence angles and preferring plans with either a shallower (closer to π/2) steepest fence angle or an equally steep, steepest fence but a shallower average steepness. We chose to evaluate the presence and absence of this constraint to determine its usefulness and its effects on other aspects of the planner. Inaccuracies in fence angle were tested as a possible source of error, but in general the fence angle and part outcome angle discretizations were so coarse that the tiny uncertainty in fence angle between the wrist encoder ticks had no effect.

Peshkin [50] describes methods for bounding the required push distance to align a part with a fence, based on the slowest possible centre of rotation. We have used his formulation to compute a bound on maximum push distance for each push in a plan, depending on which transitions are required at each push. Minimizing push distance is another constraint, P, added because of the real physical limits of an implemented push planner. Plans with the shortest total push distance or shortest maximum push are preferred. In terms of costs for an active push orienter, each additional push is expensive because it requires a motion sequence. We have therefore added a constraint N which prefers plans with fewer pushes. Finally, factors A, B, and K indicate whether the uncertainty compensation described in section 4.4 is turned on for friction, centre of mass and vertex uncertainty respectively.

Visual Orienting

Nine parameters were selected as representative for the vision orienting system. These are described in brief in table 6.2.
Two camera positions were tested under the assumption that if cameras were placed nearer to the work surface, parts would have greater extent in the image and therefore be localized better. Originally, the vision system was developed with the stereo pair of cameras fixed about a metre above the work surface on the robot's frame. To test the effect of camera position M, a second location on a pair of tripods about 50cm closer to the work surface and 50cm farther apart was tested. D and S are parameters internal to the DATACUBE stereo system. D is the number of disparities searched in the winner-take-all phase of the correlation stereo algorithm. S is the level of σ for ∇²G.

Label  Parameter              Description
M      camera position        the position and orientation of the cameras
D      {d_1,...,d_m}          fixed range of discrete disparities
S      σ_∇²G                  σ for the edge detector
C      calibration accuracy   number of points from which calibration is computed
A      warp accuracy          accuracy of the approximation for the epipolar warp
L      lighting               degree of special lighting required
Z      Z constraint           elimination of world points assigned depth inconsistent with work table height
P,Q    part model parameters  δ_ρ, δ_θ discretization factors for the local Hough transform (triangle and "L"); angle discretization and gap width for "G" localization

Table 6.2: Parameters for the Visual Orienter.

Our calibration system typically acquires 100 to 180 image-world point pairs to compute the internal and external camera parameters, which are in turn used to compute the warp approximation for matching. To test whether this level of calibration accuracy C is required we subsampled the calibration data acquired for each imaging condition to a fraction of the original set of calibration points, and computed a less accurate camera model. For the warp approximation we have allowed up to 12 subregions to be approximated separately, depending on the accuracy of the warp specified. To determine how necessary this level of approximation accuracy A is, we limited the number of approximation regions to 2 in the low case.

We originally set up a pair of 150 W photographic lights for our vision system to provide good, even contrast. For our experiments we decided to determine how crucial this lighting L was by testing the system with this special lighting and with normal room lighting.

Figure 6.1: On the left is an image of the Triangular part used in our experiments, on the right the model used by the push planner.

The depth values generated by the DATACUBE system can be quite inaccurate. One way of eliminating bad points is to use the known Z location of the work surface to eliminate spurious points. To determine how important this constraint was to system performance, we set it to very close to the work surface and a relatively wide band around it (±20cm). The model parameters denoted P and Q are different for the triangle and "L" versus the "G" since the localization methods are different. For the triangle and "L", P is the discretization of the ρ dimension for the local Hough transform and Q is the size of the orientation template used to vote on the orientation for the current edge pixel. For the "G", P is the discretization of 2π used to histogram the orientation of depth points with respect to the centroid.
Q is the window or number of bins in this histogram over which a minimum number of hits determines the angle of the "G"'s gap.

6.1.1 Test Parts

The experiments described in this chapter were performed on the parts pictured in figures 6.1, 6.2 and 6.3. In terms of the push orienting system in particular, these parts span a range of complexity. The triangle (T) has three stable sides. The "L" has two short sides which tend not to act as stable resting positions, and the "G" has been modelled by a polygon with 20 vertices rather than a true curve.

Figure 6.2: On the left is an image of the L-shaped part used in our experiments, on the right the model used by the push planner.

Figure 6.3: On the left is an image of the G-shaped part used in our experiments, on the right the 20 vertex model used by the push planner.

6.2 Methods

Based on the parameterizations described above for the push planner and vision system, we have set up a 1/8 and a 1/4 fractional two level factorial experiment for the systems respectively. Such experiments provide useful information on the significance of various factors and their interactions. The proportion-of-success values for the outcome of each treatment are generated at least in part by simulators. We have working physical systems, and both push plans and vision based manipulation have been demonstrated to work in the real world, but for the volume of trials required for statistical analysis we were obliged to resort to simulation. In particular, testing the push planner's performance under variation in measured part parameters would have required many parts with known variation, and we did not have access to such a test set. We do not have CAD models or other precise specifications for the parts used, so we cannot precisely state the bound on the error in our estimates of friction or centre of mass. This makes it difficult to precisely compare real world push planner outcomes to our simulations. Anecdotally, we actually used the simulator to determine why one of the triangle push plans we computed had real world success of only 30% for the part's nominal measured parameters. Perturbing these nominal values in simulation we saw the same drop in performance for even small perturbations, and tracked the problem to a push which depended on an alignment within the uncertainty range near the cw/ccw transition. Similarly we observed relatively poor performance for many of the tested "G" orienting plans.

6.2.1 The 1/8 and 1/4 Fractional Factorial Designs

Clearly we cannot run 2^p trials for as many parameters p as we have selected for the push planner and vision system. We will therefore use a 1/8 fractional factorial experiment with 2^(11-3) = 256 treatments tested for the push planner, and a 1/4 experiment with 2^(9-2) = 128 treatments tested for the vision system. The process of generating fractional designs was described in section 3.1.1. The identity relation used to generate the fraction of experiments for the push planner is

I = CVDLSP = FVDLNA = FVSPBK = FCSPNA = CVNABK = FCDLBK = DLSPNABK.
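One way to realize the 1/8 fraction defined by this relation is to enumerate level combinations and keep those for which three independent defining words multiply to +1; any three independent words from the relation generate the same fraction. The sketch below is illustrative only; the treatments actually run are tabulated in appendix A.

```python
import math
from itertools import product

FACTORS = "FCVDLSPNABK"                        # the eleven push-planner factors
GENERATORS = ("CVDLSP", "FVDLNA", "FVSPBK")    # three independent words from the identity relation

def eighth_fraction():
    """Enumerate the 2^(11-3) = 256 runs of the 1/8 fraction: keep the +1/-1
    level combinations for which every generator word has product +1."""
    runs = []
    for levels in product((-1, 1), repeat=len(FACTORS)):
        run = dict(zip(FACTORS, levels))
        if all(math.prod(run[f] for f in word) == 1 for word in GENERATORS):
            runs.append(run)
    return runs

assert len(eighth_fraction()) == 256
```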
The generated fractional factorial experiment requires 256 trials, but provides information on the main effects and all two-factor interactions confounded with only three or higher order interactions. The identity relation used to generate the fraction of experiments for the vision system is I = MDSCAL  = MDSZPQ  =  CALZPQ.  (6.1)  This experiment requires 128 trials and also provides information on the main effects and all two factor interactions confounded with only three or higher order interactions. We generated these identities via brute force search for the identity which confounded the highest order interactions with main effects and two factor interactions. In other words, since we believe interactions of many factors will be less significant than those with relatively few, we want our low order interactions to be confounded with the largest interactions possible. This search process itself is time consuming for large numbers of parameters, although some texts on the Taguchi method, for example, provide precomputed designs which could be useful [55]. The specific treatments applied for our examples and the outcomes observed are tabulated in appendices A and B for the push planner and vision system respectively. In the text references to detailed calculations for two factor interactions refer to the idea that for effect AB we subtract the effect of B while A is low from the effect of B while A is high. A negative effect for AB can represent a large positive effect for B with A low, or a small negative effect etc. To determine the meaning of a significant interaction  Chapter 6. Experimental Results  89  we must frequently refer to the specific combination of treatments used to calculate the effect.  6.2.2  Simulators  The simulator for the push planner takes the part model and the selected plan (with associated maximum push distances) and outputs a success or fail based on the same cw or ccw rotation decisions used by the push planner. In addition, however, the simulator approximates the part's speed of alignment with the fence by sampling a normally distributed alignment distance with standard deviation ^ of the maximum push distance and adding it to the minimum alignment distance. This means plans can fail because the part never becomes fully aligned with the fence. The part's slide along the fence after alignment is simulated and may cause it to "fall off" the limited extent of our fence. If some inaccuracies are assumed in the measured features, a normally distributed perturbation is added to the appropriate vectors potentially causing "bad" rotations. Outcome proportions of success are based on simulating 1000 plan executions with uniformly distributed randomly generated initial part orientations. The experiments involving the vision system are based on several suites of test images obtained by using the CRS robot arm to place each of the parts in a series of positions and orientations spanning the workspace. Separate image sets were collected for each of the four camera position and lighting combinations (see table 6.10). The success rates obtained are based on using the known part position for each stereo image pair, compared to the position extracted by the vision system plus the constraints imposed by the arm and gripper. The image sets are based on a grid of 3 X positions by 5 Y positions plus 8 orientations at each (X, Y) location; in total 120 image pairs per set. 
For the vision system, only the graspability of the part, given estimated position from the vision system and known world position, is simulated. Essentially we use a model of the gripper approaching the estimated position, and the part in the known position to determine whether a collision or successful manipulation will occur. The simulator returns success or failure.  90  Chapter 6. Experimental Results  Parameter Measured Friction  Measured Centre of Mass Measured Vertex Position Discretization Factor Fence Angle Limits Fence Steepness Limit Push Distance Limit Plan Length Limit Friction Uncert. Compensation C of M Uncert. Compensation Vertex Uncert. Compensation  Parameter Levels Label High Value T (0.33, 0.04) F L Af(0.30,0.03) G Af(0.23,0.015) C Af(CofM, 0.25cm) V  jV(Vi, 0.1cm)  D  60 [5,(180-5)] deg TRUE TRUE TRUE TRUE TRUE TRUE  L S P  N A B  K  Low Value 0.33 0.30 0.23 Measured Value CofM Measured Position V 30 [30,(180 - 30)] deg FALSE FALSE FALSE FALSE FALSE FALSE  Table 6.3: Parameter values for push planner. Qualitative factors are indicated by TRUE and FALSE indicating whether the property is active. Af(fi,cr) represents a value drawn from a normal distribution with the indicated parameters.  6.3  Push Planner — Parameters and Outcomes  Table 6.3 indicates the actual values assigned to the parameters of the push planner for the treatments specified for our fractional factorial experiment. Many of these parameters are essentially treated as qualitative TRUE/FALSE options. See appendix A for tabulated treatments applied and probability of success outcomes observed. Tables 6.4 through 6.9 contain the analysis of variance computations for the proportion of success and time required for plan computation (in minutes) respectively, for the three parts under consideration. All main effect values are indicated but two factor interactions with F value less than F = 2.71 for a = 0.1, have been pooled. We are primarily a  concerned with rate of success, but the timing ANOVA tables presented give an empirical version of how various parameter settings contribute to planner complexity.  Chapter 6. Experimental Results  A N O V A - Success Rate for Triangle Source dof Sum of Squares Mean Squares F 1 0.017906 0.017906 1 C 0.530975 0.530975 V 1 0.007758 0.007758 D 1 0.202568 0.202568 L 1 1.616058 1.616058 1 0.478952 0.478952 S P 1 0.007521 0.007521 N 1 0.211720 0.211720 A 1 0.033667 0.033667 B 1 0.016727 0.016727 K 1 0.018701 0.018701 DL 1 0.777070 0.777070 LS 1 0.702501 0.702501 LP 1 0.483748 ' 0.483748 SK 1 1.223321 1.223321 Error e 176 0.163421 28.762029 Total T 255 35.091222 214.729460  F 0.109570 3.249130 0.047471 1.239553 9.888948 2.930793 0.046020 1.295551 0.206011 0.102358 0.114437 4.755029 4.298727 2.960139 7.485722 1.000000 1.000000  Table 6.4: Analysis of Variance summary for for main factor effects and significant 2 factor interactions for proportion of success for triangle part for the push planner.  Chapter 6. 
Experimental Results  ANOVA - Planning Time for Triangle Source dof Sum of Squares Mean Squares F 1 58.809727 58.809727 1 C 259.411289 259.411289 1 V 7.380278 7.380278 D 1 507985.834727 507985.834727 L 1 221988.211914 221988.211914 1 S 2058.890625 2058.890625 P 1 76381.989067 76381.989067 N 1 12387.690000 12387.690000 A 1 43.312852 43.312852 B 1 1184.650352 1184.650352 K 1 4798.736984 4798.736984 DL 1 185222.640625 185222.640625 DP 1 66471.582101 66471.582101 LP 1 36323.595156 36323.595156 DN 1 11541.473477 11541.473477 LN 1 8820.731602 8820.731602 PN 1 6407.668963 6407.668963 NB 1 6012.386984 6012.386984 DK 1 6370.700278 6370.700278 LK 1 8839.917101 8839.917101 Error e 171 301050.755885 1760.530736 Total T 255 1464216.369983 831.690319  F 0.033405 0.147348 0.004192 288.541304 126.091642 1.169472 43.385774 7.036338 0.024602 0.672894 2.725733 105.208411 37.756559 20.632185 6.555678 5.010268 3.639623 3.415099 3.618625 5.021166 1.000000 1.000000  Table 6.5: Analysis of Variance summary for for main factor effects and significant 2 factor interactions for planning time for triangle part.  Chapter  6.3.1  6.  Experimental  Results  93  Triangle Push Planner Results  Using our F test for significance we conclude that only those effects and interactions with F value greater than Fo.i = 2.71 make a significant contribution to the variance observed. For the triangle, table 6.4 tells us that the significant effects are C, L , S, D L , LS, LP and SK. The most significant effect is that for angle limit L, and referring to the effect table in appendix A.1.1 we see that the effect is negative. In other words, the rate of success when L is low is greater than that when it is high. S also exhibits significant positive effect. Clearly our efforts to limit steep fence angles are successful for the triangle part. Given that uncertainty in centre of mass C significantly degrades results, we see that plans are sensitive to inaccuracy in this measured feature. Uncertainties in friction and vertex location show less significant effects, probably because variation in centre of mass is very much like perturbing all vertices simultaneously, thus resulting in greater impact on outcome. For the significant interactions, the most significant effect is SK which is negative. In this case vertex compensation K has a large negative effect when steepness minimization 5" is used and a strong positive effect when S is not used. Since K has insignificant effect and is a qualitative factor we will simply set it low for future trials. We see significant interactions for L with D, P and S. For LP and LS the effects are positive, but we clearly want to set L low because of its individual negative effect and S high because of its positive effect. In fact, when L is low the effect of S is small so that setting S high and L low has little detrimental effect. In the case of D L we see a negative effect supported by the negative effect of L and the insignificant, but also negative effect of D.  Sensitivity for Triangle Push Planner The nonqualitative but significant effect demonstrated by our experiment on the push planner for the triangle is that of the fence angle angle limitation L .  We also see a  significant interaction between L and D however. Rather than attempting to optimize L alone, we performed a sensitivity analysis in D and L about their low settings. Because we are interested in optimizing our planner for the real world, simulations were run  Chapter 6. 
Experimental Results  94  with perturbations in part features (i.e. F, C, V high). The results of this search in D € (25,35) and L £ (25,35) are illustrated in figure 6.4. From the figure we observe  Success for Discretization Fence Angles/Angle Limits  Figure 6.4: Plot of orienting success versus fence angle discretization and angle limits for triangle part with push planner. that success rate oscillates with fence angle discretizations D.  For the region under  consideration the best results are at D = 26. A search along the D = 26 contour should yield the optimal L value for our triangle push planner. The result of such a search is shown in figure 6.5. The value L — 35 was chosen as the best value for the planner since it was among the high success values and not too near the dropoff at L — 39. The planner parameterization chosen as "best" was that with steepness 5", and pushdistance P high, and D = 26 and L = 35. Using this best planner a plan was generated for the triangle (goal 6 = |). This g  plan was then tested under various noise conditions for the vertex, centre of mass and friction measurements. Figure 6.6 summarizes the result of adding normally distributed perturbations to the indicated aspects of the part model. The standard deviation of these perturbations increased as the indicated proportion of the maximum radius from  Chapter 6. Experimental Results T  1  1  95 1  1  1  1  1  1  Success for Angle Limits, Fence Disc = 26  26  28  30  32  34 36 Angle Limits  38  40  42  44  Figure 6.5: Plot of triangle orienting success with push planner (out of 1000) for various angle limits given discretization of 26. the centre of mass to the vertices for the geometric features. For friction, the standard deviation was the indicated proportion of the nominal value.  6.3.2  "L" Push Planner Results  Again we examine our tabulated ANOVA (table 6.6) to determine the significant effects for the "L" part. The effect L is again the largest effect and is negative. For the "L" the D effect has increased to a significant negative effect and the interaction of the two D L is negative. This suggests another search in D and L to optimize our choice of these parameters. Other significant effects include the negative effect of noise in the centre of mass, and the negative effect of N minimizing the number of pushes in a plan. Although LS is a positive effect we again will choose to set L low because of its individual effect and interaction with D. The negative effect of S while L is low is again small. The effect of P with L low was negative so we set P low. Examining the negative effect for SK  Chapter 6. Experimental Results  uoo  1050  1000  0.02  0.03 0.04 0.05 0.06 0.07 0.08 Standard Deviation of Features as Proportion of Part Radius  0.09  0.1  Figure 6.6: The plots in the above graph represent the successes in orienting the triangle using the push planner, out of 1000 trials, given a normally distributed perturbation of model features with standard deviation indicated as fraction of the maximum distance from the centre of mass to a vertex. Friction \i perturbations had standard deviation equal to the proportion of the nominal friction value. Perturbation of vertices, Centre of mass and fi were tested separately.  Chapter 6. 
Experimental Results  Source  Error Total  F C V D L S P N A B K DL LS LP SP SK e T  ANOVA - Success Rate for "L" dof Sum of Squares Mean Squares 1 0.166842 0.166842 1 0.445341 0.445341 1 0.075321 0.075321 1 1.010810 1.010810 1 2.454696 2.454696 1 0.270991 0.270991 1 0.029939 0.029939 1 0.442584 0.442584 1 0.058166 0.058166 1 0.038863 0.038863 1 0.001625 0.001625 1 0.492794 0.492794 1 1.231358 1.231358 1 0.499915 0.499915 1 0.609442 0.609442 1 1.210644 1.210644 175 27.001814 0.154296 255 36.041145 233.584317  F 1.081314 2.886274 0.488159 6.551108 15.908996 1.756303 0.194039 2.868407 0.376976 0.251871 0.010529 3.193819 7.980487 3.239974 3.949822 7.846240 1.000000 1.000000  Table 6.6: Analysis of Variance summary for for main factor effects and nificant 2 factor interactions for proportion of success for "L" for the push planner.  Chapter 6. Experimental Results  ANOVA - Planning Source dof Sum of Squares F 1 656.854184 C 3630.313546 r V 1 53883.950017 D l 3548043.140625 L l 2114116.000000 l S 6661.960434 78328.849484 P.. l N l 89642.855017 A l 189174.253403 B l 516.804444 K l 37293.242296 l FC 111347.347656 DL l 1908985.506984 DN l 88619.088025 LN l 100115.551775 DA l 169604.978477 LA l 108075.192713 PK l 119858.767539 BK l 137319.654444 Error £ 172 4907530.760911 Total T 255 13773405.071975  Time for "L" Mean Squares 656.854184 3630.313546 53883.950017 3548043.140625 2114116.000000 6661.960434 78328.849484 89642.855017 189174.253403 516.804444 37293.242296 111347.347656 1908985.506984 88619.088025 100115.551775 169604.978477 108075.192713 119858.767539 137319.654444 28532.155587  F 0.023022 0.127236 1.888534 124.352439 74.095909 0.233490 2.745283 3.141819 6.630212 0.018113 1.307060 3.902521 66.906459 3.105937 3.508867 5.944345 3.787838 4.200831 4.812803 1.000000  482.732720  1.000000  Table 6.7: Analysis of Variance summary for for main factor effects and nificant 2 factor interactions for planning time for "L" part.  Chapter 6. Experimental Results  99  suggests a strong negative effect of K when 5" is high, so we set K low.  Sensitivity for "L" Push Planner Again the quantitative factors of interest are D and L at their low settings and we perform a search over D G (25,35) and L G (25,35) with the noise conditions F, C and V high, the results of which are illustrated in figure 6.7. Examining the graph we again see the  Figure 6.7: Plot of orienting success versus fence angle discretization and angle limits. oscillation effect with respect to D. For the "L" however, the best value is D = 30, and in the same plot we can locate the peak at L = 31. We can then select the best planner options as steepness minimization S high and all other parameters low, with D = 30 and L =  31.  Using this best planner a plan was generated for the "L" (goal 6 — |). This plan g  was then tested under various noise conditions for the vertex, centre of mass and friction measurements as described for the triangle. Figure 6.8 summarizes the result of adding normally distributed perturbations to the indicated aspects of the part model.  Chapter 6. 
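The perturbation tests summarized in figures 6.6 and 6.8 amount to a Monte Carlo loop over noisy copies of the nominal part model. The Python sketch below illustrates only the shape of that loop, not the thesis code: the dictionary part representation, the simulate_plan argument standing in for the push simulator of section 6.2.2, and all other names are assumptions. Friction noise is handled analogously, with the standard deviation taken as a fraction of the nominal value.

import numpy as np

def perturb_part(part, sigma_frac, rng):
    """Return a noisy copy of the nominal part: vertices and centre of mass get
    zero-mean Gaussian noise whose standard deviation is sigma_frac times the
    maximum radius from the centre of mass to a vertex (cf. figures 6.6/6.8)."""
    verts = np.asarray(part["vertices"], dtype=float)
    com = np.asarray(part["com"], dtype=float)
    sigma = sigma_frac * np.max(np.linalg.norm(verts - com, axis=1))
    return {**part,
            "vertices": verts + rng.normal(0.0, sigma, verts.shape),
            "com": com + rng.normal(0.0, sigma, com.shape)}

def perturbed_success_rate(simulate_plan, plan, part, sigma_frac,
                           trials=1000, seed=0):
    """Estimate orienting success under feature noise: each trial draws a random
    initial orientation and a perturbed part, runs the fixed push plan through
    the supplied simulator, and checks whether the goal orientation is reached."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(trials):
        theta0 = rng.uniform(0.0, 2.0 * np.pi)
        if simulate_plan(plan, perturb_part(part, sigma_frac, rng), theta0):
            successes += 1
    return successes / trials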
Experimental Results  550  1  0  1  0.01  1  0.02  100  ' 0.03 0.04 0.05 0.06 0.07 0.08 Standard Deviation of Features as Proportion of Part Radius 1  1  1  1  1  1  1  0.09  0.1  Figure 6.8: The plots in the above graph represent the successes in orienting the "L" using the push planner, out of 1000 trials, given a normally distributed perturbation of model features with standard deviation indicated as fraction of the maximum distance from the centre of mass to a vertex. Friction p, perturbations had standard deviation equal to the proportion of the nominal friction value. Perturbation of vertices, Centre of mass and // were tested separately.  Chapter  6.  Experimental  101  Results  Source  Error  F C V D L S P N A B K FV CV VL VS LS FP DK SK NK e  Total  T  ANOVA - Success Rate for "G" dof Sum of Squares Mean Squares 1 0.285746 0.285746 1 0.936064 0.936064 1 4.854962 4.854962 1 0.336838 0.336838 1 0.153316 0.153316 1 1.959691 1.959691 1 0.010225 0.010225 1 0.000151 0.000151 1 0.003017 0.003017 1 0.012117 0.012117 1 0.034409 0.034409 1 0.215700 0.215700 1 0.985445 0.985445 1 0.183693 0.183693 1 1.028707 1.028707 1 0.402762 0.402762 1 0.284569 0.284569 1 0.232587 0.232587 1 0.377889 0.377889 1 0.196092 0.196092 171 11.514180 0.067334 24.008162 . 255 356.551289  F 4.243685 13.901728 72.102274 5.002460 2.276939 29.103864 0.151860 0.002237 0.044806 0.179959 0.511024 3.203419 14.635093 2.728065 15.277592 5.981522 4.226205 3.454211 5.612131 2.912214 1.000000 1.000000  Table 6.8: Analysis of Variance summary for for main factor effects and significant 2 factor interactions for proportion of success for "G" part for the push planner.  6.3.3  " G " Push Planner Results  The most noticeable difference between the "G" and the other parts is the extreme sensitivity to noise in part features (table 6.8). All of the noise factors F, C and V have significant negative effects and they interact with each other and with other factors in significant ways. The oddest effect is perhaps that noise in the friction measurement is aggravated by minimizing push distance (interaction F P ) . Perhaps shorter push distances imply steeper fence angles and, on average, these are more likely to depend critically on friction vectors.  Chapter 6. Experimental Results  ANOVA - Planning Time for "G Source dof Sum of Squares Mean Squares F 1 7.568230 7.568230 1 C 1066.906954 1066.906954 V 1 2086.110470 2086.110470 D 1 700326.639715 700326.639715 L 1 290780.451650 290780.451650 S 1 27466.501954 27466.501954 P 1 58172.917588 58172.917588 N 1 423.630730 423.630730 A 1 59838.842475 59838.842475 B 1 2799.181512 2799.181512 K 1 22911.552432 22911.552432 1 FC 16672.458604 16672.458604 DL 1 251321.533517 251321.533517 DS 1 26158.008057 26158.008057 1 LS 20533.397319 20533.397319 DP 1 51746.297354 51746.297354 LP 1 17162.364610 17162.364610 1 SP 11683.132840 11683.132840 DA 1 54510.089220 54510.089220 LA 1 27550.812744 27550.812744 DK 1 20508.328387 20508.328387 LK 1 15717.062293 15717.062293 AK 1 16126.089715 16126.089715 Error £ 168 573972.617630 3416.503676 Total T 255 2269542.495997 664.288030  F 0.002215 0.312280 0.610598 204.983429 85.110534 8.039360 17.027032 0.123995 17.514643 0.819312 6.706140 4.879977 73.561031 7.656368 6.010062 15.145980 5.023371 3.419617 15.954934 8.064037 6.002724 4.600335 4.720056 1.000000 1.000000  Table 6.9: Analysis of Variance summary for for main factor effects and significant 2 factor interactions for planning time for "G" part for the push planner.  Chapter  6.  
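As an aside, the effect calculations referred to in these discussions (and described in section 6.2.1) reduce to a few lines. The Python sketch below is a generic two-level illustration with invented names, not our analysis code; it follows the earlier description in which the AB interaction is the effect of B while A is high minus the effect of B while A is low.

import numpy as np

def main_effect(levels, response):
    """Main effect of a two-level factor: mean outcome at the high (+1)
    setting minus mean outcome at the low (-1) setting."""
    levels, response = np.asarray(levels), np.asarray(response)
    return response[levels > 0].mean() - response[levels < 0].mean()

def interaction_effect(a_levels, b_levels, response):
    """AB interaction as described in section 6.2.1: the effect of B computed
    over the runs where A is high, minus the effect of B where A is low."""
    a = np.asarray(a_levels)
    b = np.asarray(b_levels)
    r = np.asarray(response)
    return main_effect(b[a > 0], r[a > 0]) - main_effect(b[a < 0], r[a < 0])

The ANOVA tables report such contrasts as mean squares compared against the pooled error term through the F ratio.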
Experimental  Results  103  The strongest non-noise effect is steepness minimization S which is positive. D again is a significant negative effect and L , while not above the significance test is also negative. The positive interaction V L merely tells us that L has a negative effect for the low noise condition. The negative effect of VS indicates S"s positive effect at low noise. For LS,  detailed examination shows that the effect of 5" is positive for both high and low  levels of L, so we chose to set L low because of its moderately high individual effect. Although K might have a small positive effect at low N and D settings, it imposes a negative effect at high S, and since S must be high K was set low.  Sensitivity for "G" Push Planner In part because the other parts demonstrated significant interaction in D and L, we performed our optimization for D over D £ (25,35) and L £ (25,35), resulting in the surface illustrated in figure 6.9. Clearly push orienting with a success rate of about 10%  Figure 6.9: Plot of orienting success versus fence angle discretization and angle limits for "G" and push planner. is unsatisfactory, but upon examination of the outcomes in appendix A.3.1, we can see  Chapter 6. Experimental Results  104  that any treatment under which noise was applied achieved very poor success. The "G" with 20 vertices has too many points where it can fail in any given push since several alignments occur for each push action.  Figure 6.10: Extended plot of orienting success versus fence angle discretization and angle limits for "G" and push planner. We attempted to proceed with the search in D and L with the results indicated in figure 6.10, but still failed to achieve better than a 20% success rate. Finally in order to test plan robustness we chose one of the original treatment plans with S and P high (but without noise factors) which achieved perfect success. The results are shown in figure 6.11. The pronounced sensitivity of the "G" push orienter to noise is demonstrated by the fact that the rate of success falls below 10% with perturbation of the vertices with standard deviation of about 2mm.  6.4  Vision and Manipulation — Parameters and Outcomes  Table 6.10 indicates the actual values assigned to the parameters of the vision system for the treatments specified for our fractional factorial experiment. Camera position M,  Chapter 6. Experimental Results  T  0 I 0  1  1  0.01  1  0.02  105  1  1  1  1  1  1  1  1  1  1  1  1  0.03 0.04 0.05 0.06 0.07 0.08 Standard Deviation of Features as Proportion of Part Radius  r  I  0.09  1 0.1  Figure 6.11: The plots in the above graph represent the probability of successfully orienting the "G" using the push planner, given a normally distributed perturbation of model features with standard deviation indicated as fraction of the maximum dimension of the part. Friction /J, perturbations had standard deviation equal to the proportion of the nominal friction value. Perturbation of vertices, Centre of mass and fi were tested separately.  Chapter 6. Experimental Results  106 Parameter Levels Label High Value M tripod  Parameter Camera Position Disparity Range Sigma  D S  Calib. Accuracy Warp Accuracy Lighting  C A L  Z Constraint Discretization  Z P  Window Size  18 1.2 (8 x 8) 100-180 points 12 regions Special lighting (2 x 150W) ±5cm 20 360 10 20  T / L - Rho G - 2TT T / L - Orient. 
G - Gap  Q  Low Value fixed on robot frame 8 0.645 (4 x 4) 30-50 points 2 regions Standard room lighting ±20cm 8 60 4 8  Table 6.10: Parameter settings for vision factorial experiment. Camera positions are either fixed to the robot frame at the same level as the robot base, or closer to the work surface, but farther apart, on a pair of tripods. T, L and G are used to indicate the triangle, "L" and "G" parts for factors P and Q. calibration accuracy C and lighting L have been treated as qualitative factors in this experiment.  See appendix A.3.2 for tabulated treatments applied and probability of  success outcomes observed. Tables 6.11 through 6.16 contain the analysis of variance computations for the proportion of success and time required for visual part localization (in seconds) respectively, for the three parts under consideration. All main effect values are indicated but two factor interactions with F value less than F = 2.79 for a = 0.1, have been pooled. We a  are primarily concerned with rate of success, but again we include timing ANOVA tables as an empirical version of how various parameter settings contribute to vision system complexity.  6.4.1  Triangle Vision-based Orienter Results  The triangle localization using vision performed rather poorly, with a success rate of about 15 — 18%. This may have been because the system was more closely tuned to the  Chapter  6.  Experimental  Source M D S  c A L Z P Q CA ML Error Total  107  Results  £  T  A N O V A - Success Rate for Triangle dof Sum of Squares Mean Squares 1 0.000005 0.000005 1 0.000912 0.000912 1 0.000001 0.000001 0.000044 1 0.000044 1 0.000825 0.000825 1 0.005756 0.005756 1 0.001641 0.001641 1 0.007683 0.007683 1 0.002153 0.002153 1 0.003560 0.003560 1 0.010786 0.010786 0.071222 73 0.000976 0.104587 107.198384 127  F 0.005005 0.934771 0.000556 0.045043 0.845798 5.899456 1.682143 7.874653 2.207083 3.648443 11.055433 1.000000 1.000000  Table 6.11: Analysis of Variance summary for for main factor effects and significant 2 factor interactions for proportion of success for vision orienter for triangle part. "L", or because of the low contrast between the grey triangle and the blue work surface. Nonetheless the effects which test significant for the triangle are a negative effect for lighting L , and a positive effect for P the discretization of the p range for the Hough transform (table 6.11). The interaction CA is relatively small and the detailed calculation suggests that if A is low C should be low and if C is high A should be high. Since they are individually insignificant and they are both expensive calculations we will set them low. M L is a significant interaction of camera position and lighting. Our experience with the tripod camera position (M high) was quite poor. The viewpoints we chose, although closer to the work surface were more oblique and caused the parts to obscure themselves. The detailed calculation for the effect suggests that for M low, L should be low and this was the setting we chose.  Chapter 6. 
Experimental Results  Source M D S c A L Z P Q MA SL MZ AZ MQ LQ ZQ PQ Error Total  e  T  ANOVA - Extraction Time for Triangle dof Sum of Squares Mean Squares F 1 2051.788812 2605.899410 2605.899410 1 0.786931 0.999451 0.999451 1 1.968038 1.549561 1.968038 1 0.631282 0.801767 0.801767 1 10.533195 10.533195 8.293448 1 52.116861 52.116861 41.034889 1 10.278770 10.278770 8.093123 1 311.473844 311.473844 245.242984 1 308.207424 308.207424 242.671126 1 14.860634 14.860634 11.700713 1 4.187137 5.317925 5.317925 1 8.240361 8.240361 6.488155 1 18.276036 18.276036 14.389875 1 6.761198 6.761198 5.323517 1 4.201225 4.201225 3.307889 1 3.793764 4.818316 4.818316 1 72.531211 72.531211 57.108393 67 85.094167 1.270062 1.000000 127 3522.379833 2773.391600 1.000000  Table 6.12: Analysis of Variance summary for for main factor effects and sig nificant 2 factor interactions for extraction time for triangle part  Chapter 6. Experimental Results  109  Sensitivity for Triangle Vision System For optimizing the vision system for the triangle we chose the parameter settings with all factors low except P the p discretization parameter. For P we did a search around its high setting to determine the best value. T  1  1  1  The results of this search are presented 1  1  1  1  1  Success for Varied Rho Scale  25  20  I  3  15  10  j  i  20  25  i  i  i  i_  i  L  50  55  5 10  15  30 35 40 Discretization for Hough Rho  45  Figure 6.12: Plot of vision system success for triangle part versus values for p discretization for local Hough Transform. in figure 6.12 where we observe the best P value is 48. The corresponding growth in extraction time with D is plotted in figure 6.13.  6.4.2  "L" Vision-based Orienter Results  The significant main effects for the vision system for "L" are M , D, S and P. M is the large negative effect resulting from the poor tripod position (table 6.13). D the disparity range, S the a value, and P the size of the orientation template are all positive effects. The pronounced effects for M, S, and P allow us to set them independently of MS and MP. Examining LP and SL, we see that SL is positive and thus the very significant effect S votes for L positive as does the basic (though insignificant) L effect. The less significant interaction LP will not be considered in choosing L high. The overall positive effect of Q, given other parameter settings, is small, and high Q increases computation time so Q will be set low.  Chapter 6. Experimental Results 100  i  1  1  r-  1  1  1  1—  Extraction Umc for Varied Rho Scale •  30 35 40 Discretization for Hough Rho  Figure 6.13: Plot of time for triangle part visual localization (total for 120 trials) versus values for p discretization for local Hough Transform.  
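For reference, a ρ-quantized local Hough accumulation of the kind controlled by P might look like the Python sketch below. It is an illustration only, not the thesis implementation: it assumes a precomputed per-pixel orientation estimate (the role played by the Q x Q orientation template) and simply bins each edge pixel by its (θ, ρ) line parameters, with ρ divided into p_bins cells.

import numpy as np

def local_hough(edge_mask, orientation, p_bins=48, n_theta=64):
    """Accumulate edge pixels into a (theta, rho) table.  Each edge pixel votes
    only at its locally estimated line orientation, and rho = x*cos(t) + y*sin(t)
    is quantized into p_bins cells, the discretization varied as P above."""
    ys, xs = np.nonzero(edge_mask)
    h, w = edge_mask.shape
    rho_max = np.hypot(h, w)                      # largest possible |rho|
    acc = np.zeros((n_theta, p_bins), dtype=np.int32)
    for x, y in zip(xs, ys):
        t = orientation[y, x] % np.pi             # local line direction in [0, pi)
        ti = min(int(t / np.pi * n_theta), n_theta - 1)
        theta = (ti + 0.5) * np.pi / n_theta      # bin centre
        rho = x * np.cos(theta) + y * np.sin(theta)
        ri = int((rho + rho_max) / (2.0 * rho_max) * p_bins)
        acc[ti, min(max(ri, 0), p_bins - 1)] += 1
    return acc

Coarser ρ bins merge nearby parallel edges into a single peak, while finer bins separate them at the cost of fragmenting support, which is the trade-off the searches over P explore.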
Source M D S C A L Z P Q MS SL MP LP MQ LQ Error  PQ e  Total  T  ANOVA - Success Rate for "L" dof Sum of Squares Mean Squares 1 0.275808 0.275808 1 0.010786 0.010786 1 0.526381 0.526381 1 0.000396 0.000396 1 0.002153 0.002153 1 0.003738 0.003738 1 0.005317 0.005317 1 0.195964 0.195964 1 0.000287 0.000287 1 0.011724 0.011724 1 0.045942 0.045942 1 0.018568 0.018568 1 0.008477 0.008477 1 0.020630 0.020630 1 0.014063 0.014063 1 0.015495 0.015495 68 0.159748 0.002349 127 1.315477 559.960591  F 117.403444 4.591346 224.065112 0.168356 0.916606 1.590955 2.263457 83.416262 0.122168 4.990413 19.556348 7.903969 3.608459 8.781546 5.986232 6.595917 1.000000 1.000000  Table 6.13: Analysis of Variance summary for for main factor effects and significant 2 factor interactions for proportion of success for vision orienter for "L" part.  Chapter  6.  Experimental  Results  Source M D S C A L Z P Q MA SL MZ MP ZP Error  e  Total  T  ANOVA - Extraction Time for "L" dof Sum of Squares Mean Squares 1 1297.984661 1297.984661 1 3.314353 3.314353 1 65.833441 65.833441 1• 0.003656 0.003656 1 0.130678 0.130678 1 8.391034 8.391034 1 18.921739 18.921739 1 157.838372 157.838372 1 160.624250 160.624250 1 6.859214 6.859214 1 76.713626 76.713626 1 8.799960 8.799960 1 32.892755 32.892755 1 5.575727 5.575727 70 99.120636 1.416009 127 1943.004101 1372.169229  F 916.649949 2.340630 46.492244 0.002582 0.092286 5.925833 13.362724 111.467062 113.434477 4.844046 54.175942 6.214621 23.229198 3.937635 1.000000 1.000000  Table 6.14: Analysis of Variance summary for for main factor effects and nificant 2 factor interactions for extraction time for "L" part.  Chapter 6. Experimental Results  112  Sensitivity for "L" Vision System The parameter settings we have settled upon then are D, S, L and P high, and the other factors low. We can optimize on D the number of disparities, and P the p discretization factor, independently since our analysis indicates no significant interaction between them. Figure 6.14 illustrates our search over D € (15,35) for the best number of disparities. For the current set of coefficients the DATACUBE system could not search more than 35 disparity levels. 68  66  64  62  I  I  60  58  56  54 15  20  25 N u m b e r o f Disparities for Correlation Stereo  30  35  Figure 6.14: Plot of success for "L" part versus number of disparities for correlation stereo. The  best observed number of successes out of 120 trials was 66 for 33 disparities. It  is unclear why there were such erratic swings in the number of successes (or extraction time in figure 6.15) as the number of disparities increased, unless more disparities caused greater fragmentation of 3-D line segments. Having chosen the best observed disparity level D = 33, we will now search for the best possible p discretization value. The search over P £ (10,60) is illustrated in figure 6.16 The  best success rate (72 of 120) was found in the range P £ (45,47) so we set P = 45.  Figure 6.17 illustrates the increase in extraction costs for the range of P and D = 33.  Chapter 6. Experimental Results  1  20  25 Number of Disparities for Correlation Slcrco  30  Figure 6.15: Plot of time for "L" part localization (for 120 trials) versus number of disparities for correlation stereo.  Success for Rho Scale  10  1  1  1  15  20  25  1  i  i  30 35 40 Rho Discretization for 33 Disparities  i 45  i  i_  50  55  Figure 6.16: Plot of success for "L" part versus rho discretization for local Hough Transform for 33 disparities.  Chapter  6.  
Experimental  114  Results  P  Rho Discretization for 33 Disparities  Figure 6.17: Plot of time for "L" part visual localization (for 120 trials) versus rho discretization for local Hough Transform for 33 disparities.  6.4.3  "G" Vision-based Orienter Results  The "G" localization strategy is essentially a 2-D process of searching for a gap in an oval shape, it therefore performs very badly under the tripod camera positions (table 6.15). This is illustrated by the extreme negative effect of the camera factor M. For the "G" part the system prefers the smaller a S and structured lighting L. P, in this case the discretization of 2VT used to classify depth points with respect to the centroid, is a large positive effect. For the "G", Q is the number of histogram bins in the window over which a minimum number of hits is sought, and it has a large negative effect. PQ has a large positive effect, but we chose to look at this interaction by searching around high P and low Q as a result of their individual effects. Since all interactions including M tend to have near zero successes for M high, effects ML, MP and MQ simply reflect the main effects of L, P, and Q.  Sensitivity for "G" Vision System The selected parameter settings are L high, P high and S low. In order to save time we used a set of previously generated disparity images which also had D high. This is not inconsistent since D did have a small positive effect, although since it is insignificant  Chapter 6. Experimental Results  Source M D S  c A L Z P Q ML MP MQ Error Total  PQ e T  ANOVA - Success Rate for "G" dof Sum of Squares Mean Squares 1 6.261309 6.261309 1 0.010546 0.010546 1 0.055840 0.055840 1 0.000983 0.000983 1 0.005060 0.005060 1 0.079761 0.079761 1 0.001401 0.001401 1 0.414544 0.414544 1 0.289213 0.289213 1 0.083613 0.083613 1 0.403014 0.403014 1 0.279595 0.279595 1 0.723223 0.723223 71 1.028086 0.014480 127 9.636188 665.478486  F 432.408151 0.728344 3.856329 0.067915 0.349452 5.508290 0.096759 28.628584 19.973149 5.774333 27.832254 19.308928 49.945996 1.000000 1.000000  Table 6.15: Analysis of Variance summary for for main factor effects and significant 2 factor interactions for proportion of success for vision orienter for "G" part.  Chapter 6. Experimental Results  Source M D S c A L Z P Q DS ML ZP ZQ PQ Error  £  Total  T  ANOVA - Extraction Time for "G" dof Sum of Squares Mean Squares 1 4573.633186 4573.633186 1 3.597744 3.597744 1 1.995649 1.995649 1 13.757258 13.757258 1 3.760284 3.760284 1 386.164376 386.164376 1 229.452234 229.452234 1 39.623771 39.623771 1 241.778629 241.778629 1 112.354725 112.354725 1 208.614438 208.614438 1 268.407018 268.407018 1 446.944817 446.944817 1 1791.573802 1791.573802 70 2011.280873 28.732584 127 10332.938803 359.624419  F 159.179321 0.125215 0.069456 0.478803 0.130872 13.439946 7.985785 1.379054 8.414789 3.910359 7.260553 9.341555 15.555330 62.353383 1.000000 1.000000  Table 6.16: Analysis of Variance summary for for main factor effects and nificant 2 factor interactions for extraction time for "G" part.  Chapter 6. Experimental Results  117  Success for Gap/Discretization  Discretization of 2*Pi  Figure 6.18: Plotting success rates for ranges of the gap window versus discretization values for 2ir we can observe a pronounced cyclical trend, for the "G" vision system. it does not matter how we set it. Clearly from table 6.15 P and Q have strong effects and significant interactions. 
We did a search about P's high value and Q's low value to produce the surface illustrated in figure 6.18. The plot demonstrates that there is an obvious cyclical relationship between gap size and discretization. We performed a further search in the peak region of each contour to produce the detailed plots in figure 6.19. No doubt this recurring relationship is related to the fact that the gap in the actual part has a width of about 50 degrees. The system performs better in ranges of gap and discretization related to this 50 : 360 ratio. We found two scenarios where 100 successes out of 120 trials were achieved: at P — 120 and Q = 13 and at P = 72 and Q = 6. For our choice of a best parameter set, we set the parameters to the smaller values because  Chapter 6. Experimental Results of the cost associated with the search for the minimum gap window. Succcu for DISC-60  region  Suctcw for DISC-70 region  Succcw lor D1SC-IW) region  DiicreJialion of 2*Pi  Figure 6.19: The plots above are detailed regions of the full plot of figure 6.18, in the regions disc=60, 70, 90, 120, and 180, where peaks occurred in the original plot.  Chapter 7 Analysis of Results  In this chapter we review the results, collected in chapter 6, for the factorial experiments and sensitivity analysis used to determine parameter values for our two part orienting systems. We discuss the surprising variety of optimal values occurring for different parts; part information clearly manifests itself in these part specific parameterizations. Finally we review how £-complexity can be applied to these systems as a cost/performance metric.  7.1  Optimization of Information for Tasks  In chapter 6 we demonstrated the usefulness of treating robotic systems as complex parameterizations of tasks. We used factorial experiments to identify significant parameters and interactions among parameters, and then used sensitivity analysis to set the optimal values for those significant parameters. For the three parts studied (the triangle, "G" and "L") each of the push orienting and vision based orienting systems demonstrated different behaviours and different optimal parameters. For the push planner we saw that plans for the triangle were primarily sensitive to variations in centre of mass, and that minimizing plan steepness and eliminating a large range of steep angles from the plan search were beneficial to plan success (factors C, S and L). We examined the interaction of fence angle discretization D and L and determined optimal values for these parameters to be D = 26 and L = 35. The result is a plan relatively insensitive to friction and centre of mass variation, but sensitive to vertex perturbations with standard deviation greater than 4% of part dimension. Most of the information relevant to this task appears to be contained in parameters S, L and V.  For the "L" part under the push orienter the most significant parameters are cen-  119  Chapter 7. Analysis of Results  120  tre of mass variation C, fence angle discretization D, the limit on steep angles L and minimization of plan length N.  Again we optimized D and L to set their values at  D = 30 and L = 31. The "best" plan for orienting the "L" was insensitive to friction variation, but success dropped off steadily and immediately with perturbations (with standard deviations as small as 1% of part dimension) in centre of mass and vertices. The "G" fared badly under push orienting with extreme sensitivity to the noise parameters F, C, and V . 
Other significant parameters included D and S again, fence angle discretization and steepness minimization. For consistency a search for optimal D and L parameters was made, but since optimization is performed in the presence of noise; success rates were below 20% and therefore little progress could be made. A plot of the sensitivity of a "good" plan to noise showed a success rate falling to 10% for perturbations with standard deviation as small as 4% of part dimension in the vertices. The relevant information for this part and orienting method is captured by F, C, V, D and 5". The vision system also showed varied performance with the different parts. The triangle had rather poor success with the vision system, perhaps because it required the greatest positional accuracy for a successful grasp. The most significant parameters according to our analysis were the lighting conditions L, and the model parameter P controlling discretization of p for the local Hough transform. We performed a search on P to determine the optimal value P = 48. The vision system was moderately successful with the "L". The most significant parameters were camera placement M, number of disparities D, the a value for \/ G , S 2  :  and P the p discretization value. In the optimization process we did a search over the number of disparities searched to determine D = 33. Having fixed D we determined the optimal value P = 45. It is interesting that variation of 10% in measured part parameters can drive the push planner success rate for the "L" to the same 60% level of our optimized vision system. The vision system would be unlikely to notice such small perturbations since it uses a gross model. For the "G" the relevant parameters are M , 5, L, P and Q. Model parameters P and Q now indicate the discretization of 27r orientations and the gap width in histogram bins  Chapter 7. Analysis of Results  121  for the "G" respectively. A search in P and Q demonstrated the relationship between the width of the real gap as a fraction of the circle, giving cyclical peaks in P — Q —space. The best values selected were P = 72 and Q = 6. This optimized parameterization achieved a success rate of 83%. In comparison, a perturbation with standard deviation of 0.5% of part dimension in vertices results in a lower success rate for the push planner. Again the vision system is unlikely to be sensitive to such small perturbations in true part shape. Clearly each of the part orienting systems' interactions with the parts is different; reflecting different aspects of the information involved. This is true even for the basic manipulation operation of orienting a part. Although, for example, the triangle and the "L" would appear very similar under both the pushing and vision models used, analysis demonstrates that the performance of each system for each part and the parameters which exhibit significance are quite different. The push planner has perfect performance at orienting all three parts under noisefree conditions, however even small errors or variations in the centre of mass or vertices quickly degrade success rates. As parts become more complex this brittleness with respect to world variance from the predefined model becomes more pronounced. The vision system has much higher error rates, but is never tested with perfect data. At each step (calibration, epipolar approximation, edge detection, grouping, line fitting) more noise is introduced. 
Nevertheless the vision system appears able to produce performance comparable to the push planner for the "G". Whatever the claims for the robustness of push orienting, the results of the experiments described here reflect the problem of models that cannot easily account for the true degree of variance in the world. That is, model-based systems must account for variations in the world, and the greater the complexity of the model, as demonstrated by the "G" under pushing, the more difficult it is to account for all possible variations. Our vision models were not very precise, and so we would expect them to tolerate an equivalent degree of uncertainty in the real parts.

The most significant lesson learned from this analysis is that extensive analysis of actual robotic implementations is a necessary step toward understanding such systems. This type of analysis reveals aspects of the implementation which cannot be predicted a priori. It also clearly defines what information is necessary for a particular implementation, and what the strengths and weaknesses of these implementations are.

7.2  Cost/performance Analysis

For our metric on system cost versus performance we draw on the notation and model of computation used in information-based complexity. Information-based complexity integrates the costs of using noisy, partial, priced information in approximation tasks into an overall complexity measure computed with respect to the error ε achieved in the approximation. Many tasks in sensing and robotics can be viewed as attempts to approximate some ideal task function to within acceptable error limits, given partial, noisy information about the world. The lines between sensing and action become blurred when we think of measurements as the result of instruments interacting with the physical world, as in active vision and tactile sensing. In the following discussion we apply an analysis based on the ε-complexity formalism to our part orienting systems as a descriptive tool that allows a comparison of their performance. We observe, however, that having attempted to stretch other formalisms such as "part entropy", "information invariants" and decision-theoretic models to describe and compare our part-orienting systems, information-based complexity seems the most promising option for making formal statements about the relative complexity of robotic tasks.

7.2.1  The Formalism

To describe a problem in the ε-complexity formalism we require a precise problem formulation, a specification of allowable information, and a model of computation [68, 72]. Problems are defined with respect to a solution operator S, for some set F and normed linear space G:

S : F → G.

The elements of F are referred to as problem elements. For our part orienting problems F = {p_i}, where p_i specifies the properties of a particular instantiation of a part, including such things as precise shape, centre of mass, and initial world location. The problem is to transform any part from the range of possible parts, in the range of possible initial positions, to a desired final position θ_g ∈ G.

The information available to our two systems is partial and noisy. In general, information is described by

y = N(p),   N(p) = [L_1(p), L_2(p), ..., L_n(p)],

where the L_i are information operations.
Generally N(p) is an equivalence class: some range of p values are mapped to the same y due to the limits of measurement or arithmetic precision. An approximation U is a pair (φ, N), where φ is the computation used to approximate S,

φ : N(F) → G.

For some p, the absolute error of U is

e(U, p) = ‖S(p) - φ(N(p))‖.

If the absolute error criterion is being applied, then to be an ε-approximation we require ‖S(p) - φ(N(p))‖ ≤ ε for all p ∈ F. The cost of an approximation at a point p is the sum of its information cost and its combinatoric cost:

cost(U, p) = cost(N, p) + cost(φ, N(p)).

The model of computation generally used in the ε-complexity formalism attributes a cost c_1 > 0 to each L_i and a cost of 1 to each arithmetic or comparison operation. For our purposes we will also include a constant cost c_2 for each robot arm motion required by the orienting system. This is a useful way for us to attribute the costs of each method, but is probably oversimplified for analysing the relative performance of robotic programs in detail.

Finally, there are three settings in which we can examine complexity: worst case, average case and probabilistic. We will only concern ourselves with the worst and average cases. For the worst case setting, error and cost are defined by

e(U) = sup_{p ∈ F} ‖S(p) - φ(N(p))‖,   cost(U) = sup_{p ∈ F} cost(U, p).

In other words, the overall error is the largest error over all parts in F, and the cost is the greatest cost over all parts in F. For the average case setting a probability distribution μ is assumed to be known a priori on the set F, and error and cost are defined by

e(U) = [ ∫_F ‖S(p) - φ(N(p))‖² μ(dp) ]^{1/2},   cost(U) = ∫_F cost(U, p) μ(dp).

7.2.2  Vision Orienting

The vision system used by our orienter U_1 can be described in the ε-complexity framework as using a single information operation N(p) = {L_0(p)}: the image of disparity-marked edge pixels delivered by the DATACUBE system. These are essentially noisy data, which are operated on to generate an approximation of part location. The information cost per approximation is therefore c_1, as defined above.

The computational cost of the preprocessing for our vision system is significant. First a set of calibration data points is collected, using the robot arm as part of the measurement system. This requires about 200 robot motions to collect up to 180 calibration points. Extracting the marker on the arm from the image pair requires a search of up to 512 x 480 pixels in each image of a stereo pair. For C calibration points and image size M, collecting the calibration data costs

calibration cost = C · (c_2 + 2 · M),   (7.1)

where c_2 is the cost of moving the arm (as defined above).

The next step in starting up the vision system is to approximate the epipolar warp. Histograms of image locations and disparities are generated by sampling a workspace region and projecting points into the images. Singular value decomposition is used to approximate the epipolar warp for successively smaller binary subdivisions of the image, until the error is sufficiently low or the upper bound b on regions is reached. If the upper bound is reached, all of the approximations in the binary tree have been computed, giving us a = 2^(log_2(b) + 1) - 1 (that is, 2b - 1) approximations, each computed at cost s. Finally the warp window is expanded by computing the approximated source pixel for every pixel in the left image window.
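The recursive subdivision just described might be organized as in the following Python sketch. It is illustrative only: fit_warp stands in for the SVD-based fit of a warp to the sampled correspondences falling inside a region, returning the fitted warp and its residual, and the rectangle representation and budget bookkeeping are assumptions rather than the thesis code.

def split_region(region):
    """Split an (x0, y0, x1, y1) rectangle in half along its longer side."""
    x0, y0, x1, y1 = region
    if (x1 - x0) >= (y1 - y0):
        xm = 0.5 * (x0 + x1)
        return (x0, y0, xm, y1), (xm, y0, x1, y1)
    ym = 0.5 * (y0 + y1)
    return (x0, y0, x1, ym), (x0, ym, x1, y1)

def approximate_warp(samples, region, budget, tol, fit_warp):
    """Fit a warp over `region`; if the residual exceeds `tol` and the region
    budget (the bound b) allows it, subdivide and fit the halves recursively.
    Returns a list of (region, warp) pairs covering the image window."""
    warp, residual = fit_warp(samples, region)
    if residual <= tol or budget < 2:
        return [(region, warp)]
    left, right = split_region(region)
    half = budget // 2
    return (approximate_warp(samples, left, half, tol, fit_warp) +
            approximate_warp(samples, right, half, tol, fit_warp))

The number of fits performed in this way is what the a · s term in the precomputation cost below accounts for.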
For workspace sampling S_x · S_y · S_z and an image window of M pixels, the complexity of this precomputation is

warp approx. cost = S_x · S_y · S_z + a · s + M.   (7.2)

Finally, consider the more complicated extraction of the right-angle part position (versus the "G"): it requires processing d ≪ M disparity-marked edge pixels, grouping them into 2-D lines and fitting 3-D lines to them. This requires an initial pass through the image window, an application of the Q x Q orientation template for each edgel, and a second pass through the edge list for grouping. The g groups of pixels are then each fit with a 3-D line at cost f, and finally each line is compared with the others to find a perpendicular pair. The combinatoric cost of this process is

cost(φ, N(p)) = M + d · Q · Q + d + g · f + g · g.   (7.3)

The total cost for the part orienting system over a lifetime of orienting P parts is

cost(U_1) = C · (c_2 + 2 · M) + S_x · S_y · S_z + a · s + M + P · (M + d · Q · Q + d + g · f + g · g + c_1 + 2 · c_2),   (7.4)

where, for each part oriented, we have information cost c_1 for receiving a disparity image, and we have added a cost of 2 · c_2 to represent the two motions required to pick and place the part once it is localized.

Error in the vision system comes from noise in the process N(p) rather than from a lack of discrimination, and is therefore difficult to evaluate exactly. Figures 7.1 and 7.2 illustrate the observed position (L_2-norm) and orientation localization errors for the vision system's optimal "G" orienting parameterization. Experimentally, then, we conclude that the average case position error is e_avg-pos = 0.84cm, the average case orientation error is e_avg-or = 0.048rad, and the worst case errors are e_wor-pos = 5.26cm and e_wor-or = 0.98rad respectively.

Figure 7.1: Histogram of position error (Euclidean distance) for the vision system with the "G" part over 120 images/part positions. The average X error was 0.55cm, the average Y error 0.64cm; the maximum error was 5.26cm and the minimum 0.088cm.
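To see how equation (7.4) behaves, it can simply be evaluated over a range of lifetimes P. In the Python sketch below every constant is a placeholder chosen only to make the shape of the expression visible; none of them are measured values from our system, and the function name is ours.

def vision_lifetime_cost(P, M=512 * 480, C=180, c1=100.0, c2=1000.0,
                         Sx=20, Sy=20, Sz=5, a=23, s=500,
                         d=2000, Q=10, g=12, f=50):
    """Equation (7.4): one-time calibration and warp-approximation costs plus a
    per-part term for sensing, extraction and the two pick/place motions.
    Every constant here is an illustrative placeholder, not a measured value."""
    calibration = C * (c2 + 2 * M)            # equation (7.1)
    warp = Sx * Sy * Sz + a * s + M           # equation (7.2)
    per_part = M + d * Q * Q + d + g * f + g * g + c1 + 2 * c2
    return calibration + warp + P * per_part

# The fixed start-up cost is amortized as the number of parts grows:
for parts in (1, 100, 10000):
    print(parts, vision_lifetime_cost(parts))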
We will apply our analysis to this mechanical computation. Let us examine U = (<f> , N ) the push planner implementation of part orienting. Let 2  2  2  each element of F be a vector representing a part, including the Uj = (x,y) position of each part vertex, the C = (x,y) position of the centre of mass, the coefficient of friction between the part and the fence / , and the initial part orientation 0,-. p € F then is an instantiation of an incoming part to the push planner. G is the space of outcome orientations for the part. S is an ideal plan which maps a part with properties p to the goal orientation 6 . g  The mapping <f> is a push plan precomputed for some po € F. We assume that all the possible parts in F have vertices V{, centre of mass C and coefficient of friction / , normally distributed with mean zero (N(0,o)), about the corresponding features of p . 0  In other words consider all incoming parts to be in the same equivalence class N(p ) with 0  PoNow as we have argued above we will assume that each push is an information operation Li. The cardinality n(p) of N(p) is then the same as the length of the plan </>. The push orienter is a fixed approximator to the orienting problem for a part p and desired 0  Chapter  7.  Analysis  of Results  129  outcome angle 9 , i.e. we assign a fixed cost to each physical push operation for a fixed g  length plan and thus the cost of N(p) is fixed. Let $i = g(9i-i,oci,p), be the decision computation which maps #_i to  given that  t  fence angle OL{ is applied to part p in state  A push sequence is then a cascade of  these transitions of the form 0/ = g(g(-g(0o,a:o,p),...),o:j,p). As we have seen in the experiments of chapter 6, parts which vary from those on which plans are based cause errors in outcome orientations because they rotate in unexpected ways at critical points (see figure 4.3). For any particular plan <f> the overall worst case error will depend on the worst outcome 9j of the possible bad rotations for the set of possible parts F. As above e (U)  =  cost(U)  = sup  wor  sup \\S(p)-ct>(N(p))\l peF  p£F  cost(U,p).  Similarly the average case error is defined *av (u) =  y/S \\s{p)-<KN(p))\\M<V)  B  cost(U)  F  = f  cost(U,p)u,(df),  F  where fx is the same distribution Af(0, CF) indicated for part variation. Since our push planner always executes the same plan independent of the particular pi, the cost for both the worst and average cases is fixed at c • k for plan of length k. 2  There are essentially two parts to the push planner: the precomputed planning stage which is lengthy and has dominant combinatoric complexity, and the actual push execution stage where straight line push motions are generated via spline approximation. The initial stage of the planner sets up the configuration maps and code sets which form the basic data structure upon which the planner operates. Essentially this requires testing the outcome of each of the available discretized fence angles for each of a fixed discretization of the input part angles q. The number of fence angles is B — ^ , where 7r  2L  )  L and D are the steepness limits and fence discretization factors tested in chapter 6. This preprocessing requires on the order of q • B operations. The planner performs recursive  Chapter 7. Analysis of Results  130  800  700 -  -1  0 Push Orientation Error (rad)  1  Figure 7.3: Histogram of orientation error for push plan with "G" part for 1000 trials on part positions, and 0.5% standard deviation perturbation in all part features. 
The planner performs a recursive tree search to fixed depth d, with branching B, the number of fence angles. The planner then requires on the order of B^d operations.

At each step of the action sequence, a sequence of spline approximations is required to generate the straight line motion required for push planning. We will lump this cost, along with the other real costs of generating motion in the physical world, into the constant c_2. Thus, for the push planner we define the total cost over the lifetime of the system as

cost(U_2) = q · B + B^k + P · k · c_2,    (7.5)

where P is the total number of parts to be oriented by the system. Empirical bounds on the term O(q · B + B^k) can be observed in the time outcome tables of appendix A.

In determining e we look back to figures 6.6, 6.8 and 6.11. The perturbations indicated in these graphs essentially tell us the size of the family of parts in F which are in the equivalence class for p_0 (or e ∝ % of part dimension). For the worst case, e_wor will tend to go to the resting position for p_i on the final fence which is "furthest" from θ_g almost immediately, and is bounded by this value. Figure 7.3 shows a histogram of outcome orientation errors for the "G" with feature perturbations of standard deviation 0.5% of part radius (≈ 0.03 cm). We can see that distinct peaks corresponding to resting positions form. The observed worst case error is e_wor = 0.52 rad. The average case is related to the falling success rates illustrated in the indicated tables. The observed average error was e_avg = 0.1 rad.

7.2.4 Summary

Equations 7.4 and 7.5 give us a concrete cost comparison for the particular implemented push planning and vision systems. Both systems have a clear cost associated with precomputation or calibration. For the vision system this computation is dominated by the calibration sequence. With the push planner, the tree traversal associated with finding plans dominates.

Both systems also have a cost associated with each individual part orientation. For the push planner the major component is the cost c_2 of each push motion. For the vision system, sensing and combinatoric costs could approach the magnitude of the pick and place actions c_2. When we include the constants c_1 and c_2 in our expressions for the cost of our part orienting systems, we should point out that c_1 is at least an order of magnitude more time consuming than a mathematical computation counted by the combinatoric complexity expression; and in turn c_2 is correspondingly more expensive than c_1. With this in mind, when we examine equations 7.4 and 7.5, we can initially see that if the two systems had the same error e, the vision system would be much more efficient. However, only for the "G" part have we demonstrated that the vision system can approach the average error of the push orienter.
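To make this comparison concrete, the short Python sketch below evaluates the two lifetime cost expressions (7.4) and (7.5) as a function of the number of parts P. The symbols follow the equations above, but every numeric default is a placeholder chosen only to show the mechanics of the comparison; none are the measured constants of the implemented systems.

def vision_cost(P, C=30, M=64 * 64, Sx=20, Sy=20, Sz=10, a=4, s=100,
                d=200, Q=8, g=10, f=50, c1=10_000, c2=100_000):
    """Lifetime cost of the vision orienter, following equation (7.4).
    All numeric defaults are illustrative placeholders, not measurements."""
    precompute = C * (c2 + 2 * M) + Sx * Sy * Sz + a * s + M
    per_part = M + d * Q * Q + d + g * f + g * g + c1 + 2 * c2
    return precompute + P * per_part

def push_cost(P, q=360, B=30, k=6, c2=100_000):
    """Lifetime cost of the push orienter, following equation (7.5)."""
    return q * B + B ** k + P * k * c2

if __name__ == "__main__":
    for P in (1, 10, 100, 1000):
        print(f"P={P:5d}  vision={vision_cost(P):>15,}  push={push_cost(P):>15,}")

Because c_2 (a physical motion) dominates both per-part terms, how the totals scale with P is exactly the trade-off the summary above describes: under equal error, the single pick-and-place of the vision system undercuts the k pushes of the push plan.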
Chapter 8

Conclusion

8.1 Information for Tasks

Information used in robotic tasks can take many forms and be encoded not only in the implementation of algorithms but in engineered environments and sensing strategies. To determine what measurements or information are sufficient for a task implementation we must determine "what to model or sense", "how precisely" and "how accurately". In this thesis we have put forward a view of complex systems for robotic tasks as parameterizations of task functions. These parameters encode task information, usually in the form of features or combined measurements. In general, information is extracted by calibrated measurement of the physical world.

For a particular robotic task goal, there should be some fundamental underlying information requirements. Comparing different implementations of the same goal can give us insight into these requirements. The main contribution of this thesis is a detailed method for comparing and evaluating disparate robotic systems, which clearly demonstrates how even small variations in the goal of a system can radically vary the focus of the information required by the system.

8.1.1 A Methodology for Comparing and Evaluating Task Implementations

Information in robotic tasks is only valuable with respect to task performance. By examining system performance we can compare disparate systems. We have drawn from a number of disciplines, including design of experiments, analysis of variance, sensitivity analysis and information-based complexity, to devise a methodology for evaluating and comparing complex robotic systems in this way. The method we have described and demonstrated in this thesis has essentially four parts:

1. Determine the task parameterization.

2. Use statistical factorial experiments to identify system parameters which are critical to task performance, allowing resources to be allocated appropriately.

3. Perform sensitivity analysis to determine the optimal choice of parameter values.

4. Use cost-performance metrics for comparison of sensori-computational systems.

These steps were applied to two very different part orienting systems, one based on push plans, the other a vision based pick and place system. Factorial experiments allowed us to identify critical parameters and parameter interactions, and sensitivity analysis helped us set appropriate parameter values with respect to orienting success. We have shown that this methodology is useful in extracting the critical aspects of task information for a particular implementation, and in providing a well defined structure in which to analyse and compare differing implementations.

8.2 Comparison of Task Implementations

Our observation in performing our analysis of the performance of the two orienting systems over a set of three parts is that even such a simple task generated diverse responses in terms of which parameters and interactions were significant. Even within the same orienting system, responses were diverse. Clearly the individual part information and its inherent complexity play a major role in these systems.

Experimental systems in vision and robotics have rarely been rigorously tested and documented. An important conclusion of this thesis is that it is vacuous to propose a robot solution without a clear analysis of its system parameters and how they interact with each other, and with task success. This is especially true as the task becomes more complicated. In the analysis of this thesis we have clearly demonstrated that it is these parameterizations that define the information content of the task. These results point the way toward a theory of practical and economic application of robotics in manufacturing and other industries. In other words, we are developing a quantitative science of robotics.

8.3 Future Work

The analysis presented in this thesis can and should be applied to more complex systems.
We hypothesize that such an analysis of complex robotic implementations can reveal many unexpected sources or encodings of task information, which can yield more effective and well understood systems. One way of approaching large system implementations is to use the tools presented in this thesis to guide system design and optimization. We would also like to examine other aspects of optimally focusing information resources for particular robots and tasks in areas such as attending, navigating, windowing and tracking.  Bibliography  [1] James H. Abbs and Carolee J. Winstein. Functional Contributions of Rapid Automatic Sensory Based Adjustments to Motor Output. In M. Jeannerod, editor, Attention and Performance XIII: Motor Representation and Control, pages 627-652.  Lawrence Erlbaum and Associates, Hillsdale, N.J., 1990. [2] Eric W. Aboaf, Steven M. Drucker, and Christopher G. Atkeson. Task-Level Robot Learning: Juggling a Tennis Ball More Accurately. In Proceedings 1989 IEEE International Conference on Robotics and Automation, volume 3, pages 1290-1295,  Scottsdale, Arizona, May 1989. [3] Gerry B. Andeen. Toward a science of assembly. In Proceedings of the International Workshop on Some Some Critical Issues in Robotics, Singapore, 1995. US Army  Research Office (AMC) - Far East. [4] Nicholas Ayache and Charles Hansen. Rectification of images for binocular and trinocular stereovision. In 1988 Int. Conf. on Pattern Recognition, volume 1, pages 11-16, Rome, Italy, October 1988. [5] Ruzena Bajcsy. Active perception. Proceedings of the IEEE, 76(8):996-1005, August 1988. [6] Manuel Blum and Dexter Kozen. On the Power of the Compass. In 19th Annual Symposium on Foundations of Computer Science (FOCS), pages 132-142, Ann Ar-  bor, Michigan, October 16-18 1978. [7] George E. P. Box, William G. Hunter, and J. Stuart Hunter. Statistics for Experimenters: An Introduction to Design, Data Analysis and Model Building. John Wiley  and Sons, New York, N.Y., 1978. [8] Ronen I. Brafman, Jean-Claude Latombe, and Yoav Shoham. Towards knowledgelevel analysis of motion planning. In Proceedings of the 11th National Conference on Al (AAAI93), pages 670-675, Washington, DC, July 1993. [9] Rodney A. Brooks. Intelligence without reason. Technical Report A l Memo 1293, MIT A l Lab, April 1991. [10] Randy C. Brost. Automatic grasp planing in the presence of uncertainty. International Journal of Robotics Research, 7(1):3—17, February 1988. [11] Randy C. Brost. Analysis and Planning of Planar Manipulation Tasks. PhD the-  sis, Carnegie Mellon University, Pittsburgh, PA., January 1991. Appears as CMU Computer Science Tech Report CMU-CS-91-149. 135  [12] H. Bulthoff, J. Little, and T. Poggio. A Parallel Algorithm for Real-time Computation of Optical Flow. Nature, 337(6207):549-553, February 1989. [13] Michael Edward Caine. The Design of Shape From Motion Constraints. PhD thesis,  Massachusetts Institute of Technology, Cambridge, Mass, September 1993. Appears as MIT A l Lab Tech Report 1425. [14] John F. Canny and Kenneth Y. Goldberg. Robotics.  A "RISC" Paradigm for Industrial  In 1994 IEEE International Conference on Robotics and Automation  (ICRA94), volume 3, pages 1951-1957, San Diego, CA, May 1994. [15] T. L. Dean and M. P. Wellman. Planning and Control. Morgan-Kaufmann Publishers, San Mateo, Cal., 1991. [16] Aloke Dey. Orthogonal Fractional Factorial Designs. Halsted Press, John Wiley and Sons, New York, NY, 1985. [17] Bruce Donald and Jim Jennings. 
Sensor Interpretation and Task Directed Planning Using Perceptual Equivalence Classes. In IEEE International Conference on Robotics and Automation, Anaheim, Ca., 1991. [18] Bruce Randall Donald. On information invariants in robotics. Technical Report TR 93-1341, Cornell University, Ithaca, New York, April 1993. [19] R.O. Duda and P.E. Hart. Use of the Hough Transformation To Detect Lines and Curves in Pictures. Communications of the ACM, 15(1):11—15, January 1972. [20] Brian Scott Eberman. Contact Sensing: A Sequential Decision Approach to Sensing  Manipulation Contact Features. PhD thesis, Massachusetts Institute of Technology, Cambridge, Mass., February 1995. [21] Michael Erdmann. Action subservient sensing and design. In 1993 IEEE International Conference on Robotics and Automation, volume 2, pages 592-598, Atlanta, Georgia, May 2-6, 1993. [22] Michael Erdmann. Understanding action and sensing by designing action-based sensors. In Proceedings of the IEEE ICR A Workshop on Design, San Diego, Ca.,  May 1994. [23] John T. Feddema, OS. George Lee, and Owen Robert Mitchell. Weighted Selection of Image Features for Resolved Rate Visual Feedback Control. IEEE Transactions on Robotics and Automation, 7(1):31—47, February 1991. [24] R. James Firby. Task directed sensing. In Sensor Fusion II: Human and Machine Strategies, volume SPIE 1198, pages 480-489, 1989. [25] Gene F. Franklin and J. David Powell. Digital Control of Dynamic Systems. AddisonWesley Publishing Co., Reading, Mass., 1980. 136  [26] K.S. Fu, R.C. Gonzalez, and C.S.G. Lee. Robotics: Control, Sensing, Vision and  Intelligence. McGraw-Hill Book Company, New York, N.Y., 1987. [27] Kenneth Y. Goldberg and Matthew T. Mason. Bayesian grasping. In Proceedings of the International Conference on Robotics and Automation, pages 1264-1269. IEEE,  1990. [28] Richard Hartley and Rajiv Gupta. Computing matched-epipolar projections. In Proceedings 1993 Conf on Computer Vision and Pattern Recognition (CVPR93),  pages 549-555, New York, NY, 1993. [29] Berthold Klaus Paul Horn. Robot Vision. The MIT Press, Cambridge, Mass., 1986. [30] Marc Jeannerod and B. Biguer. Visuomotor Mechanisms in Reaching Within Extrapersonal Space. In D.J. Ingle, M.A. Goodale, and R.J.W. Mansfield, editors, Analysis of Visual Behavior. MIT Press, Cambridge, Mass., 1982. [31] William J. Karnavas, Paul J. Sanchez, and A. Terry Bahill. Sensitivity analyses of continuous and discrete systems in the time and frequency domains. IEEE Transactions on Systems, Man and Cybernetics, 23(2):488-501, March/April 1993.  [32] P. Kazanzides, J. Zuhars, B. Mittlestadt, P. Cain, F. Smith, L. Rose, and B. Musits. Architecture of a surgical robot. In Proceedings of the 1992 IEEE International Conference on Systems, Man and Cybernetics, volume 2, pages 1624-9, Chicago, II., October 1992. [33] Oscar Kempthorne. The Design and Analysis of Experiments. John Wiley and Sons, New York, N.Y., 1952. [34] Jak Kirman, Kenneth Basye, and Thomas Dean. Sensor abstraction for control of navigation. In Proceedings of the 1991 IEEE International Conference on Robotics  and Automation (ICRA'91), pages 2812-2817, Sacramento, Ca., April 1991. [35] Anthony Lazanas and Jean-Claude Latombe. Landmark-based robot navigation. In Proceedings of the 10th National Conference on Al (AAAI'92), pages 816-822, San  Jose, Cal., July 1992. [36] Susan J. Lederman and Roberta L. Klatzky. Hand Movements: A Window into Haptic Object Recognition. Cognitive Psychology, 19:342-368, 1987. [37] D.G. Lowe. 
Three-Dimensional Object Recognition from Single Two Dimensional Images. Artificial Intelligence, 31:355-395, 1987. [38] D. M. MacKay. The significance of 'feature sensitivity'. In Models of the Visual Cortex. John Wiley and Sons Ltd, 1985.  137  [39] A.K. Mackworth. Constraints, Descriptions and Domain Mappings in Computational Vision. In O.J. Braddock and A. C. Sleigh, editors, Physical and Biological Processing of Images, pages 33-40. Springer Verlag, Berlin, 1983. [40] A.K. Mackworth. Adequacy Criteria for Visual Knowledge Representation. Technical Report 87-4, Dept. of Computer Science, U.B.C., Vancouver, B.C. Canada, January 1987. [41] Matthew T. Mason. Mechanics and Planning of Manipulator Pushing Operations. The International Journal of Robotics Research, 5(3):52—71, 1986.  [42] Matthew T. Mason. Kicking the sensing habit. Al Magazine, pages 59-59, Spring 1993. [43] Maja Mataric. Integration of representation into goal-driven behavior-based robots. IEEE Transactions on Robotics and Automation, 8(3):304-312, June 1992.  [44] William Mendenhall, Richard L. Scheaffer, and Dennis D. Wackerly. Mathematical Statistics with Applications. Duxbury Press, Boston, Mass., 1981. [45] P. S. Moharir. Pattern-recognition Transforms. Electronic Circuits and Systems Series. Research Studies Press Ltd. (John Wiley and Sons Inc.), Somerset, England, 1992. [46] Jane Mulligan. Fast calibrated stereo vision for manipulation. In Proceedings 1996 IEEE International Conference on Robotics and Automation (ICRA '96), volume 3,  pages 2326-2331, Minneapolis, Minnesota, April 1996. [47] Jane Mulligan. Fast calibrated stereo vision for manipulation. Journal of Real-time Imaging, 1996. To Appear. [48] B.K. Natarajan. On Planning Assemblies. In Proceedings of the 4th Annual Symposium on Computational Geometry, pages 299-308, Urbana Champaign, 111., 1988. [49] Edward W. Packel and J. F. Traub. Information-based complexity. Nature, 328:2933, July 1987. [50] Michael A. Peshkin and Arthur C. Sanderson.  The motion of a pushed, sliding  workpiece. IEEE Journal of Robotics and Automation, 4(6):569-598, 1988.  [51] Michael A. Peshkin and Arthur C. Sanderson. Planning robotic manipulation strategies for workpieces that slide. IEEE Journal of Robotics and Automation, 4(5):524531, 1988. [52] Semyon Rabinovich. Measurement Errors: Theory and Practice. American Institute of Physics, New York, NY, 1993.  138  W. Richards and A. Jepson. What makes a good feature? Technical Report A l Memo No. 1356/CBIP Paper No. 72, Massachusetts Institute of Technology Artificial Intelligence Laboratory, Cambridge, Mass., April 1992. Steven Roman. Coding and Information Theory, volume 134 of Graduate Texts in Mathematics. Springer-Verlag, New York, N.Y., 1992. Ranjit K. Roy. A Primer on the Taguchi Method. Competitive Manufacturing Series. Van Nostrand Reinhold, New York,- NY, 1990. Jagdish S. Rustagi. Introduction to Statistical Methods, volume I. Rowman and Allanheld, Totowa, New Jersey, 1984. Arthur C. Sanderson. Parts entropy for robotic assembly system design. In International Conference on Robotics, pages 600-608, Atlanta, Georgia, May 13-15 1984. Lee W. Schruben and V. James Cogliano. An experimental procedure for simulation response surface model identification. Communications of the ACM, 30(8):716-730, August 1987. Larry S. Shapiro and J. Michael Brady. A modal approach to feature-based correspondence. In Proceedings of the British Machine Vision Conference, pages 78-85, Glasgow, UK, 1991. Steven B. Skaar, William H. 
Brockman, and R. Hanson. Camera Space Manipulation. The International Journal of Robotics Research, 6(4):20-32, 1987. Jack Snoeyink and Jorge Stolfi. Objects that cannot be taken apart with two hands. In Proceedings of the 9th ACM Symposium on Computational Geometry, 1993. Fridtjof Stein and Gerard Medioni. Structural indexing: Efficient 2-d object recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 14(12):1198-1204, December 1992. Anne Penfold Street and Deborah J. Street. Combinatorics of Experimental Design. Clarendon Press, Oxford, UK, 1987. P. H. Sydenham, N. H. Hancock, and R. Thorn. Introduction to Measurement Science and Engineering. Wiley Series in Measurement Science and Technology. John Wiley and Sons, Chichester, UK, 1989. Konstantinos A. Tarabanis, Peter K. Allen, and Roger Y. Tsai. A survey of sensor planning in computer vision. IEEE Transactions on Robotics and Automation, 11(1):86-104, February 1995. Albert Tarantola. Inverse Problem Theory: Methods for Data Fitting and Model Parameter Estimation. Elsevier, Amsterdam, 1987. 139  [67] Russell H. Taylor. Planning and execution of straight line manipulator trajectories. In Robot Motion: Planning and Control, pages 265-286. MIT Press, Cambridge, Mass., 1982. [68] J. F. Traub, G. W. Wasilkowski, and H. Wozniakowski. Information-Based Complexity. Computer Science and Scientific Computing. Academic Press Inc., San Diego, CA, 1988. [69] Anne Treisman. Preattentive Processing in Vision. Computer Vision, Graphics and Image Processing, 31:156-177, 1985.  [70] Roger Y. Tsai. A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv cameras and lenses. IEEE Journal of Robotics and Automation, RA-3(4):323-344, August 1987. [71] Matthew A. Turk and Alex P. Pentland: Face Recognition using Eigenfaces. In Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision  and Pattern Recognition CVPR91, pages 586-591, Lahaina, Maui, Hawaii, June 1991. [72] Arthur G. Werschulz. The Computational Complexity of Differential and Integral  Equations. Oxford Mathemtical Monographs. Oxford University Press, Oxford, UK, 1991. [73] Peter Whaite and Frank Ferrie. Autonomous exploration: Driven by uncertainty. Technical Report TR-CIM-93-17, McGill Research Centre for Intelligent Machines, March 1993. [74] Jeff Wiegley, Ken Goldberg, Mike Peshkin, and Mike Brokowski. A complete algorithm for designing passive fences to orient parts. In Proceedings 1996 IEEE International Conference on Robotics and Automation (ICRA'96), volume 2, pages  1133-1139, Minneapolis, Minnesota, April 1996.  140  Appendix A Treatments and Outcomes - Push Planner  A.l  Triangle  The following tables summarize effects and treatment outcomes for the triangle part under the push planner as described in section 6.3.1. A . 
1.1  Success R a t e  Effects  Factor Effects Factor(s)  Effect  Factor(s)  Effect  1(0) F(l) C(2) FC ( 3) V(4) FV ( 5) CV ( 6)  0.479 -0.017 -0.091 0.014 0.011 0.028 0.058 -0.056 -0.035 -0.008 0.030 -0.159 -0.004 -0.026 0.032 -0.110 0.087 -0.031 -0.020 0.023 0.027 0.105  LN ( 90) SN ( AO) PN ( CO) A ( 100) FA ( 101) CA ( 102) VA ( 104) DA ( 108) LA ( 110) SA ( 120) PA ( 140) NA ( 180) B ( 200) FB ( 201) CB ( 202) VB ( 204) DB ( 208) LB ( 210) SB ( 220) PB ( 240) NB ( 280) AB ( 300)  0.008 -0.035 0.021 -0.023 -0.066 -0.062  D(8) FD ( 9) CD ( A) VD ( C) L(10) FL ( 11) CL ( 12) VL ( 14) DL ( 18) S ( 20) FS ( 21) CS ( 22) VS ( 24) DS ( 28) LS ( 30)  141  0.010 -0.010 -0.006 0.076 -0.070 -0.044 -0.016 -0.004 -0.007 0.052 -0.028 0.058 0.040 0.013 -0.048 0.009  Factor Effects cont'd Factor(s) P ( 40) FP ( 41) CP ( 42) VP ( 44) DP ( 48) LP ( 50) SP ( 60) N ( 80) FN ( 81) CN ( 82) VN ( 84) DN ( 88)  Effect  Factor(s)  0.011 -0.048 -0.013 0.036 -0.054 0.087 -0.080 -0.058 0.036 0.021 0.014 -0.004  K ( 400) FK ( 401) CK ( 402) VK ( 404) DK ( 408) LK ( 410) SK ( 420) PK ( 440) NK ( 480) AK ( 500) BK ( 600)  142  Effect 0.017 -0.038 0.043 0.080 0.044 -0.042 -0.138 0.001 -0.056 0.032 -0.018  Outcomes Treatment Outcomes Levels 1 fds fdp sp cdn fcsn fcpn cdspn cda fcsa fcpa cdspa na fdsna fdpna spna fcdb csb cpb fcdspb fnb dsnb dpnb fspnb fab dsab dpab fspab fcdnab csnab cpnab fcdspnab fcdk csk cpk fcdspk  Obs 0.905 0.976 0.591 1.000 0.677 0.826 0.587 0.582 0.689 0.860 0.444 0.000 0.906 0.889 0.000 1.000 0.736 0.461 0.005 0.662 0.788 1.000 0.000 0.761 0.808 1.000 0.000 0.742 0.000 0.000 0.881 0.498 0.686 0.000 0.748 0.631  Levels fcv cvds cvdp fcvsp fvdn vsn vpn fvdspn fvda vsa vpa fvdspa fcvna cvdsna cvdpna fcvspna vdb fvsb fvpb vdspb cvnb fcvdsnb fcvdpnb cvspnb cvab fcvdsab fcvdpab cvspab vdnab fvsnab fvpnab vdspnab vdk fvsk fvpk vdspk  Obs 0.716 0.480 0.535 0.729 0.499 0.000 0.723 0.693 0.667 0.000 0.000 0.309 0.630 0.543 0.000 0.467 0.752 0.000 0.563 0.851 0.000 0.000 0.822 0.840 0.721 0.041 0.498 0.713 0.000 0.957 0.000 0.832 0.810 0.719 0.559 0.845  Levels dl fls flp dlsp cln fcdlsn fcdlpn clspn cla fcdlsa fcdlpa clspa dlna flsna flpna dlspna fclb cdlsb cdlpb fclspb fdlnb lsnb lpnb fdlspnb fdlab lsab lpab fdlspab fclnab cdlsnab cdlpnab fclspnab fclk cdlsk cdlpk fclspk  143  Obs 0.026 0.000 0.567 1.000 0.073 0.727 0.613 0.010 0.002 0.353 0.000 0.008 0.000 0.952 0.318 1.000 0.071 0.524 0.000 0.758 0.524 1.000 0.848 0.000 0.000 0.999 0.976 0.887 0.126 0.639 0.000 0.024 0.006 0.650 0.801 0.565  Levels fcvdl cvls cvlp fcvdlsp fvln vdlsn vdlpn fvlspn fvla vdlsa vdlpa fvlspa fcvdlna cvlsna cvlpna fcvdlspna vlb fvdlsb fvdlpb vlspb cvdlnb fcvlsnb fcvlpnb cvdlspnb cvdlab fcvlsab fcvlpab cvdlspab vlnab fvdlsnab fvdlpnab vlspnab vlk fvdlsk fvdlpk vlspk  Obs 0.211 0.281 0.681 0.000 0.324 0.512 0.313 0.769 0.290 0.774 0.149 0.545 0.115 0.841 0.000 0.673 0.094 0.657 0.000 0.964 0.002 0.630 0.931 0.003 0.030 0.749 0.482 0.614 0.000 0.723 0.000 0.964 0.034 0.742 0.297 0.000  Treatment Outcomes (cont'd) Levels fnk dsnk dpnk fspnk fak dsak dpak fspak fcdnak csnak cpnak fcdspnak bk fdsbk fdpbk spbk cdnbk fcsnbk fcpnbk cdspnbk cdabk fcsabk fcpabk cdspabk nabk fdsnabk fdpnabk spnabk  Obs 0.848 0.000 1.000 0.000 0.800 0.995 0.952 0.971 0.593 0.831 0.774 0.000 0.916 0.711 0.000 0.000 0.906 0.000 0.759 0.000 0.735 0.850' 0.000 0.742 0.000 0.883 0.799 0.000  Levels cvnk fcvdsnk fcvdpnk cvspnk cvak fcvdsak fcvdpak cvspak vdnak fvsnak fvpnak vdspnak fcvbk cvdsbk cvdpbk fcvspbk fvdnbk vsnbk vpnbk fvdspnbk fvdabk vsabk vpabk 
fvdspabk fcvnabk cvdsnabk cvdpnabk fcvspnabk  Obs 0.903 0.821 0.508 0.755 0.842 0.657 0.448 0.000 0.896 0.000 0.692 0.879 0.000 0.608 0.914 0.886 0.801 0.512 0.000 0.675 0.795 0.962 0.877 0.711 0.768 0.789 0.843 0.000  Levels fdlnk lsnk lpnk fdlspnk fdlak lsak lpak fdlspak fclnak cdlsnak cdlpnak fclspnak dlbk flsbk flpbk dlspbk clnbk fcdlsnbk fcdlpnbk clspnbk clabk fcdlsabk fcdlpabk clspabk dlnabk fisnabk flpnabk dlspnabk  144  Obs 0.305 0.045 1.000 0.000 0.000 1.000 1.000 0.000 0.059 0.547 0.040 0.046 0.000 0.867 0.884 0.000 0.129 0.002 0.751 0.836 0.000 0.183 0.000 0.833 0.021 0.000 0.587 0.000  Levels Obs cvdlnk 0.022 fcvlsnk 0.862 fcvlpnk 0.386 cvdlspnk 0.001 cvdlak 0.003 fcvlsak 0.645 fcvlpak 0.655 cvdlspak 0.622 vlnak 0.000 fvdlsnak 0.168 fvdlpnak . 0.743 vlspnak 0.869 fcvdlbk 0.214 cvlsbk 0.874 cvlpbk 0.595 fcvdlspbk 0.462 fvlnbk 0.318 vdlsnbk 0.000 vdlpnbk 0.760 fvlspnbk 0.960 fvlabk 0.927 vdlsabk 0.697 vdlpabk 0.901 fvlspabk 0.000 fcvdlnabk 0.236 cvlsnabk 0.629 cvlpnabk 0.000 fcvdlspnabk 0.000  A.1.2  Planning Time  Effects Factor Effects Factor(s) 1(0) F(l) C(2) FC ( 3) V(4)  FV ( 5) CV ( 6) D(8) FD ( 9) CD ( A) VD ( C) L(10) FL ( 11) CL ( 12) VL ( 14) DL ( 18) S ( 20) FS ( 21) CS ( 22) VS ( 24) DS ( 28) LS ( 30) P ( 40) FP ( 41) CP ( 42) VP ( 44) DP ( 48) LP ( 50) SP ( 60) N ( 80) FN ( 81) CN ( 82) VN ( 84) DN ( 88)  Effect 48.208 -0.959 -2.013 -5.590 0.340 7.628 -1.673 89.091 -0.866 -1.893 0.642 58.895 1.536 -2.036 1.588 53.797 5.672 4.017 1.885 -2.382 5.177 4.110 -34.547 -1.448 1.021 -0.848 -32.228 -23.823 2.963 -13.912 0.642 4.482 -0.526 -13.429  Factor(s) LN ( 90) SN ( AO) PN ( CO) A ( 100) FA ( 101) CA ( 102) VA ( 104) DA ( 108) LA ( 110) SA ( 120) PA ( 140) NA ( 180) B ( 200) FB ( 201) CB ( 202) VB ( 204) DB ( 208) LB ( 210) SB ( 220) PB ( 240) NB ( 280) AB ( 300) K ( 400) FK ( 401) CK ( 402) VK ( 404) DK LK SK PK NK AK BK  145  ( 408) ( 410) ( 420) ( 440) ( 480) ( 500) ( 600)  Effect -11.740 1.913 10.006 -0.823 -3.316 1.686 -0.698 -0.407 -4.019 1.917 4.513 1.647 -4.302 -0.376 -3.169 -3.646 -4.486 -5.251 4.648 5.037 9.692 0.318 8.659 -0.557 0.241 -3.910 9.977 11.753 4.020 -1.873 -2.349 -3.681 4.622  Outcomes Treatment Outcomes Levels 1 fds fdp sp cdn fcsn fcpn cdspn cda fcsa fcpa cdspa na fdsna fdpna spna fcdb csb cpb fcdspb fnb dsnb dpnb fspnb fab dsab dpab fspab fcdnab csnab cpnab fcdspnab fcdk csk cpk fcdspk  Obs 1.950 21.983 3.367 1.100 24.583 1.767 0.633 45.483 75.000 2.067 0.717 76.117 2.033 74.700 23.167 1.917 20.950 1.967 0.217 10.200 1.850 79.900 14.733 0.433 1.333 46.450 46.617 0.817 25.033 0.700 0.217 17.350 21.817 0.600 1.300 60.233  Levels fcv cvds cvdp fcvsp fvdn vsn vpn fvdspn fvda vsa vpa fvdspa fcvna cvdsna cvdpna fcvspna vdb fvsb fvpb vdspb cvnb fcvdsnb fcvdpnb cvspnb cvab fcvdsab fcvdpab cvspab vdnab fvsnab fvpnab vdspnab vdk fvsk fvpk vdspk  Obs  Levels 1.900 dl 22.217 fls 7.517 flp 0.667 dlsp 63.283 cln 1.817 fcdlsn 0.683 fcdlpn 38.067 clspn 72.783 cla 2.050 fcdlsa 2.333 fcdlpa 41.850 clspa 2.017 dlna 82.917 flsna 81.067 flpna 1.850 dlspna 65.983 fclb 1.883 cdlsb 1.000 cdlpb 13.117 fclspb 1.800 fdlnb 67.000 lsnb 10.617 lpnb 1.150 fdlspnb 1.283 fdlab 46.400 lsab 9.433 lpab 1.250 fdlspab 24.883 fclnab 0.683 cdlsnab 0.150 cdlpnab 4.683 fclspnab 21.717 fclk 1.800 cdlsk 0.250 cdlpk 31.950 fclspk 146  Obs 383.033 11.417 1.683 30.867 10.867 134.783 17.917 11.417 12.433 169.983 41.700 10.033 123.433 11.933 10.000 48,350 12.133 140.717 18.067 5.183 141.683 10.717 1.417 130.983 154.000 7.733 1.900 164.567 4.283 169.133 51.050 3.483 3.700 374.333 110.617 1.950  
Levels fcvdl cvls cvlp fcvdlsp fvln vdlsn vdlpn fvlspn fvla vdlsa vdlpa fvlspa fcvdlna cvlsna cvlpna fcvdlspna vlb fvdlsb fvdlpb vlspb cvdlnb fcvlsnb fcvlpnb cvdlspnb cvdlab fcvlsab fcvlpab cvdlspab vlnab fvdlsnab fvdlpnab vlspnab vlk fvdlsk fvdlpk vlspk  Obs 377.600 11.233 3.183 142.417 10.883 128.500 42.433 3.267 12.433 169.117 158.350 8.117 147.517 4.383 2.767 62.167 11.583 148.850 31.117 9.867 149.067 10.733 1.817 41.500 149.500 7.767 2.067 167.100 4.267 170.883 76.050 3.333 3.700 425.467 159.517 3.333  Treatment Outcomes (cont'd) Levels fnk dsnk dpnk fspnk fak dsak dpak fspak fcdnak csnak cpnak fcdspnak bk fdsbk fdpbk spbk cdnbk fcsnbk fcpnbk cdspnbk cdabk fcsabk fcpabk cdspabk nabk fdsnabk fdpnabk spnabk  Obs 0.600 21.400 3.917 0.600 0.650 35.883 25.800 0.950 36.500 1.017 0.167 13.650 0.633 64.717 41.217 1.817 61.717 1.800 0.683 35.417 88.367 2.100 1.250 46.317 0.700 27.767 15.050 0.667  Levels cvnk fcvdsnk fcvdpnk cvspnk cvak fcvdsak fcvdpak cvspak vdnak fvsnak fvpnak vdspnak fcvbk cvdsbk cvdpbk fcvspbk fvdnbk vsnbk vpnbk fvdspnbk fvdabk vsabk vpabk fvdspabk fcvnabk cvdsnabk cvdpnabk fcvspnabk  Obs 0.583 20.883 2.400 0.250 0.683 35.600 4.800 0.500 36.067 1.000 0.200 6.967 0.667 39.933 31.150 1.750 61.650 1.783 0.567 32.783 82.100 2.100 0.483 18.567 0.700 29.133 17.200 0.183  Levels fdlnk lsnk lpnk fdlspnk fdlak lsak lpak fdlspak fclnak cdlsnak cdlpnak fclspnak dlbk flsbk flpbk dlspbk clnbk fcdlsnbk fcdlpnbk clspnbk clabk fcdlsabk fcdlpabk clspabk dlnabk flsnabk flpnabk dlspnabk  147  Obs 128.300 3.617 0.800 50.500 271.767 6.317 1.183 144.467 6.283 218.617 158.433 3.050 247.600 7.033 5.717 146.267 10.750 201.800 162.717 10.267 13.117 268.967 26.600 3.250 139.717 4.083 1.067 128.467  Levels cvdlnk fcvlsnk fcvlpnk cvdlspnk cvdlak fcvlsak fcvlpak cvdlspak vlnak fvdlsnak fvdlpnak vlspnak fcvdlbk cvlsbk cvlpbk fcvdlspbk fvlnbk vdlsnbk vdlpnbk fvlspnbk fvlabk vdlsabk vdlpabk fvlspabk fcvdlnabk cvlsnabk cvlpnabk fcvdlspnabk  Obs 129.150 3.567 1.067 111.050 271.683 6.433 1.050 71.083 6.467 220.850 59.800 3.283 202.100 7.000 2.133 72.567 10.767 205.733 126.450 8.383 12.283 269.600 71.400 2.833 140.217 4.167 4.533 144.100  A.2  "L" Part  The following tables summarize effects and treatment outcomes for the "L" part under the push planner as described in section 6.3.2.  
A.2.1  Success Rate  Effects Factor Effects Factor(s) 1(0) F(l) C(2) FC ( 3) V(4) FV ( 5) CV ( 6) D(8) FD ( 9) CD ( A) VD ( C) L(10) FL ( 11) CL ( 12) VL ( 14) DL ( 18) S ( 20) FS ( 21) CS ( 22) VS ( 24) DS ( 28) LS ( 30) P ( 40) FP ( 41)  Effect 0.441 -0.051 -0.083 0.013 -0.034 0.051 0.070 -0.126 0.019 0.015 0.020 -0.196 0.015 -0.017 -0.002 -0.088 0.065 -0.007 -0.029 0.014 0.022 0.139 -0.022 -0.036  Factor(s) LN ( 90) SN ( AO) PN ( CO) A ( 100) FA ( 101) CA ( 102) VA ( 104) DA ( 108) LA ( 110) SA ( 120) PA ( 140) NA ( 180) B ( 200) FB ( 201) CB ( 202) VB ( 204) DB ( 208) LB ( 210) SB ( 220) PB ( 240) NB ( 280) AB ( 300) K ( 400) FK ( 401)  148  Effect 0.016 0.036 -0.001 -0.030 -0.082 -0.023 0.011 0.045 -0.003 0.029 -0.047 0.021 -0.025 -0.038 -0.062 0.044 -0.045 0.008 0.059 -0.040 -0.031 0.010 -0.005 0.003  Factor Effects - cont'd Factor(s) CP ( 42) VP ( 44) DP ( 48) LP ( 50) SP ( 60) N ( 80) FN ( 81) CN ( 82) VN ( 84) DN ( 88)  Effect 0.028 -0.023 -0.049 0.088 -0.098 -0.083 -0.020 0.007 0.052 0.029  Factor(s) CK ( 402) VK ( 404) DK ( 408) LK ( 410) SK ( 420) PK ( 440) NK ( 480) AK ( 500) BK ( 600)  149  Effect 0.056 0.029 -0.006 -0.038 -0.138 0.018 -0.054 0.020 0.001  Outcomes Treatment Outcomes Levels 1 fds fdp sp cdn fcsn fcpn cdspn cda fcsa fcpa cdspa na fdsna fdpna spna fcdb csb cpb fcdspb fnb dsnb dpnb fspnb fab dsab dpab fspab fcdnab csnab cpnab fcdspnab fcdk csk cpk fcdspk  Obs 1.000 0.741 0.802 0.877 0.000 0.828 0.633 0.910 0.768 0.786 0.516 0.000 1.000 0.981 0.000 0.581 0.906. 0.889 0.010 0.001 0.373 1.000 0.487 0.787 0.529 1.000 0.000 0.516 0.000 0.000 0.941 0.000 0.891 0.000 0.948 0.323  Levels fcv cvds cvdp fcvsp fvdn vsn vpn fvdspn fvda vsa vpa fvdspa fcvna cvdsna cvdpna fcvspna vdb fvsb fvpb vdspb cvnb fcvdsnb fcvdpnb cvspnb cvab fcvdsab fcvdpab cvspab vdnab fvsnab fvpnab vdspnab vdk fvsk fvpk vdspk  Obs 0.550 0.569 0.856 0.770 0.404 0.000 0.000 0.475 0.435 0.057 0.834 0.259 0.669 0.752 0.000 0.524 0.630 0.000 0.763 0.706 0.095 0.484 0.564 0.752 0.798 0.071 0.497 0.334 0.900 0.827 0.072 0.897 0.867 0.836 0.772 0.001  Levels dl fls flp dlsp cln fcdlsn fcdlpn clspn cla fcdlsa fcdlpa clspa dlna flsna flpna dlspna fclb cdlsb cdlpb fclspb fdlnb lsnb lpnb fdlspnb fdlab lsab lpab fdlspab fclnab cdlsnab cdlpnab fclspnab fclk cdlsk cdlpk fclspk  150  Obs 0.010 0.871 0.727 1.000 0.005 0.620 0.808 0.000 0.016 0.253 0.000 0.000 0.009 0.966 0.011 1.000 0.132 0.000 0.000 0.648 0.007 1.000 1.000 0.013 0.000 1.000 0.761 0.976 0.001 0.683 0.000 0.000 0.002 0.593 0.813 0.358  Levels fcvdl cvls cvlp fcvdlsp fvln vdlsn vdlpn fvlspn fvla vdlsa vdlpa fvlspa fcvdlna cvlsna cvlpna fcvdlspna vlb fvdlsb fvdlpb vlspb cvdlnb fcvlsnb fcvlpnb cvdlspnb cvdlab fcvlsab fcvlpab cvdlspab vlnab fvdlsnab fvdlpnab vlspnab vlk fvdlsk fvdlpk vlspk  Obs 0.130 0.755 0.770 0.000 0.561 0.000 0.000 0.843 0.001 0.601 0.000 0.001 0.102 0.838 0.000 0.716 0.017 0.722 0.000 0.973 0.007 0.480 0.003 0.002 0.008 0.847 0.128 0.471 0.636 0.305 0.000 0.893 0.003 0.060 0.891 0.000  Treatment Outcomes (cont'd) Levels fnk dsnk dpnk fspnk fak dsak dpak fspak fcdnak csnak cpnak fcdspnak bk fdsbk fdpbk spbk cdnbk fcsnbk fcpnbk cdspnbk cdabk fcsabk fcpabk cdspabk nabk fdsnabk fdpnabk spnabk  Obs 0.840 0.000 0.569 0.000 0.576 1.000 0.952 0.513 0.000 0.853 0.799 0.000 1.000 0.564 0.250 1.000 0.985 0.000 0.806 0.000 0.687 0.763 0.709 0.598 0.493 0.710 0.825 0.000  Levels cvnk fcvdsnk fcvdpnk cvspnk cvak fcvdsak fcvdpak cvspak vdnak fvsnak fvpnak vdspnak fcvbk cvdsbk cvdpbk fcvspbk fvdnbk vsnbk vpnbk fvdspnbk fvdabk vsabk vpabk fvdspabk 
fcvnabk cvdsnabk cvdpnabk fcvspnabk  Obs 0.851 0.652 0.545 0.954 0.777 0.537 0.865 0.000 0.796 0.093 0.584 0.551 0.633 0.543 0.000 0.683 0.512 0.933 0.106 0.412 0.691 0.898  Levels fdlnk lsnk lpnk fdlspnk fdlak lsak lpak fdlspak fclnak cdlsnak cdlpnak fclspnak dlbk flsbk flpbk dlspbk clnbk fcdlsnbk fcdlpnbk clspnbk clabk fcdlsabk  0.628 fcdlpabk 0.684 clspabk 0.326 dlnabk 0.642 flsnabk 0.513 flpnabk 0.000 dlspnabk  151  Obs 0.207 0.000 1.000 0.000 0.078 1.000 0.941 0.000 0.056 0.000 0.824 0.633 0.000 0.845 0.836 0.000 0.004 0.001 0.097 0.618 0.018 0.709 0.000 0.941 0.000 0.000 0.000 0.000  Levels cvdlnk fcvlsnk fcvlpnk cvdlspnk cvdlak fcvlsak fcvlpak cvdlspak vlnak fvdlsnak fvdlpnak vlspnak fcvdlbk cvlsbk cvlpbk fcvdlspbk fvlnbk vdlsnbk vdlpnbk fvlspnbk fvlabk vdlsabk vdlpabk fvlspabk fcvdlnabk cvlsnabk cvlpnabk fcvdlspnabk  Obs 0.009 0.767 0.000 0.000 0.002 0.000 0.720 0.394 0.003 0.788 0.000 0.708 0.099 0.774 0.724 0.378 0.514 0.577 0.000 0.728 0.966 0.038 0.000 0.072 0.106 0.001 0.652 0.000  A.2.2  Planning Time  Effects Factor Effects Factor(s) 1(0) F(l) C(2) FC ( 3)  V(4) FV(5) CV ( 6) D(8) FD ( 9) CD ( A) VD ( C) L(10) FL ( 11) CL ( 12) VL ( 14) DL ( 18) S ( 20) FS ( 21) CS ( 22) VS ( 24) DS ( 28) LS ( 30) P ( 40) FP ( 41) CP ( 42) VP ( 44) DP ( 48) LP ( 50) SP ( 60) N ( 80) FN ( 81) CN ( 82) VN ( 84) DN ( 88)  Effect 124.136 -3.204 7.532 -41.711 -29.016 27.801 -26.924 235.453 -3.124 7.558 -29.072 181.750 -3.861 5.411 -30.367 172.708 10.203 19.705 -20.947 17.753 10.461 9.034 -34.984 1.439 18.040 -21.272 -32.835 -26.217 -18.693 37.426 -12.623 30.764 -6.533 37.211  Factor(s) LN ( 90) SN ( AO) PN ( CO) A ( 100) FA ( 101) CA ( 102) VA ( 104) DA ( 108) LA ( 110) SA ( 120) PA ( 140) NA ( 180) B ( 200) FB ( 201) CB ( 202) VB ( 204) DB ( 208) LB ( 210) SB ( 220) PB ( 240) NB ( 280) AB ( 300) K ( 400) FK ( 401) CK ( 402) VK ( 404) DK ( 408) LK ( 410) SK ( 420) PK ( 440) NK ( 480) AK ( 500) BK ( 600)  152  Effect 39.551 -21.571 2.929 54.368 -6.771 5.041 -11.836 51.479 41.093 7.668 14.308 21.804 -2.842 -14.306 -0.704 -16.363 -3.357 -8.891 -18.927 -1.734 16.040 24.257 24.139 -1.484 -16.319 16.176 25.390 25.659 19.593 -43.276 -9.547 -19.501 -46.321  Outcomes Treatment Outcomes Levels 1 fds fdp sp cdn fcsn fcpn cdspn cda fcsa fcpa cdspa na fdsna fdpna spna fcdb csb cpb fcdspb fnb dsnb dpnb fspnb fab dsab dpab fspab fcdnab csnab cpnab fcdspnab fcdk csk cpk fcdspk  Obs 1.117 39.517 16.533 1.017 39.767 1.100 1.583 38.233 66.017 2.300 2.050 113.083 3.117 68.083 63.533 2.533 85.050 2.383 1.700 49.600 1.100 86.183 41.117 1.483 1.467 56.517 55.583 1.433 112.650 3.133 2.933 107.850 85.333 2.383 0.717 39.000  Levels fcv cvds cvdp fcvsp fvdn vsn vpn fvdspn fvda vsa vpa fvdspa fcvna cvdsna cvdpna fcvspna vdb fvsb fvpb vdspb cvnb fcvdsnb fcvdpnb cvspnb cvab fcvdsab fcvdpab cvspab vdnab fvsnab fvpnab vdspnab vdk fvsk fvpk vdspk  Obs 1.117 39.467 27.717 0.667 39.400 1.117 2.383 40.100 54.117 2.383 2.850 101.833 3.117 63.833 81.633 2.617 . 
85.983 2.367 2.383 22.150 1.100 85.267 54.900 1.983 1.467 124.617 38.117 1.333 112.583 3.117 1.967 103.883 85.867 2.383 0.450 36.400  Levels dl fls np dlsp cln fcdlsn fcdlpn clspn cla fcdlsa fcdlpa clspa dlna flsna flpna dlspna fclb cdlsb cdlpb fclspb fdlnb lsnb lpnb fdlspnb fdlab lsab lpab fdlspab fclnab cdlsnab cdlpnab fclspnab fclk cdlsk cdlpk fclspk  153  Obs 241.283 6.700 2.333 140.017 6.683 242.033 889.117 6.450 9.983 346.900 192.633 20.000 342.500 14.017 21.500 245.300 14.500 236.617 74.517 5.300 261.267 14.500 11.200 201.450 358.300 18.750 5.783 960.817 20.650 728.783 1888.733 20.000 14.517 777.817 206.300 6.183  Levels fcvdl cvls cvlp fcvdlsp fvln vdlsn vdlpn fvlspn fvla vdlsa vdlpa fvlspa fcvdlna cvlsna cvlpna fcvdlspna vlb fvdlsb fvdlpb vlspb cvdlnb fcvlsnb fcvlpnb cvdlspnb cvdlab fcvlsab fcvlpab cvdlspab vlnab fvdlsnab fvdlpnab vlspnab vlk fvdlsk fvdlpk vlspk  Obs 241.033 6.633 2.033 197.350 6.750 237.083 206.817 4.883 14.883 344.317 270.800 20.067 339.467 9.733 27.050 200.633 14.750 236.883 53.233 5.000 234.083 14.450 10.583 167.350 353.400 9.733 6.367 254.283 20.850 768.850 323.500 15.700 14.500 791.917 297.850 5.850  Treatment Outcomes (cont'd) Levels fnk dsnk dpnk fspnk fak dsak dpak fspak fcdnak csnak cpnak fcdspnak bk fdsbk fdpbk spbk cdnbk fcsnbk fcpnbk cdspnbk cdabk fcsabk fcpabk cdspabk nabk fdsnabk fdpnabk spnabk  Obs  Levels cvnk 1.300 39.483 fcvdsnk 17.400 fcvdpnk 1.333 cvspnk cvak 3.133 112.717 fcvdsak 83.250 fcvdpak cvspak 3.033 53.100 vdnak fvsnak 1.767 1.117 fvpnak 31.667 vdspnak fcvbk 1.333 41.417 cvdsbk 33.117 cvdpbk 1.983 fcvspbk 85.483 fvdnbk vsnbk 2.400 vpnbk 1.600 82.800 fvdspnbk 112.400 fvdabk 3.117 vsabk vpabk 1.867 61.467 fvdspabk 1.567 fcvnabk 57.050 cvdsnabk 49.817 cvdpnabk 1.550 fcvspnabk  Obs  Levels 1.317 fdlnk 39.500 lsnk 24.267 lpnk fdlspnk 1.217 fdlak 3.100 111.767 lsak 104.533 lpak 2.667 fdlspak 56.367 fclnak cdlsnak 1.750 cdlpnak 1.700 61.167 fclspnak dlbk 1.533 49.867 flsb'k 33.850 flpbk dlspbk 1.200 85.967 clnbk fcdlsnbk 2.383 fcdlpnbk 1.233 83.833 clspnbk 111.867 clabk 3.117 fcdlsabk 1.217 fcdlpabk 56.717 clspabk dlnabk 1.550 56.383 flsnabk 37.800 flpnabk dlspnabk 1.417  154  Obs 511.150 6.733 2.283 359.683 404.117 20.700 7.700 730.000 9.600 1095.317 619.600 5.067 788.500 7.117 3.267 157.017 14.450 510.900 181.183 9.083 20.667 363.783 237.067 5.733 719.500 10.200 6.117 301.017  Levels cvdlnk fcvlsnk fcvlpnk cvdlspnk cvdlak fcvlsak fcvlpak cvdlspak vlnak fvdlsnak fvdlpnak vlspnak fcvdlbk cvlsbk cvlpbk fcvdlspbk fvlnbk vdlsnbk vdlpnbk fvlspnbk fvlabk vdlsabk vdlpabk fvlspabk fcvdlnabk cvlsnabk cvlpnabk fcvdlspnabk  Obs 503.600 6.633 5.300 455.383 341.600 20.583 10.517 495.667 9.717 1134.300 401.283 7.717 244.917 8.767 6.967 99.800 14.650 510.883 180.517 9.350 20.750 368.183 148.450 5.450 726.533 10.183 5.400 378.667  A.3  "G" Part  The following tables summarize effects and treatment outcomes for the "G" part under the push planner as described in section 6.3.3.  
A.3.1  Success Rate  Effects Factor Effects Factor(s) 1(0) F(l) C(2) FC ( 3) V(4) FV ( 5) CV ( 6) D(8) FD ( 9) CD ( A) VD ( C) L(10) FL ( 11) CL ( 12) VL ( 14) DL ( 18) S ( 20) FS ( 21) CS ( 22) VS ( 24) DS ( 28) LS ( 30) P ( 40) FP ( 41)  Effect 0.237 -0.067 -0.121 0.041 -0.275 0.058 0.124 -0.073 -0.022 -0.009 0.035 -0.049 -0.042 -0.010 0.054 -0.029 0.175 -0.019 -0.047 -0.127 -0.000 0.079 -0.013 -0.067  Factor(s) LN ( 90) SN ( AO) PN ( CO) A ( 100) FA ( 101) CA ( 102) VA ( 104) DA ( 108) LA ( 110) SA ( 120) PA ( 140) NA ( 180) B ( 200) FB ( 201) CB ( 202) VB ( 204) DB ( 208) LB ( 210) SB ( 220) PB ( 240) NB ( 280) AB ( 300) K ( 400) FK ( 401)  155  Effect 0.018 -0.032 -0.005 0.007 -0.035 0.007 -0.050 0.009 0.029 0.019 -0.022 -0.050 -0.014 0.004 -0.018 0.050 0.032 -0.011 -0.027 -0.024 -0.022 0.008 -0.023 -0.043  Factor Effects - cont'd Factor(s)  Effect  CP ( 42) VP ( 44) DP ( 48) LP ( 50) SP ( 60) N ( 80) FN ( 81) CN ( 82) VN ( 84) DN ( 88)  0.006 -0.002 -0.011 -0.007 -0.033 -0.002 -0.025 -0.011 0.004 0.035  Factor(s) CK ( 402) VK ( 404) DK ( 408) LK ( 410) SK ( 420) PK ( 440) NK ( 480) AK ( 500) BK ( 600)  156  Effect 0.045 0.033 -0.060 -0.010 -0.077 -0.027 -0.055 0.038 0.016  Outcomes Treatment Outcomes Levels 1 fds fdp sp cdn fcsn fcpn cdspn cda fcsa fcpa cdspa na fdsna fdpna spna fcdb csb cpb fcdspb fnb dsnb dpnb fspnb fab dsab dpab fspab fcdnab csnab cpnab fcdspnab fcdk csk cpk fcdspk  Obs 0.000 0.974 0.000 1.000 0.103 0.635 0.644 0.455 0.027 0.730 0.200 0.559 0.001 0.958 0.000 1.000 0.246 0.000 0.000 0.359 0.702 1.000 0.932 0.989 0.203 0.971 0.692 0.467 0.000 0.000 0.233 0.179 0.000 0.000 0.686 0.246  Levels fcv cvds cvdp fcvsp fvdn vsn vpn fvdspn fvda vsa vpa fvdspa fcvna cvdsna cvdpna fcvspna vdb fvsb fvpb vdspb cvnb fcvdsnb fcvdpnb cvspnb cvab fcvdsab fcvdpab cvspab vdnab fvsnab fvpnab vdspnab vdk fvsk fvpk vdspk  Obs 0.167 0.062 0.047 0.278 0.131 0.000 0.201 0.075 0.030 0.074 0.000 0.025 0.007 0.055 0.000 0.085 0.181 0.000 0.259 0.109 0.000 0.228 0.263 0.122 0.242 0.034  Levels dl fls  np dlsp cln fcdlsn fcdlpn clspn cla fcdlsa fcdlpa clspa dlna flsna flpna dlspna fclb cdlsb cdlpb fclspb fdlnb lsnb lpnb fdlspnb fdlab lsab 0.000 lpab 0.076 fdlspab 0.020 fclnab 0.092 cdlsnab 0.016 cdlpnab 0.000 fclspnab 0.079 fclk 0.088 cdlsk 0.113 cdlpk 0.196 fclspk 157  Obs 0.063 0.950 0.000 1.000 0.000 0.418 0.193 0.783 0.069 0.272 0.000 0.404 0.491 0.988 0.260 1.000 0.048 0.001 0.000 0.182 0.001 1.000 0.000 0.445 0.003 0.667 0.447 0.943 0.173 0.703 0.000 0.004 0.101 0.432 0.010 0.352  Levels fcvdl cvls cvlp fcvdlsp fvln vdlsn vdlpn fvlspn fvla vdlsa vdlpa fvlspa fcvdlna cvlsna cvlpna fcvdlspna vlb fvdlsb fvdlpb vlspb cvdlnb fcvlsnb fcvlpnb cvdlspnb cvdlab fcvlsab fcvlpab cvdlspab vlnab fvdlsnab fvdlpnab vlspnab vlk fvdlsk fvdlpk vlspk  Obs 0.024 0.173 0.000 0.049 0.091 0.260 0.109 0.203 0.036 0.033 0.028 0.045 0.015 0.345 0.026 0.000 0.045 0.147 0.085 0.125 0.020 0.266 0.004 0.043 0.000 0.096 0.000 0.219 0.003 0.164 0.048 0.426 0.012 0.046 0.030 0.122  Treatment Outcomes (cont'd) Levels fnk dsnk dpnk fspnk fak dsak dpak fspak fcdnak csnak cpnak fcdspnak bk fdsbk fdpbk spbk cdnbk fcsnbk fcpnbk cdspnbk cdabk fcsabk fcpabk cdspabk nabk fdsnabk fdpnabk spnabk  Obs 0.501 0.000 1.000 0.174 0.460 1.000 0.000 0.988 0.472 0.753 0.499 0.000 0.594 0.471 0.000 1.000 0.365 0.094 0.000 0.000 0.332 0.523 0.594 0.421 1.000 0.806 0.000 0.000  Levels cvnk fcvdsnk fcvdpnk cvspnk cvak fcvdsak fcvdpak cvspak vdnak fvsnak fvpnak vdspnak fcvbk cvdsbk cvdpbk fcvspbk fvdnbk vsnbk vpnbk fvdspnbk fvdabk vsabk vpabk 
fvdspabk fcvnabk cvdsnabk cvdpnabk fcvspnabk  Obs 0.111 0.044 0.184 0.199 0.152 0.064 0.038 0.108 0.000 0.007 0.000 0.118 0.394 0.219 0.000 0.246 0.178 0.004 0.025 0.048 0.030 0.153 0.095 0.043 0.039 0.071 0.161 0.096  Levels fdlnk lsnk lpnk fdlspnk fdlak lsak lpak fdlspak fclnak cdlsnak cdlpnak fclspnak dlbk flsbk flpbk dlspbk clnbk fcdlsnbk fcdlpnbk clspnbk clabk fcdlsabk fcdlpabk clspabk dlnabk flsnabk flpnabk dlspnabk  158  Obs 0.083 1.000 0.450 0.000 0.025 1.000 0.986 0.073 0.000 0.259 0.000 0.269 0.000 0.865 0.000 0.000 0.075 0.002 0.002 0.509 0.159 0.517 0.000 0.825 0.000 0.000 0.177  Levels cvdlnk fcvlsnk fcvlpnk cvdlspnk cvdlak fcvlsak fcvlpak cvdlspak vlnak fvdlsnak fvdlpnak vlspnak fcvdlbk cvlsbk cvlpbk fcvdlspbk fvlnbk vdlsnbk vdlpnbk fvlspnbk fvlabk vdlsabk vdlpabk fvlspabk fcvdlnabk cvlsnabk cvlpnabk  0.999  fcvdlspnabk  Obs 0.023 0.199 0.012 0.039 0.000 0.083 0.000 0.131 0.003 0.051 0.144 0.103 0.022 0.425 0.000 0.043 0.030 0.250 0.295 0.275 0.521 0.089 0.000 0.020 0.000 0.386 0.000 0.000  A.3.2  Planning Time  Effects Factor Effects Factor(s) 1(0) F(l) C(2) FC ( 3) V(4) FV ( 5) CV ( 6) D(8) FD ( 9) CD ( A) VD ( C) L(10) FL ( 11) CL ( 12) VL ( 14) DL ( 18) S ( 20) FS ( 21) CS ( 22) VS ( 24) DS ( 28) LS ( 30) P ( 40) FP ( 41) CP ( 42) VP ( 44) DP ( 48) LP ( 50) SP ( 60) N ( 80) FN ( 81) CN ( 82) VN ( 84) DN ( 88)  Effect 56.532 0.344 4.083 16.140 5.709 -3.364 7.787 104.607 0.814 4.361 5.678 67.405 0.758 6.021 5.305 62.665 20.716 2.609 1.355 7.966 20.217 17.912 -30.149 -1.372 4.232 2.702 -28.435 -16.376 13.511 2.573 8.088 6.588 2.112 1.932  Factor(s) LN ( 90) SN ( AO) PN ( CO) A ( 100) FA ( ) CA ( 102) VA ( 104) DA ( I ) LA ( n ° ) SA ( 120) PA ( ° ) NA ( 18°) B ( 200) FB ( 201) CB ( 202) VB ( 204) DB ( 208) LB ( 210) SB ( 220) PB ( 240) NB ( 280) AB ( 300) K ( 400) FK ( 401) CK ( 402) VK ( 404) DK ( 408) LK ( 410) SK ( 420) PK ( 440) NK ( 480) AK ( 500) BK ( 600) 1 0 1  08  1 4  159  Effect -3.222 6.880 2.824 30.577 1.109 4.509 7.603 29.184 20.748 4.779 -6.182 3.232 6.613 -4.840 -2.554 -3.271 5.540 4.045 -1.998 -7.931 -0.734 -6.356 -18.921 -1.607 -3.906 -8.330 -17.901 -15.671 -10.915 -0.505 2.584 -15.874 2.780  Outcomes Treatment Outcomes Levels 1 fds fdp sp cdn fcsn fcpn cdspn cda fcsa fcpa cdspa na fdsna fdpna spna fcdb csb cpb fcdspb fnb dsnb dpnb fspnb fab dsab dpab fspab fcdnab csnab cpnab fcdspnab fcdk csk cpk fcdspk  Obs 1.500 31.767 31.000 0.517 24.200 1.033 0.433 7.817 37.817 1.250 1.750 58.133 3.800 47.150 14.583 0.900 16.433 1.083 1.117 39.867 2.967 92.167 15.700 2.967 2.700 83.350 41.300 1.967 123.133 3.800 2.450 47.317 90.150 2.767 0.967 22.500  Levels fcv cvds cvdp fcvsp fvdn vsn vpn fvdspn fvda vsa vpa fvdspa fcvna cvdsna cvdpna fcvspna vdb fvsb fvpb. 
vdspb cvnb fcvdsnb fcvdpnb cvspnb cvab fcvdsab fcvdpab cvspab vdnab fvsnab fvpnab vdspnab vdk fvsk fvpk vdspk  Obs  Levels  3.033 17.317 5.700 0.950 34.533 1.100 0.600 17.483 40.250 1.333 3.867 56.567 3.867 46.583 50.233 0.883 30.100 1.117 2.333 51.483 3.333 90.983 13.667 1.567 3.750 92.633 36.950 2.350 79.233 3.883 1.767 123.667 86.533 1.917 0.900 27.550  dl fls flp dlsp cln fcdlsn fcdlpn clspn cla fcdlsa fcdlpa clspa dlna flsna flpna dlspna fclb cdlsb cdlpb fclspb fdlnb lsnb lpnb fdlspnb fdlab lsab lpab" fdlspab fclnab cdlsnab cdlpnab fclspnab fclk cdlsk cdlpk fclspk  160  Obs  Levels  165.883 fcvdl cvls 3.933 cvlp 1.267 130.500 fcvdlsp fvln 5.167 155.500 vdlsn 94.683 vdlpn fvlspn 4.717 fvla 4.150 441.350 vdlsa 61.350 vdlpa 11.567 fvlspa 246.800 fcvdlna cvlsna 4.933 cvlpna 3.117 204.250 fcvdlspna vlb 5.717 96.233 fvdlsb 164.750 fvdlpb vlspb 5.817 99.633 cvdlnb 12.933 fcvlsnb fcvlpnb 7.233 113.250 cvdlspnb 315.400 cvdlab 18.867 fcvlsab 13.650 fcvlpab 178.583 cvdlspab 11.117 vlnab 373.300 fvdlsnab 68.033 fvdlpnab vlspnab 8.600 vlk 8.800 90.700 fvdlsk 25.267 fvdlpk vlspk 6.500  Obs 160.450 2.383 0.900 77.917 4.550 151.667 19.917 3.100 4.633 440.450 177.633 11.883 246.800 7.050 5.467 698.933 5.250 295.083 45.483 6.517 87.833 13.917 6.400 242.633 467.850 11.817 3.167 192.167 13.433 370.850 59.483 14.050 9.150 81.817 25.867 5.350  Treatment Outcomes (cont'd) Levels fnk dsnk dpnk fspnk fak dsak dpak fspak fcdnak csnak cpnak fcdspnak bk fdsbk fdpbk spbk cdnbk fcsnbk fcpnbk cdspnbk cdabk fcsabk fcpabk cdspabk nabk fdsnabk fdpnabk spnabk  Obs 1.350 63.183 8.583 0.483 1.367 39.933 5.733 1.167 127.050 3.783 1.633 33.400 1.017 33.717 12.650 1.783 30.600 1.100 2.167 16.617 46.750 1.417 1.367 24.633 1.150 42.700 81.783 2.700  Levels cvnk fcvdsnk fcvdpnk cvspnk cvak fcvdsak fcvdpak cvspak vdnak fvsnak fvpnak vdspnak fcvbk cvdsbk cvdpbk fcvspbk fvdnbk vsnbk vpnbk fvdspnbk fvdabk vsabk vpabk fvdspabk fcvnabk cvdsnabk cvdpnabk fcvspnabk  Obs 2.083 60.583 7.817 1.000 1.367 40.100 8.950 1.200 125.550 3.750 3.167 54.450 1.350 59.500 9.083 2.200 33.450 1.100 0.950 15.033 27.483 1.350 0.933 16.050 0.900 39.583 12.417 2.600  Levels fdlnk lsnk lpnk fdlspnk fdlak lsak lpak fdlspak fclnak cdlsnak cdlpnak fclspnak dlbk flsbk flpbk dlspbk clnbk fcdlsnbk fcdlpnbk clspnbk clabk fcdlsabk fcdlpabk clspabk dlnabk flsnabk flpnabk dlspnabk  161  Obs 146.533 9.467 2.733 56.967 142.067 6.133 1.250 182.050 19.250 171.950 40.133 2.383 196.250 5.750 1.867 176.083 5.233 262.617 76.250 1.733 6.150 222.850 86.233 4.183 263.483 6.267 2.700 160.433  Levels cvdlnk fcvlsnk fcvlpnk cvdlspnk cvdlak fcvlsak fcvlpak cvdlspak vlnak fvdlsnak fvdlpnak vlspnak fcvdlbk cvlsbk cvlpbk fcvdlspbk fvlnbk vdlsnbk vdlpnbk fvlspnbk fvlabk vdlsabk vdlpabk fvlspabk fcvdlnabk cvlsnabk cvlpnabk fcvdlspnabk  Obs 83.483 5.550 1.350 125.133 169.383 4.000 2.933 217.250 11.500 163.917 56.750 5.417 165.233 9.617 1.717 92.717 8.267 228.833 20.167 4.383 6.967 224.400 90.083 4.567 277.033 6.217 7.633 161.133  Appendix B Treatments and Outcomes - Vision Orienter  B. 1 Triangle The following tables summarize effects and treatment outcomes for the triangle part under the vision orienter as described in section 6.4.1.  
B.l.l  Success Rate  Effects Factor Effects Factor(s) 1(0) M(l) D(2) MD ( 3) S(4) MS ( 5) DS ( 6) C(8) MC ( 9) DC ( A) SC ( C) A (10) MA ( 11) DA ( 12) SA ( 14) CA ( 18) L ( 20) ML ( 21) DL ( 22) SL ( 24) CL ( 28) AL ( 30) Z ( 40)  Effect 0.084 0.000 -0.005 -0.002 -0.000 -0.007 0.004 0.001 0.001 -0.000 -0.001 0.005 -0.002 -0.003 0.002 0.011 -0.013 0.018 0.000 0.007 0.004 -0.002 -0.007  Factor(s) MZ ( 41) DZ ( 42) SZ ( 44) CZ ( 48) AZ ( 50) LZ ( 60) P ( 80) MP ( 81) DP ( 82) SP ( 84) CP ( 88) AP ( 90) LP ( AO) ZP ( CO) Q ( 100) MQ ( 101) DQ ( 102) SQ ( 104) CQ ( 108) AQ ( 110) LQ ( 120) ZQ ( 140) PQ ( 180) 162  Effect -0.002 -0.004 -0.005 -0.003 -0.006 0.009 0.015 -0.000 -0.005 0.005 -0.004 0.004 -0.000 -0.004 -0.008 0.005 -0.004 0.009 0.002 -0.004 0.006 -0.003 -0.008  Outcomes Treatment Outcomes Levels 1 ca cl al mcz maz mlz mcalz mcp map mlp mcalp zp cazp clzp alzp mcq maq mlq mcalq zq cazq clzq alzq pq capq clpq alpq mczpq mazpq mlzpq mcalzpq  Obs 0.100 0.133 0.050 0.033 0.083 0.092 0.100 0.083 0.083 0.125 0.067 0.075 0.150 0.167 0.092 0.100 0.092 0.075 0.075 0.133 0.083 0.083 0.042 0.042 0.100 0.050 0.075 0.058 0.092 0.083 0.108 0.117  Levels md mdca mdcl mdal dcz daz dlz dcalz dcp dap dip dcalp mdzp mdcazp mdclzp mdalzp dcq daq dlq dcalq mdzq mdcazq mdclzq mdalzq mdpq mdcapq mdclpq mdalpq dczpq dazpq dlzpq dcalzpq  Obs 0.108 0.100 0.083 0.067 0.067 0.092 0.067 0.083 0.142 0.142 0.075 0.100 0.083 0.075 0.067 0.083 0.083 0.075 0.050 0.033 0.092 0.092 0.067 0.067 0.075 0.100 0.100 0.075 0.042 0.058 0.058 0.042  Levels ms msca mscl msal scz saz slz scalz scp sap sip scalp mszp mscazp msclzp msalzp scq saq slq scalq mszq mscazq msclzq msalzq mspq mscapq msclpq msalpq sczpq sazpq slzpq scalzpq  163  Obs 0.058 0.092 0.058 0.067 0.042 0.075 0.067 0.075 0.092 0.133 0.083 0.092 0.100 0.108 0.108 0.067 0.108 0.117 0.058 0.100 0.067 0.042  Levels ds dsca dscl dsal mdscz mdsaz mdslz mdscalz mdscp mdsap mdslp mdscalp dszp dscazp dsclzp dsalzp mdscq mdsaq mdslq mdscalq dszq dscazq 0.083 dsclzq 0.033 dsalzq 0.083 dspq 0.092 dscapq 0.083 dsclpq 0.183 dsalpq 0.092 mdsczpq 0.100 mdsazpq 0.067 mdslzpq 0.083 mdscalzpq  Obs 0.100 0.092 0.050 0.050 0.042 0.042 0.083 0.117 0.092 0.125 0.117 0.100 0.100 0.100 0.067 0.083 0.075 0.050 0.067 0.067 0.125 0.075 0.100 0.083 0.092 0.183 0.058 0.058 0.050 0.050 0.100 0.075  B.l.2  Extraction Time  Effects Factor Effects Factor(s) 1(0) M(l) D(2) MD ( 3) S(4) MS ( 5) DS ( 6) C(8) MC ( 9) DC ( A) SC ( C) A (10) MA ( 11) DA ( 12) SA ( 14) CA ( 18) L ( 20) ML ( 21) DL ( 22) SL ( 24) CL ( 28) AL ( 30) Z ( 40)  Effect 25.809 9.024 0.177 0.111 -0.248 -0.336 0.190 -0.158 -0.048 -0.067 -0.153 -0.574 -0.681 0.031 -0.221 -0.034 1.276 -0.095 -0.054 0.408 -0.241 0.346 0.567  Factor(s) MZ ( 41) DZ ( 42) SZ ( 44) CZ ( 48) AZ ( 50) LZ ( 60) P ( 80) MP ( 81) DP ( 82) SP ( 84) CP ( 88) AP ( 90) LP ( AO) ZP ( CO) Q ( 100) MQ ( 101) DQ ( 102) SQ ( 104) CQ ( 108) AQ ( 110) LQ ( 120) ZQ ( 140) PQ ( 180)  164  Effect 0.507 0.040 0.006 0.008 0.756 0.033 3.120 0.098 0.017 0.119 0.000 -0.209 0.167 -0.192 3.103 0.460 0.113 0.018 -0.066 0.052 0.362 0.388 1.506  Outcomes Treatment Outcomes Levels 1 ca cl al mcz maz mlz mcalz mcp map mlp mcalp zp cazp clzp alzp mcq maq mlq mcalq zq cazq clzq alzq pq capq clpq alpq mczpq mazpq mlzpq mcalzpq  Obs 17.737 19.543 20.236 20.453 29.596 28.811 27.526 28.653 30.732 26.994 30.410 28.813 19.393 19.812 20.855 21.198 30.140 27.545 30.991 28.639 18.828 20.176 20.454 21.801 24.261 23.240 26.355 25.565 35.602 34.908 34.278 36.521  Levels md mdca mdcl mdal dcz 
daz dlz dcalz dcp dap dip dcalp mdzp mdcazp mdclzp mdalzp dcq daq dlq dcalq mdzq mdcazq mdclzq mdalzq mdpq mdcapq mdclpq mdalpq dczpq dazpq dlzpq dcalzpq  Obs  Levels 29.060 ms 25.702 msca 28.356 mscl 26.569 msal 18.650 scz 19.481 saz 19.696 slz 19.606 scalz 20.247 scp 20.033 sap 22.039 sip 21.689 scalp 30.390 mszp 29.400 mscazp 29.019 msclzp 29.782 msalzp 19.371 scq 18.683 saq 20.843 slq 20.073 scalq 30.950 mszq 31.136 mscazq 29.848 msclzq 32.691 msalzq 35.449 mspq 33.240 mscapq 34.561 msclpq 33.309 msalpq 23.102 sczpq 23.745 sazpq 26.521 slzpq 26.400 scalzpq  165  Obs  Levels  27.009 ds 24.162 dsca 28.401 dscl 27.237 dsal 18.910 mdscz 18.078 mdsaz 19.809 mdslz 19.419 mdscalz 20.101 mdscp 20.936 mdsap 22.122 mdslp 20.416 mdscalp 29.678 dszp 28.077 dscazp 29.361 dsclzp 30.728 dsalzp 20.040 mdscq 20.088 mdsaq 20.295 mdslq 20.077 mdscalq 28.929 dszq 27.539 dscazq 30.640 dsclzq 31.658 dsalzq 33.626 dspq 28.753 dscapq 36.023 dsclpq 33.688 dsalpq 24.494 mdsczpq 24.045 mdsazpq 25.896 mdslzpq 25.817 mdscalzpq  Obs 18.506 19.537 19.022 19.573 27.265 26.669 28.752 28.374 30.583 26.502 31.911 28.739 19.703 20.313 21.161 20.925 29.469 26.873 30.764 28.834 19.935 19.797 21.767 22.662 24.569 24.205 24.857 25.839 33.865 32.722 37.169 36.922  B.2  "L" Part  The following tables summarize effects and treatment outcomes for the "L" part under the vision orienter as described in section 6.4.2.  B.2.1  Success Rate  Effects Factor Effects Factor(s) 1(0) M(l) D(2) MD ( 3) S(4) MS ( 5) DS ( 6) C(8) MC ( 9) DC ( A) SC ( C) A (10) MA ( 11) DA ( 12) SA ( 14) CA ( 18) L ( 20) ML ( 21) DL ( 22) SL ( 24) CL ( 28) AL ( 30) Z ( 40)  Effect 0.282 -0.093 0.018 -0.009 0.128 0.019 0.006 0.004 -0.003 -0.007 -0.004 0.008 -0.008 -0.005 -0.003 0.009 0.011 -0.012 0.004 0.038 0.001 -0.004  Factor(s) MZ ( 41) DZ ( 42) SZ ( 44) CZ ( 48) AZ ( 50) LZ ( 60) P ( 80) MP ( 81) DP ( 82) SP ( 84) CP ( 88) AP ( 90) LP ( AO) ZP ( CO)  -0.013  PQ ( 180)  Q ( 100) MQ ( 101) DQ ( 102) SQ ( 104) CQ ( 108) AQ ( 110) LQ ( 120) ZQ ( 140)  166  Effect 0.008 -0.002 0.001 -0.002 -0.010 0.006 0.078 -0.024 -0.012 0.011 -0.004 -0.003 -0.016 -0.006 0.003 -0.025 -0.003 0.011 0.000 0.004 0.021 0.005 -0.022  Outcomes Treatment Outcomes Levels 1 ca cl al mcz maz mlz mcalz mcp map mlp mcalp zp cazp clzp alzp mcq maq mlq mcalq zq cazq clzq alzq pq capq clpq alpq mczpq mazpq mlzpq mcalzpq  Obs 0.175 0.192 0.200 0.158 0.150 0.183 0.150 0.133 0.275 0.225 0.142 0.192 0.375 0.433 0.250 0.300 0.125 0.192 0.100 0.133 0.175 0.175 0.250 0.242 0.300 0.417 0.300 0.308 0.133 0.133 0.133 0.125  Levels md mdca mdcl mdal dcz daz dlz dcalz dcp dap dip dcalp mdzp mdcazp mdclzp mdalzp dcq daq dlq dcalq mdzq mdcazq mdclzq mdalzq mdpq mdcapq mdclpq mdalpq dczpq dazpq dlzpq dcalzpq  Obs 0.192 0.208 0.133 0.125 0.200 0.158 0.192 0.208 0.383 0.408 0.333 0.292 0.200 0.200 0.175 0.158 0.225 0.242 0.317 0.292 0.142 0.150 0.192 0.175 0.167 0.183 0.117 0.133 0.333 0.350 0.258 0.317  Levels ms msca mscl msal scz saz slz scalz scp sap sip scalp mszp mscazp msclzp msalzp scq saq slq scalq mszq mscazq msclzq msalzq mspq mscapq msclpq msalpq sczpq sazpq slzpq scalzpq  167  Obs 0.233 0.283 0.217 0.267 0.258 0.225 0.292 0.292 0.375 0.375 0.442 0.442 0.350 0.333 0.400 0.308 0.292 0.350 0.308 0.475 0.250 0.242 0.283 0.283 0.358 0.317 0.383 0.350 0.342 0.367 0.500 0.483  Levels ds dsca dscl dsal mdscz mdsaz mdslz mdscalz mdscp mdsap mdslp mdscalp dszp dscazp dsclzp dsalzp mdscq mdsaq mdslq mdscalq dszq dscazq dsclzq dsalzq dspq dscapq dsclpq dsalpq mdsczpq mdsazpq mdslzpq mdscalzpq  Obs 0.317 
0.342 0.358 0.417 0.275 0.267 0.292 0.275 0.342 0.400 0.408 0.400 0.408 0.392 0.417 0.442 0.225 0.183 0.300 0.308 0.342 0.367 0.425 0.408 0.433 0.417 0.467 0.483 0.300 0.333 0.392 0.333  B.2.2  Extraction Time  Effects Factor Effects Factor(s) 1(0) M(l) D(2) MD ( 3) S(4) MS ( 5) DS ( 6) C(8) MC ( 9) DC ( A) SC ( C) A (10) MA ( 11) DA ( 12) SA ( 14) CA ( 18) L ( 20) ML ( 21) DL ( 22) SL ( 24) CL ( 28) AL ( 30) Z ( 40)  Effect 22.786 6.369 0.322 0.165 1.434 0.102 -0.100 0.011 0.055 -0.019 -0.270 0.064 -0.463 0.051 -0.100 0.150 -0.512 0.319 -0.085 -1.548 0.134 -0.286 0.769  Factor(s) MZ ( 41) DZ ( 42) SZ ( 44) CZ ( 48) AZ ( 50) LZ ( 60) P ( 80) MP ( 81) DP ( 82) SP ( 84) CP ( 88) AP ( 90) LP ( AO) ZP ( CO) Q ( 100) MQ ( 101) DQ ( 102) SQ ( 104) CQ ( 108) AQ ( 110) LQ ( 120) ZQ ( 140) PQ ( 180)  168  Effect 0.524 -0.074 0.058 0.028 0.089 0.014 2.221 1.014 -0.012 0.177 0.174 -0.157 -0.173 -0.417 2.240 -0.272 -0.055 -0.063 -0.309 0.161 -0.119 0.109 -0.345  Outcomes Treatment Outcomes Levels 1 ca cl al mcz maz mlz mcalz mcp map mlp mcalp zp cazp clzp alzp mcq maq mlq mcalq zq cazq clzq alzq pq capq clpq alpq mczpq mazpq mlzpq mcalzpq  Obs 14.717 16.960 17.630 17.360 21.938 21.143 23.131 24.158 25.077 24.194 26.227 25.342 17.969 17.546 17.877 18.592 22.141 23.478 25.593 24.423 19.149 19.713 20.121 20.525 19.685 20.778 21.314 21.010 27.438 27.264 28.067 28.881  Levels md mdca mdcl mdal dcz daz dlz dcalz dcp dap dip dcalp mdzp mdcazp mdclzp mdalzp dcq daq dlq dcalq mdzq mdcazq mdclzq mdalzq mdpq mdcapq mdclpq mdalpq dczpq dazpq dlzpq dcalzpq  Obs 22.300 22.427 22.804 20.313 17.891 17.708 17.949 17.914 18.190 17.441 18.313 18.464 24.859 27.097 28.424 27.382 17.201 20.373 20.661 20.907 26.015 25.691 25.576 25.577 25.682 27.185 28.955 27.684 19.853 20.515 20.833 20.771  Levels ms msca mscl msal scz saz slz scalz scp sap sip scalp mszp mscazp msclzp msalzp scq saq slq scalq mszq mscazq msclzq msalzq mspq mscapq msclpq msalpq sczpq sazpq slzpq scalzpq  169  Obs  Levels 23.715 ds 23.417 dsca 22.484 dscl 21.917 dsal 18.634 mdscz 19.853 mdsaz 17.159 mdslz 18.324 mdscalz 21.692 mdscp 22.144 mdsap 18.337 mdslp 18.609 mdscalp 28.040 dszp 28.286 dscazp 28.077 dsclzp 26.724 dsalzp 20.477 mdscq 21.383 mdsaq 18.991 mdslq 19.520 mdscalq 31.471 dszq 27.258 dscazq 26.399 dsclzq 25.082 dsalzq 28.962 dspq 28.259 dscapq 27.615 dsclpq 27.054 dsalpq 23.558 mdsczpq 24.487 mdsazpq 20.107 mdslzpq 20.519 mdscalzpq  Obs 19.010 19.311 16.261 16.918 23.990 24.022 24.898 25.994 29.978 28.463 28.301 24.427 20.687 20.895 18.299 18.397 25.240 27.528 24.989 24.179 22.213 22.671 19.338 21.091 23.624 24.223 21.077 20.752 30.348 31.223 28.698 28.590  B.3  "G" Part  The following tables summarize effects and treatment outcomes for the "G" part under the vision orienter as described in section 6.4.3.  
B.3.1  Success Rate

Effects

Factor Effects

Factor(s)    Effect        Factor(s)    Effect
1 (0)         0.223        MZ (41)      -0.008
M (1)        -0.442        DZ (42)       0.008
D (2)         0.018        SZ (44)      -0.011
MD (3)       -0.018        CZ (48)      -0.009
S (4)        -0.042        AZ (50)       0.002
MS (5)        0.039        LZ (60)      -0.001
DS (6)       -0.008        P (80)        0.114
C (8)        -0.006        MP (81)      -0.112
MC (9)        0.005        DP (82)       0.005
DC (A)       -0.001        SP (84)      -0.029
SC (C)       -0.008        CP (88)      -0.005
A (10)        0.013        AP (90)       0.003
MA (11)      -0.012        LP (A0)       0.007
DA (12)       0.007        ZP (C0)       0.001
SA (14)       0.002        Q (100)      -0.095
CA (18)      -0.003        MQ (101)      0.093
L (20)        0.050        DQ (102)      0.004
ML (21)      -0.051        SQ (104)      0.020
DL (22)      -0.005        CQ (108)     -0.002
SL (24)      -0.010        AQ (110)      0.001
CL (28)      -0.008        LQ (120)     -0.014
AL (30)      -0.013        ZQ (140)     -0.001
Z (40)        0.007        PQ (180)      0.150

Outcomes

Treatment Outcomes

1 0.450          ca 0.500         cl 0.700         al 0.667
mcz 0.000        maz 0.011        mlz 0.000        mcalz 0.000
mcp 0.008        map 0.000        mlp 0.008        mcalp 0.008
zp 0.500         cazp 0.525       clzp 0.650       alzp 0.625
mcq 0.000        maq 0.000        mlq 0.000        mcalq 0.000
zq 0.042         cazq 0.067       clzq 0.058       alzq 0.075
pq 0.458         capq 0.575       clpq 0.725       alpq 0.717
mczpq 0.000      mazpq 0.011      mlzpq 0.000      mcalzpq 0.000
md 0.008         mdca 0.008       mdcl 0.000       mdal 0.000
dcz 0.533        daz 0.608        dlz 0.717        dcalz 0.750
dcp 0.508        dap 0.550        dlp 0.667        dcalp 0.625
mdzp 0.000       mdcazp 0.000     mdclzp 0.009     mdalzp 0.000
dcq 0.108        daq 0.050        dlq 0.092        dcalq 0.075
mdzq 0.000       mdcazq 0.000     mdclzq 0.000     mdalzq 0.000
mdpq 0.008       mdcapq 0.008     mdclpq 0.000     mdalpq 0.000
dczpq 0.592      dazpq 0.708      dlzpq 0.792      dcalzpq 0.800
ms 0.000         msca 0.000       mscl 0.000       msal 0.000
scz 0.408        saz 0.458        slz 0.675        scalz 0.583
scp 0.375        sap 0.408        slp 0.433        scalp 0.425
mszp 0.000       mscazp 0.000     msclzp 0.000     msalzp 0.000
scq 0.117        saq 0.108        slq 0.092        scalq 0.083
mszq 0.000       mscazq 0.000     msclzq 0.000     msalzq 0.000
mspq 0.000       mscapq 0.000     msclpq 0.000     msalpq 0.000
sczpq 0.417      sazpq 0.525      slzpq 0.642      scalzpq 0.550
ds 0.400         dsca 0.525       dscl 0.592       dsal 0.642
mdscz 0.000      mdsaz 0.000      mdslz 0.000      mdscalz 0.000
mdscp 0.000      mdsap 0.008      mdslp 0.008      mdscalp 0.000
dszp 0.392       dscazp 0.442     dsclzp 0.417     dsalzp 0.475
mdscq 0.000      mdsaq 0.000      mdslq 0.000      mdscalq 0.000
dszq 0.092       dscazq 0.125     dsclzq 0.075     dsalzq 0.125
dspq 0.500       dscapq 0.525     dsclpq 0.592     dsalpq 0.692
mdsczpq 0.000    mdsazpq 0.000    mdslzpq 0.000    mdscalzpq 0.000

B.3.2  Extraction Time

Effects

Factor Effects

Factor(s)    Effect        Factor(s)    Effect
1 (0)        22.052        MZ (41)      -0.279
M (1)        11.955        DZ (42)      -0.385
D (2)        -0.335        SZ (44)      -0.888
MD (3)        0.448        CZ (48)       0.581
S (4)        -0.250        AZ (50)      -0.397
MS (5)        0.186        LZ (60)      -1.115
DS (6)        1.874        P (80)        1.113
C (8)        -0.656        MP (81)       0.164
MC (9)       -0.153        DP (82)       0.370
DC (A)        1.068        SP (84)       0.638
SC (C)        1.367        CP (88)       0.295
A (10)        0.343        AP (90)       0.385
MA (11)       1.035        LP (A0)       1.301
DA (12)      -0.617        ZP (C0)       2.896
SA (14)      -0.980        Q (100)      -2.749
CA (18)       0.186        MQ (101)     -1.037
L (20)       -3.474        DQ (102)      0.024
ML (21)      -2.553        SQ (104)      0.294
DL (22)       0.185        CQ (108)      0.480
SL (24)       0.150        AQ (110)     -0.424
CL (28)       1.309        LQ (120)      0.720
AL (30)      -0.597        ZQ (140)     -3.737
Z (40)       -2.678        PQ (180)     -7.482

Outcomes

Treatment Outcomes

Levels  1 ca cl al mcz maz mlz mcalz mcp map mlp mcalp zp cazp clzp alzp
        mcq maq mlq mcalq zq cazq clzq alzq pq capq clpq alpq mczpq mazpq mlzpq mcalzpq
Obs     22.360 12.572 12.565 12.490 23.447 52.660 19.470 19.033 24.329 41.315 33.710 33.644
        21.164 21.000 21.204 21.004 42.196 43.156 34.431 34.238 21.091 12.330 12.224 11.970
        12.178 12.456 12.414 12.598 23.445 23.260 19.105 25.936
Levels  md mdca mdcl mdal dcz daz dlz dcalz dcp dap dlp dcalp mdzp mdcazp mdclzp mdalzp
        dcq daq dlq dcalq mdzq mdcazq mdclzq mdalzq mdpq mdcapq mdclpq mdalpq dczpq dazpq dlzpq dcalzpq
Obs     39.383 39.397 31.935 32.527 21.540 22.102 21.639 21.217 24.605 23.325 18.739 18.616
        23.234 23.761 19.504 19.750 12.388 12.572 12.378
Levels  ms msca mscl msal scz saz slz scalz scp sap slp scalp mszp mscazp msclzp msalzp
        scq saq slq scalq mszq mscazq msclzq msalzq mspq mscapq msclpq msalpq sczpq sazpq slzpq
        12.568 scalzpq
        24.309 26.551 19.348 19.602 15.403 13.933 12.519 12.377 14.760 21.411 21.468 21.444
Obs     23.889 24.488 20.408 20.352 12.932 12.557 12.427 12.697 19.435 22.057 21.485 21.179
        40.089 39.589 32.928 26.898 22.646 22.248 22.097 21.906 23.080 23.305 18.748 19.126
        23.871 24.011 20.191 20.789 13.416 12.990 12.678
Levels  ds dsca dscl dsal mdscz mdsaz mdslz mdscalz mdscp mdsap mdslp mdscalp dszp dscazp dsclzp dsalzp
        mdscq mdsaq mdslq mdscalq dszq dscazq dsclzq dsalzq dspq dscapq dsclpq dsalpq mdsczpq mdsazpq mdslzpq
        12.529 mdscalzpq 19.548
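In the treatment-outcome tables above, each label lists the factors (m, d, s, c, a, l, z, p, q) set to their high level for that run, with "1" denoting the run with every factor at its low level, and the first row of each effects table (factor code 0) corresponds to the overall mean response. The following Python sketch, which is not part of the thesis and whose function name, variable names, and toy subset are illustrative only, shows how a two-level factorial main effect is conventionally estimated from such a table.

# Illustrative only: conventional main-effect estimate for a two-level
# factorial experiment, computed from a {treatment label: observation} table.
# A factor's letter appears in a label exactly when that factor was run at
# its high level; "1" is the all-low treatment.  (Not code from the thesis.)

def main_effect(factor, outcomes):
    """Mean outcome with `factor` high minus mean outcome with `factor` low."""
    high = [y for label, y in outcomes.items() if factor in label]
    low = [y for label, y in outcomes.items() if factor not in label]
    return sum(high) / len(high) - sum(low) / len(low)


# Toy subset taken from the "L"-part success-rate table above.
subset = {"1": 0.175, "ca": 0.192, "ms": 0.233, "msca": 0.283}
print(main_effect("s", subset))  # ~0.0745: factor S raises success on this subset

Interaction effects (MD, PQ, and so on) are estimated analogously, splitting the runs by the product of the factors' plus/minus-one codings rather than by a single letter; applied to the full 128-run tables this yields estimates of roughly the magnitude reported above, although the thesis's exact scaling convention for effects may differ.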
