Open Collections

UBC Theses and Dissertations

UBC Theses Logo

UBC Theses and Dissertations

Multisensor fusion within an Encapsulated Logical Devices Architecture Elliott, Jason Douglas 2001

Your browser doesn't seem to have a PDF viewer, please download the PDF to view this item.

Item Metadata

Download

Media
831-ubc_2002-0072.pdf [ 15.71MB ]
Metadata
JSON: 831-1.0090205.json
JSON-LD: 831-1.0090205-ld.json
RDF/XML (Pretty): 831-1.0090205-rdf.xml
RDF/JSON: 831-1.0090205-rdf.json
Turtle: 831-1.0090205-turtle.txt
N-Triples: 831-1.0090205-rdf-ntriples.txt
Original Record: 831-1.0090205-source.json
Full Text
831-1.0090205-fulltext.txt
Citation
831-1.0090205.ris

Full Text

Multisensor Fusion within an Encapsulated Logical Device Architecture by  Jason Douglas Elliott  BASc, University of Waterloo, 1999  A THESIS SUBMITTED IN P A R T I A L F U L F I L L M E N T OF THE REQUIREMENTS FOR T H E D E G R E E OF M A S T E R OF APPLIED SCIENCE  in  THE F A C U L T Y OF G R A D U A T E STUDIES  (Department of Mechanical Engineering)  We accept this thesis as conforming to the required standard  THE UNIVERSITY OF BRITISH C O L U M B I A November 2001 © Jason Douglas Elliott, 2001  In  presenting  this  degree at the  thesis  in  partial fulfilment  of  University of  British Columbia,  I agree  freely available for reference copying  of  department  this or  publication of  and study.  thesis for scholarly by  this  his  or  her  the  requirements that the  I further agree  purposes  representatives.  may be It  thesis for financial gain shall not  is be  Nl^HActOLC-AC-  The University of British Columbia Vancouver, Canada  Date  DE-6 (2/88)  K W  1-2  ^Loo  \ .  & ^ 6 i ^ S £ ^ i M  an  advanced  Library shall make it  permission for extensive  granted  by the  understood  permission.  Department of  that  for  allowed  that without  head  of  my  copying  or  my written  Abstract This work is concerned with increasing the efficiency of the implementation, maintainability and operational reliability of automated workcells. These systems consist of a locally contained set of sensors and actuators that are integrated to perform a set of automated tasks. The specific contribution of this work includes the specification of the Heuristic-based Geometric Redundant Fusion (HGRF) method and discussions regarding the fusion of specific classes of complementary sensor measurements. Further, the Encapsulated Logical Device (ELD) Architecture is presented as a framework that facilitates, the systematic and efficient implementation of automated workcells.  This architecture incorporates the  fusion mechanisms proposed in this work. The H G R F method is an extension of the Geometric Redundant Fusion (GRF) method and is capable of fusing m redundant measurements in n-dimensional space. Unlike the GRF method, the uncertainty of the H G R F method's result increases as the level of disparity of the measurements increases. This ensures a realistic estimate of the uncertainty of the data even when unexpected sensor errors occur and when a small amount of data is available, as is common with automation workcells. The uncertainty ellipsoid representation, based on the Gaussian distribution, is utilized by this method. This limits the application. of this method to linear measurement spaces, unless a linear approximation to a non-linear measurement space is acceptable. This work also investigates the fusion of three classes of complementary sensor data. The dimensional extension function is useful for combining measurements taken of the same feature in non-overlapping linear measurement spaces.  The uncertainty modification mechanism is applicable when a sensor's  measurement can be used to modify the uncertainty of another sensor's measurement.  The range  enhancement mechanism is useful for combining different measurements taken in overlapping measurement spaces over different ranges of that space. Additionally, the projection method is a tool useful for manipulating the dimensionality of uncertain sensor data by projecting an ellipsoid into a lower dimension. 
The E L D Architecture allows systematic and efficient implementation of automation workcells across multiple hardware and software platforms. The E L D Architecture is based upon the Logical Sensor ( L S ) . paradigm and includes sensor and actuator functionality. Within the E L D the uncertainty of all sensor data is quantified and manipulated using the sensor fusion mechanisms detailed in this work. Consideration of the uncertainty of sensor data will enhance the operational reliability of industrial automation workcells.  ii  Contents Abstract  ii  Table of Contents  iii  List of Figures  vii  List of Tables  x,  Acknowledgements  1  2  xi  1  Introduction  1.1  The State of Automation in Industry  1  1.2  Systematic Design and its Benefits  1.3  Towards Robustly Operating Automation Systems  4  1.4  Project Scope and Objectives  .6  1.5  Outline  6  . .4  Literature Review  2.1  8  Sensor, Actuator and Control Architectures  2.1.1  Autonomous Robotic System Architectures  2.1.2  Real-Time Control Architectures  2.1.3  Sensor Integration and Fusion Architectures..  2.2  8 .......8 .9 .........10  Sensor Fusion and Integration  .13  2.2.1  Definitions and Concepts  ........13  2.2.2  Bayesian and Dempster-Shafer Representations  14  2.2.3  Possibility Theory and Fuzzy Logic  14  2.2.4  Geometric Ellipsoid Representation  15  2.2.5  Uncertainty Ellipsoid Derivation  17  2.3  Approach and Limitations  .19  iii  3  Uncertain Data Fusion 3.1  Redundant Data Fusion  21  3.1.1  S o m e Properties o f U n c e r t a i n t y E l l i p s o i d s  21  3.1.2  Geometric Redundant Fusion ( G R F )  23  3.1.3  Heuristic Specification  26  3.1.4  Heuristic Based Geometric Redundant Fusion ( H G R F )  28  3.1.5  Numerical Example  38  3.2  4  20  Complementary Data Fusion  40  3.2.1  Dimensional Extension  41  3.2.2  Uncertainty Modification  3.2.3  Range Enhancement  ....43 • 44  3.3  Projection  •  3.4  T h e E l l i p s o i d Representation and N o n - l i n e a r T r a n s f o r m a t i o n s  3.5  Summary  46 50  •  .........52  The Encapsulated Logical Device Architecture 4.1  53  The Encapsulated L o g i c a l Device  53  4.1.1  Specifications  4.1.2  E L D Components  4.1.3  T h e Sensor  4.1.4  The Manager  59  4.1.5  The Actuator  61  4.1.6  The Executing Thread  4.2  •  .55 .56  ..61  The E L D Architecture  4.2.1  Specifications  4.2.2  E L D Architecture Components  53  • •  62 62 63  4.3  Cross-Platform Implementation  63  4.4  E L D Architecture Builder  4.5  T h e Structure o f the E L D A r c h i t e c t u r e Product D e v e l o p m e n t  4.6  Summary  ..63  ;  64 ••••  ....... 65  5  Redundant Fusion Simulation Results 5.1  Fusion o f T w o One-Dimensional Measurements  5.1.1  6  Fusion o f T w o Two-Dimensional Measurements  5.3  F u s i o n o f Three  5.4  Summary o f Simulation Results  68 ..69  Two-Dimensional M e a s u r e m e n t s  .73 .76  Application Example. A Two-Degree-of-Frecdom Robot E x p e r i m e n t a l Setup  77 77  6.1.1  Physical System  6.1.2  Implementation Platforms  80  6.1.3  E L D Architecture Design  81  6.1.4  G r a p h i c a l U s e r Interface.  85  •.  ...77  6.2  C a l i b r a t i o n o f Sensors  86  6.3  Redundant Fusion Results  87  6.3.1  Figure Eight Trajectory  87  6.3.2  Square T r a j e c t o r y  89  6.4  Summary  Concluding Remarks  ...91  93  7.1  S u m m a r y and C o n c l u s i o n  93  7.2  Recommendations  94  References A  .......65  Discussion o f Results  5.2  6.1  7  65  ELD Architecture Classes A. 
1  E L D Architecture Class Design  A.2  Class CArchitecture Description  A.3  Class C E L D Description  A.4  Class CImage Description  96 101 ..101 102 .103 108  A. 5  B  109  A.5.1  E L D M e s s a g e s and N a m e s  109  A.5.2  S e n s o r Input/Output Structures  110  A . 5.3  C o m m u n i c a t i o n Structure  Ill  Matlab Redundant Fusion Algorithms B. l  112  Redundant Fusion Functions  112  B. l . l  T w o Measurement, One-Dimensional Redundant Fusion Function  112  B . 1.2  T w o Measurement, Two-Dimensional Redundant Fusion Function  113  B . 1.3  Three Measurement, Two-Dimensional Redundant F u s i o n Function  116  B. 2  C  Structure and D a t a D e f i n i t i o n s  Test Suites  ..119  B.2.1  T w o Measurement, One-Dimensional Redundant F u s i o n Test Suite  B.2.2  T w o M e a s u r e m e n t , T w o - D i m e n s i o n a l Redundant F u s i o n Test S u i t e . . . : . . . . . . . . : 119  B.2.3  T h r e e M e a s u r e m e n t , T w o - D i m e n s i o n a l R e d u n d a n t F u s i o n Test  Calibration Data C. l  Reference Frame Calibration  C.2  Sensor Measurement Variances  Suite  ..119  121  123 .....123 ..124  VI  r  List of Figures 1.1. A u t o m a t i o n w o r k c e l l w i t h m a n i p u l a t o r m o u n t e d p l a s m a c u t t i n g t o o l 2.1. M o b i l e R o b o t E x a m p l e o f B r o o k s ' S u b s u m p t i o n A r c h i t e c t u r e 2.2. T h e structure o f a L o g i c a l Sensor  .'.  2  .  9 11  2.3. L o g i c a l S e n s o r n e t w o r k f o r a range f i n d e r  ...........12  2.4. Z e r o t h - o r d e r S u g e n o - t y p e F u z z y Inference S y s t e m f o r the F u s i o n o f S e n s o r D a t a  15  2.5. G a u s s i a n P r o b a b i l i t y D i s t r i b u t i o n  16  2.6. T w o - d i m e n s i o n a l U n c e r t a i n t y E l l i p s o i d  18  3.1. T w o - D i m e n s i o n a l G a u s s i a n D i s t r i b u t i o n  22  3.2. O n e - d i m e n s i o n a l f u s i o n o f two measurements u s i n g the G R F m e t h o d  25  3.3. C a s e 1 - N o M e a s u r e m e n t D i s p a r i t y  26  3.4. C a s e 2 - M e a s u r e m e n t s "agree"  27  3.5. C a s e 3 - M e a s u r e m e n t ' s disagree  .27  3.6. C a s e 4 - M e a s u r e m e n t error  28  3.7. O n e - D i m e n s i o n a l M e a s u r e m e n t S p a c i n g V e c t o r s  30  3.8. N - D i m e n s i o n a l M e a s u r e m e n t S p a c i n g V e c t o r s  30  3.9. M e a s u r e m e n t S p a c i n g V e c t o r s and the Separation V e c t o r  31  3.10. C a s e A - F u s i o n o f t w o equal measurements  33  3.11. C a s e B - F u s i o n o f t w o measurements separated b y one standard d e v i a t i o n  33  3.12. C a s e C - F u s i o n o f t w o measurements separated b y t w o standard d e v i a t i o n s  33  3.13. C a s e D - F u s i o n o f t w o h i g h l y separated measurements  33  3.14. U n c e r t a i n t y S c a l i n g F u n c t i o n f o r T w o M e a s u r e m e n t s 3.15. T h e G e n e r a l U n c e r t a i n t y S c a l i n g F u n c t i o n 3.16. R e d u n d a n t F u s i o n N u m e r i c a l E x a m p l e  .35 37 .39  3.17. D i m e n s i o n a l E x t e n s i o n o f t w o o n e - d i m e n s i o n a l measurements  41  3.18. C e r t a i n t y m o d i f i c a t i o n based o n expert k n o w l e d g e  43  3.19. C a m e r a i m a g e range enhancement set-up  45  vii  V  3.20. R a n g e E n h a n c i n g C o m p l e m e n t a r y F u s i o n E x a m p l e .  45  3.21. 
P r o j e c t i o n and reconstruction o f a t w o - d i m e n s i o n a l e l l i p s o i d  46  3.22. T h e p r o j e c t i o n o f a three-dimensional e l l i p s o i d into a t w o - d i m e n s i o n a l e l l i p s o i d  49  3.23. L a s e r R a n g e F i n d e r M e a s u r e m e n t  50  3.24. M e a s u r e m e n t U n c e r t a i n t y i n (x,y)  Space w i t h L a r g e U n c e r t a i n t i e s  .....51  3.25. M e a s u r e m e n t U n c e r t a i n t y i n (x,y) space w i t h S m a l l U n c e r t a i n t i e s  52  4.1. T h e E n c a p s u l a t e d L o g i c a l D e v i c e ( E L D )  56  4.2. E x a m p l e E L D S e n s o r F u s i o n M o d u l e  58  5.1. F u s i o n o f T w o O n e - D i m e n s i o n a l M e a s u r e m e n t s - Series O n e  66  5.2. F u s i o n o f T w o O n e - D i m e n s i o n a l M e a s u r e m e n t s - Series T w o  67  5.3. F u s i o n o f T w o T w o - D i m e n s i o n a l M e a s u r e m e n t s - R e s u l t s Set O n e  69  5.4. F u s i o n o f t w o t w o - d i m e n s i o n a l measurements - R e s u l t s Set T w o  ....70  5.5. F u s i o n o f t w o t w o - d i m e n s i o n a l measurements - R e s u l t s Set T h r e e  71  5.6. F u s i o n o f T h r e e T w o - D i m e n s i o n a l M e a s u r e m e n t s - E x a m p l e O n e  73  5.7. F u s i o n o f T h r e e T w o - D i m e n s i o n a l M e a s u r e m e n t s - E x a m p l e T w o  74  5.8. F u s i o n o f T h r e e T w o - D i m e n s i o n a l M e a s u r e m e n t s - E x a m p l e T h r e e  .74  5.9. F u s i o n o f T h r e e T w o - D i m e n s i o n a l M e a s u r e m e n t s - E x a m p l e F o u r  75  6.1. X Y T a b l e E x p e r i m e n t a l Setup  78  6.2. T o p V i e w D i a g r a m o f the X Y T a b l e  78  6.3. X - A x i s D r i v e and E n c o d e r  79  6.4. Y - A x i s D r i v e and U l t r a s o u n d Sensor  79  6.5. X Y T a b l e H a r d w a r e Setup  81  6.6. X Y T a b l e E L D A r c h i t e c t u r e  82  6.7. O v e r h e a d Image G r a b b e d b y the Image E L D  83  6.8. X Y T a b l e G r a p h i c a l U s e r Interface  85  6.9. S e n s o r C a l i b r a t i o n Setup  86  6.10. F i g u r e E i g h t T r a j e c t o r y F u s i o n R e s u l t s  ,...88  6.11. E n l a r g e d V i e w #1  88  6.12. E n l a r g e d V i e w #2  89:  6.13. Square T r a j e c t o r y F u s i o n R e s u l t s  90  viii  6.14. Enlarged View #3 6.15. Enlarged View #4 A. 1. ELD Architecture Class Diagram  List of Tables 3.1. E x a m p l e R e d u n d a n t F u s i o n M e a s u r e m e n t D a t a  24  3.2. E x a m p l e R e d u n d a n t M e a s u r e m e n t D a t a  38  5.1. O n e - D i m e n s i o n a l R e d u n d a n t M e a s u r e m e n t s and F u s e d Results - D a t a  67  5.2. F u s i o n o f T w o T w o - D i m e n s i o n a l R e d u n d a n t M e a s u r e m e n t s - D a t a  72  5.3. F u s i o n o f T h r e e T w o - D i m e n s i o n a l R e d u n d a n t M e a s u r e m e n t s - D a t a  75  6.1. U n c e r t a i n t i e s o f X Y T a b l e S e n s o r M e a s u r e m e n t s  87  C . l . Calibration Data  123  C.2. M e a s u r i n g D e v i c e Accuracies  124  C.3. Sensor Variance Data  .124  Acknowledgements This thesis has come to be through the help and support of many people. Firstly, I would like to thank Dr. Elizabeth Croft for her supervision and guidance during my stay at UBC. You are always enthusiastic and helpful. I am grateful for your ideas and your mighty editing pen. 
To the members of the Industrial Automation Laboratory thank you for your friendship and the many helpful discussions. This work is a result of all of your efforts. David Langlois, thank you for all your hard work. I am continually amazed by your persistence and unselfish help. Sonja Macfarlane, thank you for your friendship and the brain power that you lent to me from time to time. I gratefully acknowledge the financial support that I received from the Natural Sciences and Engineering Research Council of Canada, the BC Advanced Systems Institute and the UBC top up program. This support made this work a reality. I would also like to thank my family. My parents, Dave and Carol Elliott, you are always with me, support me and pray for me. I am grateful for your love. My brother Jeff, you are my best friend. I always look up to you and am impressed by you. To Ronald and Beverly Glauser, you have helped more than you know in getting me to this point. Thank you for yourfriendship,your home and all of your support. I am grateful. Shannon, you are the flower of my life. Thank you for your love. It is my refuge. Lastly, and most importantly, I would like to thank the Lord Jesus without whose help and support I would not be who I am. Thank you.  xi  Chapter 1 1 Introduction 1.1 T h e State of Automation in Industry Automation systems are increasingly common in a wide variety of industries ranging from organic material production in green house operations to the assembly lines of automotive manufacturing facilities.  In 2000 the industrial robot industry generated 5.1 billion dollars US in worldwide sales  bringing the total number of robotic manipulators in use to 742,500 [41]. Automation systems vary widely in design and packaging. Some systems are highly distributed - through a large factory or even globally through a multinational corporation. However, a large proportion of automation systems are workcell based. Automation workcells are locally contained systems, composed of a number of sensors and actuators that are integrated to perform a predefined set of operations. A n example system, shown in Figure 1.1, consists of a robotic manipulator equipped with an end-effector mounted plasma cutter, designed for cutting sheet metal. Typically, such systems are not concerned with the data transmission delays that are inherent in distributed systems.  Figure 1.1: Automation workcell with manipulator mounted plasma cutting tool (Courtesy of the University of Laval Manufacturing Process Laboritory).  The hardware and software used to implement automation systems is also varied. The Programmable Logic Controller (PLC), offered by a number of companies including Seimens and Allen-Bradley, is the de facto standard hardware/software platform for the implementation of automation systems. These systems offer dependable operation, however, they have limited functionality. The utilization of the Personal Computer (PC) platform coupled with peripheral input/output capable hardware is becoming more common. This platform offers low cost, increased processing speed, improved reliability, flexibility and the possibility of integration with a wide variety of other applications and systems. Microcontrollers are also gaining popularity in automation applications for their small size and low cost. The software platforms and tools utilized in automation systems are even more diverse than the hardware platforms described above. Each hardware platform typically supports a number of software choices. 
Proprietary software packages, such as image processing software tools, are typically designed for specific tasks or setups.  This makes integrating and modifying components difficult, if not 2  impossible. Often the purchaser, who is designing the automation system, is not given access to the internal workings of the system. On the other hand, a basic programming language, such as C++, could be used to design the system. This solution offers flexibility and full access but has no pre-designed functionality. Given the wide range of hardware and software platform choices available it is not surprising that an ad hoc approach is often used for automation system design. Often, industrial automation systems consist of components that are pieced together using dissimilar hardware and software sourced from a number of different companies. As much as 50-60% of the design and implementation time of automation workcells is spent on component integration [41]. The result of ad hoc design is inefficient implementation, poor maintainability and increased obsolescence [35]. The cost of an automation system ranges from thousands to millions of dollars. The labour required for integration of system components amounts to as much as 50% of the overall cost of designing an automation system [41]. Therefore increasing the efficiency of implementation and design would have large economic benefits.  Albus [35] states that the largest financial gains are to be found in system  integration, training and maintenance.  The cost of component integration has impeded developments  towards reliably operating intelligent industrial robotic systems [41]. The maintainability of an automation system depends on factors including system complexity, component interconnectivity and dependence and availability of diagnostic data and tools.  The  complexity of a system naturally increases as the complexity and number of operations it performs increases. However, the complexity also increases with increased variety of hardware and software used, and with a lack of uniformity in system wide design methodology. Furthermore, system maintenance becomes more difficult as the components of the system become more interconnected and dependent on each other. In a highly interconnected system the failure of a single component will cause the failure of a number of other components, making it difficult to pinpoint the source of a failure. Additionally, modifications to a component can potentially cause other dependent components to fail. The maintenance of automation systems is aided by diagnostic tools and the openness of a system's architecture. Diagnostic tools become difficult to implement and use i f the components of the system vary in implementation methodology.  Further, i f the system is not open (meaning that all system  components are viewable and modifiable) maintenance can become difficult as not all operational data may be viewed. A n automation system becomes obsolete when it is not financially viable to continue to use the system or modify its functionality as needed. Slow speed, low accuracy and poor reliability of components, as well as changing system requirements, could cause obsolescence.  3  In this case, the inadequate  components could be upgraded or functionality added to the system. However, if the system cannot be easily and economically modified due to a non-modular design, then the result is obsolescence. An easily modifiable and adaptable design could help to avoid the cost of obsolescence.  
1.2 Systematic Design and its Benefits A systematic design methodology encapsulated in an architecture will enhance the implementation efficiency and maintainability of automation systems  [33][41 ].  An architecture is a framework that  specifies the design of the components of a system, the connections between components and the locations of data storage in the system. An automation system architecture should ensure modular component design. A modular design • decreases the system's design complexity by compartmentalizing the design problem, thus increasing the implementation efficiency. Modularity also increases the maintainability and adaptability of the system. A modular component is easily replaced by a new component even if the new component is internally dissimilar to the old component. In fact, components with identical inputs and outputs are completely interchangeable. Further, a modular system's functionality is easily advanced since new components added will not affect existing components. This is a similar argument as to Object Oriented Design (OOD) in software engineering [37]. An automation system architecture should also be open. An open architecture allows the system designer and maintainer to view the internal workings of the components. This permits use of diagnostic tools in design, testing and maintenance. The need for open architectures is recognized as well for CNC machines [35]. A design framework or architecture, applied to automation workcells enables efficient implementation, maintainability and adaptability of the system throughout the entire life cycle of the system.  1.3 Towards Robustly Operating Automation Systems It is important for an automation system to operate reliably under all operating conditions. The ,. industrial environment can be controlled relatively tightly, however, variances can never be fully eliminated. If an automation system does not adapt to changing conditions then operational failure will result.  The failure could be unsatisfactory performance, resulting in defective product and lost .  manufacturing time. Failure could also result in equipment damage, such as broken tooling,fixturingand sensors.  These failures have undesirable financial consequences that should be avoided in an  economically driven industrial environment. The present approach to designing an automation system is to ignore the uncertainty in operating conditions. The variability that the system will encounter is minimized through precise fixturing and by designing operations that are "easy" for the system to perform. Typically, the task level operates in open 4  loop. If a fault occurs with the fixturing, product or automation mechanisms an operational failure results since recovery is not possible. This limits the complexity of practical operations and restrictions are placed on the design of the product itself. This is illustrated in the following example. A plastic automotive fuel tank manufacturer attempted to automate a part deflashing operation. The manual operation involved removing excess plastic from a blow moulded part using a knife. A worker accomplished this immediately after the moulding process while the plastic was still hot. The worker was able to easily follow the pinch line with a hot knife. However, a high occurrence of injuries prompted an automation attempt.  A large Fanuc robot was equipped with a heated knife, a precise fixture was  developed and the desired trajectories were programmed. 
The automated process had a high rate of failure due to the cooling and shrinking of the part.  Tools were broken and parts were scrapped.  Modifications were made to the knife adding a spring system that would allow the knife to follow the pinch line. The pinch off in the mould was also modified to make the deflashing process easier for the robot. However, in the end the project was scrapped in favor of some knife proof smocks for the workers [10]. While in this particular case, a simple and effective solution was found; environmental, safety or quality concerns may often demand a better automated solution. Coupling the automation system to the environment through an array of sensors can overcome such problems as those encountered in the previous example. In this way the system is able to respond and adjust to changing operating conditions. Instead of hard coding the model of the workspace in the design of the system the actual workspace can be sensed and used as a model [6].  While this approach requires more computational and capital  resources, the result is reliable operational performance with decreased downtime, a higher quality product and the capability of autonomous decision-making. It is also possible to increase the operational reliability of an automation workcell by designing the system to make decisions based upon an appropriate level of uncertainty. This requires quantifying the uncertainty of data for use in functions that direct the operation of the system. Further increases in reliability can be achieved by increasing the quality of sensor data through fusion [25]. Every sensor has a level of measurement noise and bias and could possibility produce erroneous readings. Sensor failure or inaccuracy could result in system failure. Therefore, it is beneficial to use multiple sensors that view the workspace in both a redundant and complementary manner. In this way their knowledge may be fused, thus increasing the accuracy and quality of the data.  This will also  increase the reliability of the operation of the system in the case of sensor fault and enable autonomous recovery from sensor failure.  1.4 Project Scope and Objectives The objective of this project is to make a contribution towards the development of an automation architecture that is efficiently implemented, maintainable and reliably operating. The contribution that this project makes is the specification and implementation of a modular architecture that allows the intelligent integration of sensors and actuators in an industrial automation workcell environment. This work is also concerned with developing tools for fusing redundant uncertain sensor data within this architecture. The integration of complementary uncertain sensor data is only addressed for some specific cases in this work. This work does not address the problem of how to reason with uncertain data or gathering evidence. Instead, this work focuses on providing better data as it is collected from multiple sensors. Further, the use of uncertain data in the control of actuators is not within the scope of this project. However, this topic has been addressed in related work in the Industrial Automation Laboratory [20].  1.5 Outline The structure of this thesis is outlined as follows: Chapter 1  Introduction: This introductory chapter.  Chapter 2  Literature Review: Literature covering the areas of both automation system architectures and fusion of uncertain sensor data. A statement about the approach and limitations of this work is made.  
Chapter 3  Uncertain Data Fusion: Tools developed for the fusion of uncertain sensor data. Redundant fusion is investigated in detail and some specific cases of complementary fusion are discussed.  Chapter 4  The Encapsulated Logical Device (ELD) Architecture: The ELD Architecture is presented as an architecture that enables the intelligent integration of multiple sensors and actuators in an industrial automation workcell environment. The architecture is designed to allow the integration of a wide variety of both sophisticated and simple devices.  Chapter 5  Redundant Fusion Simulation Results: The performance of the proposed redundant fusion method is investigated through a variety of simulation examples.  Chapter 6  Application Example: The development of a simple two-degree-of-freedom robot using the E L D Architecture is presented. Redundant fusion within this system is also presented.  Chapter 7  Conclusions and Recommendations: A conclusion to this thesis is given including a summary of the contributions made. Recommendations for future work towards the completion of an effective automation system design tool are given.  7  Chapter 2  2 Literature Review In this chapter, relevant research in the area of sensor and actuator architectures, sensor fusion and data uncertainty is presented. The majority of research in this field has focused on intelligent robotics and, specifically, on autonomous mobile robotics. This thesis utilizes this previous work and applies it to the area of industrial automation workcells.  2.1 Sensor, Actuator and Control Architectures System architectures have been a focus of current research in a number of different fields including autonomous robotic systems, real-time control systems, and sensor integration systems.  Various  architectures, as they have been designed for these fields, are discussed below.  2.1.1  Autonomous Robotic System Architectures  Autonomous robotic systems typically focus on task planning, decision making and world modeling. The goal of this field of research is the eventual development of human-like robotic behaviour. Brooks in [5] proposed the Subsumption Architecture, which provides a purely reactive mechanism for generating system action. Individual behaviours are designed such that when a set of conditions is met the action associated with that behaviour is engaged. For example, a mobile robot would avoid collision with objects in its environment by having a behaviour that would move the robot away from obstacles when one is detected. The individual behaviours are arranged in a hierarchy such that lower level behaviours override higher level behaviours. Figure 2.1 shows an example subsumptive architecture for an exploring and obstacle-avoiding robot. Behaviour based systems such as this experience a tight coupling between  8  sensing and action, creating a system that is responsive to its environment.  The overall intelligent  behaviour of a subsumptive system emerges through a combination of lower level behaviours. While this architecture is effective for small systems it has yet to be shown tractable and reliable on a large-scale industrial automation system.  Reason About Behavoiur of Objects Plan Changes to the World Identify Objects Monitor Changes Sensors  -> Actuators Build Maps Explore Wander Avoid Objects  Figure 2.1: Mobile Robot Example of Brooks'Subsumption Architecture.  Murphy proposed the S F X Architecture in [28] which is based upon the action-oriented perception paradigm. 
This theory presents sensor fusion as an intelligent process able to set perceptual objectives, determine a sensing plan and adapt to malfunctions. The architecture encompasses the sensory system allowing for the generation and execution of active sensing plans.  Further, this architecture uses a  Dempster-Shafer mechanism for managing uncertainty in sensor measurements [29]. Architectures, such as these, developed for autonomous robotic applications typically treat low-level sensing and control of actuators as a black box. Subsequently, this part of the system is implemented in an ad hoc fashion. Additionally, these systems focus on the ability to emulate human-like intelligence and do not address the issues of industrial applicability such as maintainability and reliability.  2.1.2  Real-Time Control Architectures  The Real-time Control System (RCS) [2] is an architecture designed for intelligent real-time control systems. Further, the N A S A / N B S Standard Reference Model (NASREM) architecture [3], based on the RCS, specifically targets telerobotic system control. These architectures are a hierarchical graph of processing nodes that consist of sensory processing, world modeling and task decomposition modules. The nodes are arranged using characteristic functionality and timing. Components at various levels of the hierarchy are designed for specific functions and there is high interconnection between components.  The Open Control Platform (OCP) is an Object Oriented software architecture designed as part of the Software Enabled Control (SEC) project with the goal of enhancing the ability to analyze, test and develop control systems and embedded software [33].  This system controls the execution and  communication of embedded system application components, provides a simulation environment where components can be tested in a virtual environment and interface to useful design and simulation tools such as M A T L A B . While these architectures have made significant contributions to real-time control system design they do not address the fusion of sensor data or the management of the uncertainty of sensor data. Additionally, these architectures have yet to be shown as usable and feasible in an industrial automation application.  2.1.3 Sensor Integration and Fusion Architectures Sensor integration and fusion research has focused on the combination and utilization of sensor data. Many varied schemes have been proposed that address this problem. Luo and Kay [23] provide a good survey paper of this work.  One approach presented in [22] by Lou and L i n discerns sensor data  utilization based on the spatial proximity of the feature being detected and the level of detail required. According to their work, sensors for robotic applications can be grouped into one of the four categories of 'Far Away', 'Near To', 'Touching', and 'Manipulation'. The level of detail required determines the level of sensing used. A different approach to sensor fusion involves artificial neural networks, which provide a unique, biologically inspired approach to the fusion problem. A neural network model, based on the barn owl's characteristics, has been presented in [32] that fuses visual and acoustic information. Neural networks, while offering advantageous properties such as adaptability and learning, are not maintainable, have a closed architecture and provide a low level of confidence in reliability of operation over the entire operating range. 
At a lower level, the National Institute of Standards and Technology (NIST) and the Institute of Electrical and Electronics Engineers (IEEE) have proposed the IEEE-PI451 Standard for a Smart Transducer Interface for Sensors and Actuators [15][16]. This work is concerned with developing a common communication interface at the transducer level to enable the plug and play of sensors and actuators in a network of smart transducers. This work only addresses the physical connection between sensors and actuators and does not specify how to design an automation system or how to integrate sensor data in an automation system.  10  2.1.3.1 Logical Sensor Architectures The Logical Sensor (LS) has been chosen by a number of researchers as the base for development of sensing and control architectures. Henderson and Shilcrat first proposed the Logical Sensor System in [11], later adding a controlling signal to the architecture in [12]. The structure of the L S is shown in Figure 2.2. Characteristic t Output Vector  Control Commands!  Logical Sensor Name Selector  Program 1  Program n  Logical Sensor Inputs  Logical Sensor Inputs  Control Command Interpreter  Commands to Logical Sensors  Figure 2.2: The structure of a Logical Sensor (Adapted from [12] Figure 2).  A L S is an abstraction of a sensor where the physical sensor is separated from its functional use. LSs can be designed to sense a specific feature and the detection mechanism is transparent to the overall system. For example, a watermelon detecting LS could be designed that processes the data from an array of inputs such as colour, mass, volume and shape logical sensors. Each L S is an encapsulated module that contains all knowledge and functionality required to operate as an independent sensing entity. A L S architecture is formed by the hierarchical combination of a number of LSs. Within the architecture each L S is only able to communicate to those LSs that it is directly connected to. The properties of the L S make this architecture scalable and modular and following the introductory discussion of this paper, this is important for industrial applications. A n example of a LS architecture for a range finder that utilizes stereo vision and an ultrasonic sensor is shown in Figure 2.3.  11  Range Finder  r r Stereo  Fast Stereo  Slow Stereo  t / I  1  1  Camera 1  • 1  Ultrasonic Sensor  1  Camera 2  Figure 2.3: Logical Sensor network for a range finder (Adapted from reference [25] Figure 7). Weller et al. [40] integrated error detection and recovery into the L S specification using knowledge bases localized in each L S . Zheng [43] integrated a L S network into a hierarchical robot control architecture. The reported advantage of using LSs in this application was the increase in intelligence at all levels of the control architecture due to the sensing available. In Zheng's work the LS hierarchy was separate from the control structure hierarchy. Later Budenske and Gini [8] included actuation in their Logical Sensor/Actuator architecture.  However, this work focused on plan execution through the  decomposition of a high level abstract goal into a real-time detailed plan and was not concerned with the real-time control of actuators. Naish [30] presented the Extended Logical Sensor Architecture (ELSA) where feature based object models were used to construct the architecture for a specific sensing application. The architectures discussed are useful for the integration of multiple sensors in an intelligent automation system. 
The following section focuses on sensor fusion and integration, and specifically, on mechanisms used for propagating uncertainty throughout an architecture.  12  2.2 S e n s o r Fusion and Integration The use of multiple sensors results in increased sensor measurement accuracy as is shown by Richardson [36]. He proves that additional sensors will never reduce the performance of the optimal estimator. Further, system operational reliability will increase with additional sensors since the system is more resilient to sensor failure [25]. It is also possible to perceive features with multiple sensors that are not perceivable when using a single sensor. These benefits enable increased intelligent behaviour in an automation system. The following section presents definitions and background concepts useful in the area of sensor fusion and integration. A discussion of the relevant sensor fusion approaches involving uncertain data follows, including Bayesian and Dempster-Shafer Theories, Possibility Theory and Geometric Uncertainty Ellipsoids.  2.2.1  Definitions and Concepts  Sensor fusion is defined as the combination of data from multiple sensor sources into one representational format. Sensor Integration has been differentiated from sensor fusion in [25] to be the synergistic use of information from multiple sensor sources to aid in the operation of a system. In this framework fusion is concerned with the actual combining of data whereas sensor integration is the general use of multiple sensors with the goal of obtaining more intelligent sensing at all levels of abstraction. There is also a distinction made regarding the level at which data is fused. There are three categories: data fusion (low-level), feature fusion (intermediate-level) and decision fusion (high-level) [17]. The data level typically involves fusing (mostly unprocessed) sensor data such as encoder and range finder outputs. . The feature level involves fusing higher-level information, such as the colour or shape of an object. At the highest level, decision level fusion involves fusing information such as 'the object is a red apple' with 'the object is a semi-red apple'. Information from multiple sources can be classified as independent, redundant or complementary [25]. Independent information is entirely unrelated and is not useful for fusion. Redundant information is the result of multiple sensors viewing the same feature, in the same space, possibly using entirely different means. Complementary information is the result of multiple sensors viewing a feature in different spaces or different ranges of a space. Sensor Data Uncertainty is a quantification of the limits of error in a measurement. Uncertainty results from many sources of imprecision and bias and is inherent in sensory systems. Therefore a well designed automation architecture, together with its fusion and integration mechanism, must be able to cope with uncertainty.  A workshop on Multisensor Integration in Manufacturing Automation [13] concluded, in  studying sensor uncertainty, that uncertainty should be represented at all levels of the architecture and that  13  it should be explicitly represented.  Two fundamental questions were fonnulated at the workshop; (1)  which formal description of uncertainty should be used in a multisensor fusion architecture; and (2) what mechanism should be used to move uncertain information through the architecture. 
There have been a number of proposed approaches to these problems including Bayesian and Dempster-Shafer Theories, Possibility Theory and Geometric Uncertainty Ellipsoid based methods. These are discussed below.  2.2.2 Bayesian and Dempster-Shafer Representations In the area of expert systems and decision support systems the problem of drawing conclusions from incomplete and uncertain data has been investigated at length.  This work centers on decision-level  fusion. The problem addressed in this work is usually formulated as determining the uncertainty of a fact in the light of new additional evidence or information. Both Bayesian and Dempster-Shafer approaches have been used in this application and the superiority of one method over another has been debated [21][23]. The Bayesian method is based in probability theory. This method has been criticized for the large amount of statistical data required to design a reliable system [38]. Further, Bayesian theory fails to distinguish between statistical uncertainty and ignorance or lack of knowledge. There is an implicit assumption that all relevant probabilities are known with precision while the probabilities themselves describe random events [38]. On the other hand, the Dempster-Shafer model allows for differentiation between disbelief and the lack of belief, that is, between having evidence that disconfirms a hypothesis and a lack of evidence that confirms the hypothesis. Murphy [29] provides a method for sensor fusion at the symbol level using Dempster-Shafer theory.  2.2.3 Possibility Theory and Fuzzy Logic Fuzzy logic was introduced as a method that models the vagueness of natural language [39]. Fuzzy logic does not separate logic and probability and is a descriptive language for modeling uncertainty [38]. There have been many fuzzy approaches to the sensor fusion problem [42][9].  Mahajan et al. [26]  presented a fuzzy inference system that integrates sensor measurements of operating conditions, such as temperature, as weighting factors utilized in redundant sensor fusion. As is shown in Figure 2.4, the operating parameters of strain and temperature are used to determine the contribution of a measurement in redundant fusion. The uncertainty of a measurement is essentially determined though expert knowledge and complementary sensor measurements.  14  1. Inputs Fuzzification  \,ow  V.  2. Fuzzy Operations  1 \low  0  t  t  0  ;  3. Fuzzy Implication  low  J  1  •  •  [__•  0  jm  1. I f strain is low o r temperature is low then weight is low  J  1  tokay  >  J  0  -100  +200  2. I f strain is okay and temperature is okay then weight is high strain= 175 Input 1  Temperature-10  4. D e f u z z i f i c a t i o n  Input 2  Output 0 weight=59.4  Figure 2.4: Zeroth-order Sugeno-type Fuzzy Inference System for the Fusion of Sensor Data (Adapted from |26| Figure 3). Fuzzy logic is a useful tool in such situations as it is not always necessary or beneficial to exert energy and cost into precisely defining relationships between sensor measurements and Operating conditions. Further, expert knowledge about the effect o f parameters on the certainty o f sensor data can be embedded in the form o f fuzzy rules.  2.2.4  Geometric Ellipsoid Representation  The Geometric Ellipsoid Representation is a geometric parameterization o f probability density functions. This representation is based upon the assumption that uncertainty can be quantified by using Gaussian probability density functions. 
This assumption is made based on computational ease since this distribution is easily parameterized using standard deviation, a , and mean, p.  The Gaussian curve is  shown in Figure 2.5 and is given as, I  2;rcr  15  2<r-  (2.1)  fi(x) 0.4/  J0  3-  1 0.2-  /  -i  r—|—'1  I  '  0.1-  1—1—1—•—i—rn-  Figure 2.5: Gaussian Probability Distribution ( u=0, o=l).  The geometric ellipsoid representation has been chosen as the most suitable representation of uncertain sensor data for this application. Contributors to the Multisensor Integration in Manufacturing Automation Workshop agreed that probability models best suit data at the physical or geometrical level and that another representation may be more appropriate at the symbol level [13]. The conceptual simplicity that uncertainty ellipsoids allow, especially in higher dimensional spaces, makes industrial acceptance of such a system more likely.  Further, being a simple parameterization of the well established Gaussian  distribution provides a firm statistical basis. This allows the quality of sensor data to be expressed using a standard deviation measure determined through experimentation. However, the Gaussian assumption is criticized since random processes do not necessarily follow the Gaussian distribution. For example, laser range finders have been noted not to follow such a distribution [13].  The choice of which distribution to use is usually concerned with efficiency and simplicity rather  than the "real" distribution [7]. The tradeoff between having highly accurate distributions and having a simple and useable system is a decision that is best determined by the application. A system that is . complicated and inefficient will never find acceptance in industry. Marzullo proposed a simple geometric based fusion method using upper and lower bounds for each sensor measurement [27]. This method was based on the assumption that if two sensor measurements are correct then their intervals will overlap. The overlapping region of the measurements was taken as the most reliable measurement under the assumption that a limited number of sensors could fail. This method did not indicate how the sensor bounds are determined and had no basis in probability theory.  16  Nakamura proposed a Geometrical Redundant Fusion (GRF) method for multi-sensor robotic systems in [31]. This method is capable of fusing any number of //-dimensional measurements.  This method is,  based upon the Uncertainty Ellipsoid, as described in the following section, and it assumes that measurement noise follows a Gaussian distribution and that there is no bias in the measurements.  The  fusion method is statistically based and is computationally efficient. Additionally, as shown in [31], the method results in an algorithm similar to those obtained by Bayesian inference by minimum variance estimate, Kalman filter theory and the weighted least squares estimate. The derivation of the Uncertainty Ellipsoid is given below while the details of the derivation of Nakamura's method are left until Chapter 3.  2.2.5 Uncertainty Ellipsoid Derivation Consider m sensors measuring the same feature in the same //-dimensional domain. The /''' sensor measurement is given as x„ which is an //-dimensional vector.  The uncertainty of the measurement can  be written as follows:  x. =x + Sx, where x is the true measurement and dx  is the i" measurement's noise, which is assumed to follow a  Gaussian distribution. 
The mean of Sx , Sx variance of Sx  (2.2)  is 0 as a result of the Gaussian assumption and the  is given as follows: VanSx \=  (2.3)  where  2  cr is the covariance matrix which is referred to as an //-dimensional uncertainty ellipsoid matrix ;  defined in the principal axes frame, 3  . The sensor measurement, x„ is transformed according to the  following equation,  which can be linearly approximated by,  y. =f(x ) + J(x )&c , l  i  i  (2.5)  where,  dx.  17  (2.6)  is the Jacobian matrix of / with respect to x,. The mean, y , and variance of the transformed sensor measurement is then given as, y .=/(£,)  (2-7)  Var\y\=J o}j .  (2.8)  T  p  Therefore, in general the uncertainty ellipsoid is a fully populated matrix and only becomes diagonal when expressed in the principal axes frame. The diagonal entries of the uncertainty ellipsoid matrix (the eigenvalues) expressed in the principal axes frame define the dimensions of the principal axes of the ellipsoid.  Figure 2.6 displays a two-dimensional uncertainty ellipsoid with its associated uncertainty  ellipsoid matrix.  i p  y  P  =L °U / 3 ~  1  2  r\ ~|  /  0  2  \  0  Figure 2.6: Two-dimensional Uncertainty Ellipsoid. One can note that the assumption in equation (2.5) requires that the transformations be linear or suitably approximated as linear in the region of interest. This is a somewhat restrictive assumption that is discussed in more detail in Chapter 3. Additionally, some very useful transformations are derived in Chapter 3 as well. Lee and Ro in [22] present a method of automatically reducing uncertainties in a robotic sensory system based upon geometric data fusion. A network of feature transformation, data fusion and constraint nodes manage the propagation of uncertainties throughout the sensing levels. Lou and Lin in [24] developed a redundant fusion method that utilized distance measures that specify the relative agreement or disagreement between sensors. If the measurements are close enough together, as defined by a chosen threshold, then they are useful for fusion. Measurements that are not close together are suspected to be incorrect. The largest group of agreeing sensors is utilized for fusion and the  18  others are discarded. This method is based upon a one-dimensional Gaussian distribution for sensor uncertainty and is suitable only for fusing one-dimensional data. The perimeter of the uncertainty ellipsoid, as derived above, is set at one standard deviation, l a . This results in a 0.68 probability that the ellipsoid contains the true value. Alternatively, the uncertainty ellipsoid could be sized to a a , where a is a constant, i f a larger or smaller range of allowable uncertainty is desired. In this work the l a uncertainty ellipsoid (a = 1) is used, as is common in the literature [22][31]. However, the same method could be used for other values of a.  2.3 Approach and Limitations Given the scope and objectives of Section 1.4, the LS has been chosen as the concept on which the E L D Architecture is based. It is the most applicable and sufficiently general framework available. This is evidenced by the number of researchers that have chosen this framework as the base for their developments. The reactive properties of the subsumptive architecture and the real-time considerations of Albus' work can be incorporated into the LS framework i f desired.  The similarities of all these  approaches are apparent. 
Further, the geometric ellipsoid representation has been chosen as the most suitable representation of uncertain sensor data for this application. The conceptual simplicity that uncertainty ellipsoids allow, especially in higher dimensional spaces, makes industrial acceptance of such a system more likely. Further, being a simple parameterization of the well established Gaussian distribution provides a firm statistical basis. However, relying on a Guassian distribution limits the type of sensor uncertainty that can be accurately modeled. This is a trade-off that is commonly accepted in industrial processes. The use of a geometric uncertainty ellipsoid representation does not preclude the use of Bayesian or Dempster-Shafer theories to govern higher symbolic level uncertain reasoning and evidence gathering. This, however, is beyond the scope of this work. Fuzzy Logic on the other hand, may be a good tool for this application. Fuzzy systems offer simple conceptualization and flexibility in the distributions used to represent uncertainty. However, a statistical base is the preferred choice for a first implementation due to its historical acceptance. Nakamura's G R F method [31] has been chosen as the most applicable redundant fusion mechanism available. This method is extended in this thesis and is used as a tool in the E L D Architecture. Its dimensional versatility make an ideal fit in the chosen framework.  19  Chapter 3 3 Uncertain Data Fusion Considering the range of approaches that result from the variety of fusion problems encountered in the literature discussed in Chapter 2, proposing a single algorithm for the fusion of all possible types of data is not feasible. This chapter details some sensor fusion mechanisms that are applicable to specific data cases and can be used within an industrial automation workcell architecture. Specifically, a redundant fusion mechanism that properly combines sensor data regarding the same feature is detailed. Based on a careful examination of this method, an expanded, more representative approach for redundant data fusion is presented. In the second section of this chapter, tools for specific cases of complementary fusion are outlined. Specifically examined herein are three complementary fusion mechanisms, namely: dimensional extension, uncertainty modification and range enhancement.  The dimensional extension function is  useful for combining measurements taken of the same feature in non-overlapping measurement spaces. The uncertainty modification mechanism adjusts the uncertainty of a sensor's measurement based on another sensor's measurement. The range enhancement method combines different measurements taken in different ranges of overlapping measurement spaces. In the latter portion of this chapter a method for projecting an ellipsoid into a lower dimension is outlined. This method is useful when only some dimensions of an ^-dimensional measurement are needed for fusion or processing. As was stated previously, the fusion functions developed herein assume a Gaussian distribution for the uncertainty of all sensor data. This is a consequence of the uncertainty ellipsoid representation. Further, only transformations that are linear in the coordinates of the measurement space are valid for ellipsoids.  20  This restricts the application of the proposed ellipsoid based methods to linear measurement spaces. This restriction and its limitations are discussed further in this chapter.  
3.1  Redundant Data Fusion  Redundant measurements are quantifications of the same feature in the same domain.  The  mechanisms used to generate redundant measurements can be completely dissimilar. For example, a laser range finder, an ultrasound sensor and a digital camera can redundantly measure the position of an object's surface along an axis. However, i f these sensors measure the position along different axes, then the measurement's spaces differ and the data is not fully redundant. The fusion of sensor data can result in an increase or decrease in uncertainty associated with the fused data. In either case this is an increase in the overall knowledge of the system. For example, if redundant measurements "agree" then the knowledge is reinforced and the uncertainty of the fused result is less than that of the individual measurements.  However, i f the measurements do not "agree", then there are  possible sensor faults. Without additional knowledge to indicate which measurements are faulty, the uncertainty of the fused measurement increases. A n informal quantification of sensor "agreement" is given in this section. The practical benefits of redundant sensors are maximized when sensors use entirely different sensing mechanisms. If sensors are affected by the same environmental parameters then no reliability gains are made with respect to those parameters.  Consider two vision based sensors. Both sensors will yield  inaccurate results when lighting conditions change dramatically. However, i f one sensor was replaced with a non-vision based sensor, then a reliable measurement would be available in all lighting conditions. As described in Chapter 2, Nakamura [31] proposed the GRF method for multi-sensor robotic systems. This method is capable of fusing any number of «-dimensional measurements. It is a statistically based method that assumes measurement noise follows a Gaussian distribution and that no bias exists. Typically, automation workcells have a small number of sensors and redundant fusion is performed with a limited amount of data. In such cases a statistically based method does not perform satisfactorily and thus an extension to the GRF method is required that utilizes heuristic based information. The following discussion begins with some useful properties of ellipsoids and the development of the GRF method.  The shortcomings of this method are discussed and a Heuristic based Geometric  Redundant Fusion (HGRF) method is proposed as an extension to the GRF method. The H G R F method allows reliable redundant fusion of a limited amount of data that may contain a bias.  3.1.1 Some Properties of Uncertainty Ellipsoids The uncertainty ellipsoid uniquely and exactly parameterizes a Gaussian distribution. This is apparent when the uncertainty ellipsoid is viewed as a topographical contour line (or surface in higher dimensions) of a Gaussian distribution. Consider the two-dimensional Gaussian distribution shown in Figure 3.1 and given as follows:  21  Figure 3.1: Two-Dimensional Gaussian Distribution (mean =0, mean =0, a =l, a =l). x  y  x  y  A contour of the two-dimensional Gaussian curve of height h is given as ' x y 2  2  t  2<T 2<JI  1  2 +  (3.2)  x  e  h=  K  2na a x  y  After some manipulation the equation becomes x  2  a]  + ^ = -2\n(2hmr a )=c. a; T  x  (3.3)  v  Equation (3.3) can be recognized as a two-dimensional ellipsoid. With the contour height, h, specified the ellipsoid uniquely parameterizes the Gaussian distribution without approximation. 
The uncertainty ellipsoid matrix is a dyad [14]. Therefore, a similarity transformation can be utilized to express the uncertainty ellipsoid in any linear reference frame. For example, to express an uncertainty ellipsoid expressed in the principal axes frame, 3 , in another frame, 3 , the following similarity m  transformation would be utilized: a = R„ cr R , 2  2  m  m— i  m  (3.4)  T  m  P p — i  m  n  P '  v  '  where ,R is a rotation matrix defined between frame 3 - and 3 . Scaling of an uncertainty ellipsoid in k  ;  t  a frame of interest, 3 , requires applying the scaling matrix, „,S, of the following form to the uncertainty OT  ellipsoid:  22  0  5,  0  .s 2.-.  0  (3.5)  where s-, is the magnitude o f the scaling o f the ellipsoid in the direction o f the i' axis o f 3 h  an uncertainty ellipsoid expressed in 3  m  . Scaling o f  would take the form:  III  =,„S I"  CT  — It  III  (J.  —• I  „,S "'  (3.6)  The rotation and scaling operations can be combined as in the following example. A n ellipsoid, whose principle axes are rotated with respect to the world frame, 3 . , is to be scaled in the world frame and then U  expressed in another frame o f interest, 3,„. The transformation is given below as: <L m w =  m  3.1.2  w { ,, <l]  R  S  s  R  w  p  w)w  R P  ml  sT  R  (3-7)  •  Geometric Redundant Fusion (GRF)  The G R F method o f fusing  m uncertain //-dimensional data points, x. =[x, •••x„\,  is based on a  weighted linear sum o f the measurements as follows:  5  /  (3.8)  •  where x . is the fused value, and W-, is the weighting matrix for measurement /', x..  A p p l y i n g expected  values to (3.8) and assuming no measurement bias yields the condition in  (3.9)  1=1 For a given data set, the weighting matrix, W, and the covariance matrix, Q, are formed as follows: W = [W  X  W  2  .2  •••  W ], m  (3.10)  0 (3.11)  Q CT  0  2  w m  where ,cr. is the uncertainty ellipsoid matrix for measurement/. M  Nakamura proposed that minimizing the volume o f the fused uncertainty ellipsoid w o u l d result in more accurate and less uncertain information. The volume o f the fused uncertainty ellipsoid is given as: n/2  V =  ^dzt'WQW ), 7  r(l + m/2)  23  (3.12)  where T(-) is the gamma function. The problem simplifies to minimizing det(IVQW ) subjected to the T  constraint of equation (3.9). Using Lagrange multipliers the following result for the weighting matrices and fused certainty or variance is obtained: f  w, =  z u r ur. . A"'  m .  (3.13)  (3.14)  - H E U ) ~ J  In the one-dimensional case with two measurements the above result becomes: .2.. , _2. <T x + ,„crX| 2 2  w  t  2  (3.15)  2  (3.16) Examples of the above case, for the measurement data given in Table 3.1, are shown in Figure 3.2 below.  Measurement  Mean  Variance  Measurement 1 Measurement 2  7 12 8 7  0.25 1 0.2 0.25 1 0.2  G R F Result (1 and 2)  Measurement 3 Measurement 4  8  7.2  G R F Result (3 and 4) Table 3.1: Example Redundant Fusion Measurement Data.  24  Figure 3.2: One-dimensional fusion of two measurements using the GRF method. Top - spatially separated measurement means. Bottom - close proximity of measurement means.  While the GRF method handles the fusion of m measurements in w-dimensional space in an efficient manner, it does not include information about the spacing of the means of the m measurements in the calculation of the fused variance. That is, the magnitude of the spatial separation of the means is not used in the calculation of the uncertainty of the fused result. 
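As a concrete check, the short sketch below (not part of the thesis) evaluates the one-dimensional GRF equations (3.15) and (3.16) for the two measurement pairs of Table 3.1; the n-dimensional form follows from equations (3.13) and (3.14) in the same way.

```python
# Sketch: one-dimensional GRF fusion, equations (3.15)-(3.16),
# applied to the measurement pairs of Table 3.1.
def grf_1d(x1, var1, x2, var2):
    x_f = (var2 * x1 + var1 * x2) / (var1 + var2)   # equation (3.15)
    var_f = (var1 * var2) / (var1 + var2)           # equation (3.16)
    return x_f, var_f

print(grf_1d(7.0, 0.25, 12.0, 1.0))   # measurements 1 and 2 -> (8.0, 0.2)
print(grf_1d(7.0, 0.25, 8.0, 1.0))    # measurements 3 and 4 -> (7.2, 0.2)
# The fused variance is 0.2 in both cases, regardless of how far apart the means are.
```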
Therefore, the GRF method provides a fused result with identical uncertainty independent of whether the measurements have identical or highly spatially separated means. This is demonstrated by comparing the top and bottom graphs in the previous one-dimensional example, Figure 3.2. The GRF method is based on a Gaussian assumption. Therefore, its results are most reliable when a large population of measurements exists. A small population of measurements (e.g. two or three), as 25  would be expected in many automation workcell fusion applications, does not provide sufficient data to establish a Gaussian distribution. Therefore, when fusing a small number of data points, a fusion method based purely upon statistical quantities provides misleading results. This is evident in the case of highly separated measurement means, Figure 3.2. To overcome this problem it is proposed that a heuristic function be introduced to extend the G R F method. This heuristic will include information about the separation of the means of the measurements in the uncertainty of the fused result. As a result, the output is no longer purely statistically based, but can still provide a reasonable measure of the increase or decrease of uncertainty of the fused data.  3.1.3  Heuristic Specification  The desired heuristic would modify the G R F result so that reliable results are produced for all ranges of measurement disparity. The desired output of the heuristic is discussed below using four separate cases.  The heuristic output should smoothly transition from one case to another.  The heuristic  specification cases should be interpreted along a dimension of the measurement space. For example, a particular measurement may fall into Case 2 along one dimension and into Case 3 along another. Illustrations of the fusion of two two-dimensional ellipsoids with parallel principal axes have been included as descriptive aids.  In a general multi-measurement multi-dimensional space the following  discussion becomes difficult to intuitively interpret since the measurements' ellipsoids do not necessarily have aligned principal axes. Consider a set of m measurements with means, the ellipsoid,  m  erf, centered at  m  X., and associated uncertainty regions, S , based on t  x and described in frame 3 . t  m  Case 1: No measurement disparity along dimension d,  , i = \,..., m .  Heuristic Output: For this case the heuristic uncertainty region, (S/),/, should be equivalent to the uncertainty region generated by the GRF method, namely,  Figure 3.3: Case 1 - No Measurement Disparity.  26  (S/) i =(S ) , along dimension d. t  N  cl  Case 2: Measurements "agree" along dimension d, ( x.) e {s •) V/', /, j * 7*. m  t  /  Heuristic Output: The heuristic uncertainty region, (S^,,, should be smaller than the minimum sized measurement uncertainty region, (S„,J , along dimension d. Alternatively stated, in this case the d  uncertainty decreases.  Measurement 2  Heuristic R e s u l t \ /  /Measurement 1  Xf  1 1  % T T \ —  G R F Result  A—  /  S/— S2=S ,i n  n  Figure 3.4: Case 2 - Measurements "agree".  Case 3:  Measurements "disagree" along  dimension  d,  ( x.) i{s )\/i,j,i m  t  f  * j and  Heuristic Output: The heuristic uncertainty region, (Sj) , should be larger than the minimum sized lt  measurement uncertainty region, (Smujd, along dimension d. Alternatively stated, in this case the uncertainty increases.  Figure 3.5: Case 3 - Measurement's disagree 27  Case 4: Measurement error along dimension d, {S, \ n {s^) e OV/, /, / * _/ . 
t  Heuristic Output: The resulting fused uncertainty ellipsoid should encompass the entire range of the measurement ellipsoids along the dimensions of measurement error.  In this case the increase in  uncertainty is strongly indicative of measurement error.  Figure 3.6: Case 4 - Measurement error.  In summary, it is proposed that the G R F method be revised to compensate for measurements with varying levels of disparity, due to bias or error. Ultimately, the output of the heuristic should be an adjustment of the G R F method's fused uncertainty ellipsoid.  This adjustment should correctly  correspond of the level and direction of measurement uncertainty as reflected in the level and direction of disparity between the original measurements. The development of a heuristic method that possesses these properties is given in the following sections.  3.1.4 Heuristic Based Geometric Redundant Fusion (HGRF) The derivation of the fusion heuristic begins with the simple case of fusing two one-dimensional measurements. Then the method is extended to the fusion of m measurements that are w-dimensional. The overall approach employed in the following heuristic methods is as follows. Each step will then be explained in detail in the following sections. I) Calculate the Fused Mean, x , f  equations 4.8, 4.13 and 4.14.  and Fused Variance, crj,, using the G R F method given in This yields the fused result with no adjustments for  measurement bias or error.  28  2) Calculate the Measurement Spacing Vectors, v. , that quantify the vector distance between the measurements and the Fused Mean, x . f  The Measurement Spacing Vectors provide ah:  indication of the relative disparity of each measurement. 3) From the Measurement Spacing Vectors, v , the Separation Vector, k, is calculated. The Separation Vector quantifies the overall separation of the measurements with respect to the Fused Mean, x . The objective of this step is to quantify the level of disparity between all the f  measurements. 4) Apply a Uncertainty Scaling Function, f (k), to the Separation Vector, k, to determine the c  Scaling Factor, C(k). variance, a' , N  The Uncertainty Scaling Function determines how the G R F fused  should be adjusted.  This adjustment depends on the size of the Separation  Vector, k . 5) Apply the Scaling Factor, C(k), to the G R F fused variance, a  2  reference to get the H G R F fused variance,  , in an appropriate frame of  a .. 2  3.1.4.1 Measurement Spacing Vectors Figure 3.7 illustrates the Measurement measurement case.  In this case v  Spacing Vectors, v , for the one-dimensional, two  is one-dimensional (a scalar).  Figure 3.8 illustrates a twO-  dimensional, two measurement case. The Measurement Spacing Vectors represent the amount of scaling of the standard deviations of the measurement, a., required such that the uncertainty ellipsoids of each measurement approach the fused mean, x . f  In general, the required scaling is not uniform but instead is  an w-dimensional scaling that rotates and magnifies the ellipsoid. mean given by equations 4.8 and 4.13.  x  f  is defined as Nakamura's fused  Based on the above description, the Measurement Spacing  Vectors are defined using the following vector relation: x,=x.=a.v.+x..  (3.17)  Rearranging equation 4.17 yields the Measurement Spacing Vectors, v , for an «-dimensional space:  There is a one to one correspondence between Measurement Spacing Vectors and measurements. These vectors can be viewed as quantifing the distance of each measurement from the fused mean. 
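Since equation (3.18) did not reproduce cleanly here, the sketch below (not part of the thesis) states the form implied by equation (3.17), v_i = σ_i⁻¹(x_f − x_i), with σ_i taken as the matrix square root of measurement i's uncertainty ellipsoid matrix, and checks it against measurements 3 and 4 of Table 3.1 (fused mean 7.2).

```python
import numpy as np

# Sketch: Measurement Spacing Vectors, v_i = sigma_i^{-1} (x_f - x_i), where sigma_i
# is the matrix square root of the uncertainty ellipsoid (covariance) matrix of
# measurement i, as implied by equation (3.17).
def spacing_vector(x_fused, x_i, ellipsoid_i):
    w, V = np.linalg.eigh(ellipsoid_i)                 # ellipsoid_i = V diag(w) V^T
    sigma_inv = V @ np.diag(1.0 / np.sqrt(w)) @ V.T    # inverse matrix square root
    return sigma_inv @ (x_fused - x_i)

# One-dimensional check with measurements 3 and 4 of Table 3.1 (x_f = 7.2):
v3 = spacing_vector(np.array([7.2]), np.array([7.0]), np.array([[0.25]]))   # -> [ 0.4]
v4 = spacing_vector(np.array([7.2]), np.array([8.0]), np.array([[1.0]]))    # -> [-0.8]
```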
This distance measure is normalized by the size of the uncertainty ellipsoid in a multi-dimensional fashion.  29  This group of Measurement Spacing Vectors is used to determine the Separation Vector, k.  The  establishment of this vector is considered in the following section.  <J7  07 V/  0 XI "  '  T  Xfus  Vj  f X2  r  1  Figure 3.7: One-Dimensional Measurement Spacing Vectors.  Figure 3.8: N-Dimensional Measurement Spacing Vectors.  3.1.4.2 Separation Vector and the Scaling Reference Frame The Separation Vector, k , is a vector that compiles the individual contributions of the Measurement Separation Vectors, v . Thus k dictates the "stretch direction" and is used to determine the scaling applied to the GRF fused variance, <r . The Scaling Reference Frame, 3k, is the frame in which the GRF result is scaled. This will be explained further in this section. Figure 3.9 shows n Measurement Spacing Vectors, v , and the Separation Vector, k , for the three-dimensional measurement space case.  30  Figure 3.9: Measurement Spacing Vectors and the Separation Vector.  The components of the Separation Vector, k, and the Scaling Reference Frame, 3 , are determined k  using a variation of the Gram-Schmidt orthogonalization process [18] on the set of Measurement Spacing \ Vectors {v , / = 1 ...m}. The procedure is as follows: 1) The first axis of the Separation Vector, & , is set to the magnitude of the largest v as depicted in Figure 3.9. The following equation provides this result: = max(|v|)/ = l.../«.  (3.19)  2) The second axis of the Separation Vector, k is set to the largest projection of a v on a plane normal 2  to v . This is accomplished by the following equation: f  f *2  = max v.  v. • k, —i  —i  \  i = \...m,i±l.  *.  (3.20)  /  3) The remaining axes of the Separation Vector, k , are determined similarly according to the following equation:  31  k . = max  jti  ~J-  k,»k~  k.*k  l  - i  .1  /  ]  n L ^  -./-i  (3.21)  c  -•>-*  -./-I  where v has not been used previously to determine an axis of k . The Scaling Reference Frame, 3 , is defined by the set of orthogonal unit vectors along \k.\. k  If the  set of Measurement Spacing Vectors, {v , i = \ ...m), does not span the space of the measurements then the remaining components of k are set to 0 with their corresponding axis of 3 chosen to be orthogonal k  to the set of unit vectors \k }.  3.1.4.3 The Uncertainty Scaling Function The following discussion details the development of the Uncertainty Scaling Function, f (k). c  This  scalar function uses the information contained in the Separation Vector, k , to determine an appropriate Uncertainty Scaling Matrix, C = f (k), to apply to the GRF result. This function is developed to satisfy c  the heuristic specification of Section 3.1.3. The discussion begins with a review of some important details of the GRF method and then develops the Uncertainty Scaling Function for a simple case of two measurements of equal uncertainty. The function is then extended to the general case of multiple measurements. In the GRF method, the maximum decrease in data uncertainty occurs when the uncertainties of the input measurements are equal. The maximum decrease in uncertainty, for the one-dimensional, mmeasurement case, is by a factor of l/-Jm . This can be seen by setting cr, = <r = • • • = a 2  (3.14).  
On  the  other  hand,  when  all but  one  measurement  has  m  infinite  in equation uncertainty,  (<r • =o~ ,o~ ,...,o~ ,...,o~ = o o , r V / ) , then equation (3.14) returns a result with an uncertainty equal to 0  l  i  m  the minimum uncertainty of the input measurements, (a  N  a fused standard deviation ranging between <r of the measurements. Here, a  min  min  - er  min  ) . Therefore, the GRF method results in  and <x / V w , depending on the relative uncertainties mill  denotes the minimum standard deviation of the input measurements.  This result correctly applies to the specific case of equal measurement means. Consider Figure 3.10 - 3.13, cases A through D. Two one-dimensional measurements, x/ and x , of 2  equal uncertainty are being fused. The GRF method produces results with equal levels of uncertainty [p  N  = cr  niin  /V2) for all four cases, shown centered about x . N  32  f  x  w  Figure 3.10: Case A - Fusion of two equal measurements.  ^  CJ  i  ? X2  X,  Figure 3.11: Case B - Fusion of two measurements separated by one standard deviation.  J  •4  Yx  Xi  2  Figure 3.12: Case C - Fusion of two measurements separated by two standard deviations.  •<  6 T  J  1  1  ±  •  x,  Figure 3.13: Case D - Fusion of two highly separated measurements.  In case A, measurement x, and x are equivalent. The Separation Vector, k is 0. Both the Separation 2  Vector, k, and the Uncertainty Scaling Factor, C(k), are scalar in the one-dimensional case and are thus written as scalars in this section. According to Case 1 of the Heuristic Specification of Section 3.1.3 the  33  GRF method without adjustment is applied at k=0. Therefore the Uncertainty Scaling Matrix, C(k = 0), should be unity in this case. In case B, each measurement, x,-, is on the border of the other's uncertainty region, S^. From equation (3.18) and the algorithm in Section 3.1.4.2, k=0.5 for this case. This case represents the borderline between Heuristic Specification cases 2 and 3. That is, the measurements are between "agreement" and "disagreement". According to the Heuristic Specification, the results of fusion in this case should result in neither an increase or decrease in uncertainty of the measurements. Therefore, the GRF method provides a result that is too optimistic for this case. Applying a scaling of C(k = 1/2)= [V2~J to the GRF result for the fused uncertainty, c%, guarantees that the fused result will not be more certain than cr . min  In case C the measurement uncertainty regions, S only intersect at their borders. From Equation h  (3.18) and the algorithm of Section 3.1.4.2, k= 1 for this case. This case represents the borderline between Heuristic Specification cases 3 and 4. That is, the measurements are the point where if further separated then at least one measurement will be in error. According to the Heuristic Specification the result should include the entire region encompassed by the measurement's uncertainty ellipsoids. A scaling factor of C(£ = l)=[2V2] yields the desired result. Case D, where k>\, is representative of all cases where the uncertainty ellipsoids of the measurements do not intersect. This is equivalent to the Heuristic Specification case 4. This case indicates that there is an error with one or more of the measurements. Without additional information it is not possible to determine which measurement is in error. Therefore, the resulting fused uncertainty region should include the entire area that is spanned by the measurements and their uncertainties. 
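Because the printed form of equation (3.24) did not survive reproduction here, the following sketch (not part of the thesis) recovers the quartic branch numerically from the five conditions stated above — f(0) = 1, f'(0) = 0, f(0.5) = 1.5, f(1) = 2√2 and f'(1) = √2 — and attaches the linear branch used for k > 1. The exact printed coefficients of (3.24) may differ slightly from the fitted values.

```python
import numpy as np

# Sketch: fit the quartic branch of the two-measurement Uncertainty Scaling Function
# from its five stated conditions, for f(k) = a0 + a1*k + a2*k^2 + a3*k^3 + a4*k^4.
A = np.array([
    [1, 0.0, 0.00, 0.000, 0.0000],   # f(0)   = 1
    [0, 1.0, 0.00, 0.000, 0.0000],   # f'(0)  = 0
    [1, 0.5, 0.25, 0.125, 0.0625],   # f(0.5) = 1.5
    [1, 1.0, 1.00, 1.000, 1.0000],   # f(1)   = 2*sqrt(2)
    [0, 1.0, 2.00, 3.000, 4.0000],   # f'(1)  = sqrt(2), matching the slope of the k > 1 branch
])
b = np.array([1.0, 0.0, 1.5, 2.0 * np.sqrt(2.0), np.sqrt(2.0)])
a = np.linalg.solve(A, b)            # approximately [1, 0, 0.272, 5.355, -3.799]

def f_c(k):
    """Two-measurement scaling factor: quartic on [0, 1], linear beyond."""
    if k <= 1.0:
        return np.polyval(a[::-1], k)      # polyval expects the highest power first
    return np.sqrt(2.0) * (k + 1.0)        # sqrt(m)*k + sqrt(m) with m = 2
```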
The scaling factor, C(k), yielding such a result is derived for the one dimensional case as follows:  separation  f,case D  k(cr +a ) x  if  <T| =  2  cr + a  +  a then o~  N  2  x  =-j=rcr,  = (k +1)0-,  V2  2  and  = {k + \)d2a  N  :. C(k) = -J2k + V2 for k > 1  The above result is applicable for case D where k > 1. 34  (3.22)  Combining cases A through C provides three points that the Uncertainty Scaling Function, C(k)= f (k), should pass through for 0 < k < 1. Requiring that the slope off(k) c  c  be 0 at k=0 and Jl at  k=\ (so that f (k) is continuous) provides two more constraints forf (k) on 0 < k < 1. Fitting a fourth order c  c  polynomial through these points subject to the constraints yields: /.(jfc) = C = l-l.l005* +8.1005* -5.1716A: , 0<>fc < 1 2  3  (3.23)  4  c  Unfortunately this curve has a slight dip below 1 around k=0. This is not acceptable since a result more certain than the GRF result is not desired. A slight modification of case B such that C = [1.5] rather than C = [2V2~J=[l.4l] yields a curve that is monotonic. Therefore, the Two Measurement Certainty Scaling Function shown in Figure 3.14 is given by:  (3.24)  0  0.5  1  Separation Parameter (K) Figure 3.14: Uncertainty Scaling Function for Two Measurements.  35  1.5  3.1.4.3.1 The General Uncertainty Scaling Function The previous section developed the Uncertainty Scaling Function, f (k), for the case of fusing two c  measurements.  Fusing higher number of measurements, m, in a one-dimensional space requires  developing a similar curve to that of equation (3.24), according to the following specifications: In the region of 0 < k < 1 the curve must pass through the points: 1)  / ( * = 0)=l,  (3.25)  2)  f (k = 0.5) = 4m~,  (3.26)  3)  f {k = \) = 24m~,  c  c  (3.27)  c  subject to the following constraints:  dk  = 0,  ]  2)  ;  (3.28)  AO  -  *  m.  dk k=\  .  (3.29)  These three check points ensure, given the separation of the measurements, that the resulting fused uncertainty does not provide an unrealistically optimistic result. Fitting a fourth order polynomial to the above constraints yields the following curve: f (k)=\ + {j4m-n)i -(7V^"-18)c  + (2yfm~-$)k , 0<k<\.  3  2  4  c  (3.30)  In the region of k > 1 the curve must be f (k) c  where m is the number of measurements.  = 4mk + 4m~,  (3.31)  Combining equations 4.28 and 4.29 yields the General  Uncertainty Scaling Function: f(k) = j  1+  [  ( V ^ - l l ) t - ( 7 V ^ - 1 8 ) : + (2V^-8): , 7  2  3  *Jmk + 4m,  4  0<£<1  ( 3  3  2  )  k >l  This function ensures a smooth degrading of the level of fused measurement's uncertainty. A plot of this curve for different values of m is given in Figure 3.15.  36  General Uncertainty Scaling Function  0.5 1 SeDaration Vector, k  1.5  Figure 3.15: The General Uncertainty Scaling Function.  This function has been developed using one-dimensional measurements but the Uncertainty Scaling Function,//^, is a useful heuristic for fusion in any number of dimensions. Application of this heuristic • to fusion of higher dimensional measurements is discussed in the following section.  3.1.4.4 The Uncertainty Scaling Matrix For the one-dimensional case the Uncertainty Scaling Matrix, C(k), is a scalar and the fused standard deviation is calculated via the following equation: a =C(k)a . f  (3.33)  N  Extending the above one-dimensional case to handle //-dimensional measurements involves considering the scalar quantities as //-dimensional vector quantities. 
In the case of //-dimensional fusion the GRF result requires scaling in all //-dimensions. The General Uncertainty Scaling Function,  f (k), c  derived for the one-dimensional case is applied to the //-dimensional case by composing a diagonal scaling matrix as follows: C(k)  =  '•'  {  l  ( ^ - K - ( ^ - ' K ( ^ " ~ K ' 0<*,< { 4mk + V///, k,>\ +  7  n  2  7  8  j  37  3 +  2  8  4  (3.34)  where each dimension of the G R F result is scaled according to the corresponding dimension, i, of k. Due to the non-linearity of the Uncertainty Scaling Function, f .(k), the Uncertainty Scaling Matrix, L  c(k), is not rotationally invariant. Therefore, the reference frame in which k is expressed determines the quantification of the Uncertainty Scaling Matrix, C(k), and, thus, the size of the fused ellipsoid. Therefore, it is necessary to determine the Uncertainty Scaling Matrix, C(k), in a consistent manner such that the result is independent of any particular reference frame. This is accomplished by applying the Uncertainty Scaling Function, f .(k), when k is expressed in the Scaling Reference Frame, 3k, defined t  by the procedure in Section 3.1.4.2. Therefore (3.32) becomes:  (3.35)  Then the Uncertainty Scaling Matrix can be applied to the GRF uncertainty ellipsoid result through the following similarity transform: «-i S = i ^* •* Cfe ^ w  where a w]  ^  •  w 2 S  ^ •  Cfe )- / Z ^ ,  w 2  is the H G R F ellipsoid matrix expressed in 3 . , R  fc  f  W  j  j  is the rotation matrix from frame 3j to  frame 3j, c{k ) is the Uncertainty Scaling Matrix expressed in 3 , and ,,cr k  (3.36)  A  k  k  H  2  is the G R F ellipsoid  matrix expressed in 3 2 W  The H G R F method, as developed above, is based upon the l a uncertainty ellipsoid.  A similar  derivation could be made based upon the a a uncertainty ellipsoid which results in a horizontal scaling of a of the Uncertainty Scaling Function, f iak). c  3.1.5 Numerical Example Consider the fusion of three two-dimensional redundant measurements. The measurement data is included in Table 3.2. X and Y are the sensed position of an object in a plane with respect to the world frame, 3 . The scalars, a W  ex  and a , are the sizes of the principal axes of the uncertainty ellipsoids with ey  respect to the ellipsoid axes frames, 3 i , 3 2 , and 3 3 - The rotation angle, 6, is required to align 3 i with : 3 . A graphical representation of the measurement ellipsoids is given in Figure 3.16. e  e  e  e  W  Measurement Number 1 2 3  X (mm)  Y (mm)  7 6 8  8 7 8  a  ex  (mm)  1 2 1  Table 3.2: Example Redundant Measurement Data.  38  a  ey  (mm)  1.5 1 1  0 (degrees)  30 80 0  10  HGRF Result Meas  E E,  Meas #3  c g '55  o CL  GRF Result  6  8  10  X Position (mm)  Figure 3.16: Redundant Fusion Numerical Example.  Using equations (3.13) and (3.14) the fused mean and GRF variance are calculated to be: -  x  =[6.9540 7.7809],  x  2 w —* N  0.3531  0.0213  0.0213  0.5401  (3.37) (3.38)  The GRF result is shown in Figure 3.16 as the dotted line. Note that the uncertainty ellipsoids of the measurements must be first expressed in 3 before applying equation (3.14). Next the Measurement W  Spacing Vectors, v., are calculated using equation (3.18) for each measurement. This results in the following: ^v, =[-0.0105 -0.1577],  (3.39)  v =[ 1.0066 0.4842],  (3.40)  v =[-1-0460 -0.2191].  (3.41)  w  w  2  3  Using the algorithm in Section 3.1.4.2 and the Measurement Spacing Vectors above, the Separation Vector is determined to be, , £ = [1.1170 0.2560].  
(3.42)  This vector, k, is used to determine the Uncertainty Scaling Matrix according to equation (3.35) k  where m, the number of measurements, is 3. The Uncertainty Scaling Matrix, expressed in frame 3 , is k  then, 39  3.6667  0  0  1.1527  (3.43)  This scaling is applied to the GRF variance, equation (3.38), using equation (3.36). This results in the H G R F result, expressed in 3 , as follows: W  2 I V — /  4.2576  2.1007  2.1007  1.8348  (3.44)  This uncertainty ellipsoid is shown in Figure 3.16 as the solid line.  The H G R F result is more  representative of the level of uncertainty of these measurements given their disparity of position, than the result produced by the GRF method. Further results of this method are presented in Chapter 5.  The H G R F method, as developed in this section, suitably adjusts the GRF method's result based on the level of disparity between the measurements.  The measurement disparity is quantified using the  Separation Vector, k, as calculated from the Measurement Spacing Vectors, v . The heuristic based Uncertainty Scaling Function, f (k), c  smoothly degrades the uncertainty of the G R F result as the level of  measurement disparity increases, providing the H G R F result.  This method is limited to redundantly  fusing measurements whose uncertainty can be adequately represented as ellipsoids and is specifically designed for fusing a small number of measurements.  3.2 Complementary Data Fusion Complementary sensor measurements, in contrast to redundant measurements, acquire knowledge in spaces that do not overlap.  Complementary measurements are useful for perceiving features in a  workspace that would not be perceivable by using only one sensor [25].  A variety of application  motivated methods for fusing complementary sensor data have been proposed in the literature. Reference [25] provides a comprehensive survey in this area. However, no framework has been proposed that allows a general architecture based application of complementary sensor fusion to automation workcell implementations. The development of such a framework would be beneficial to industrial implementations and future research. The following section presents some geometric based mechanisms for fusing complementary sensor data, directed towards initiating a general architecture based treatment, of the subject. Three specific cases of fusing complementary measurements are discussed in this work: (i) combining sensor measurements of the same feature taken in different dimensions, (ii) modifying the uncertainty of a measurement based on another sensor's measurement, and (iii) combining two sensor measurements taken in the same dimensions but over different ranges of sensing. There are many other types of complementary data and the implementation of complementary fusion is unique for each sensory application. The following sections outline a framework in which the specific complementary fusion  40  functions of dimensional extension, certainty modification and range enhancement can be implemented. The following mechanisms are applied within the ELD Architecture, as described in Chapter 4.  3.2.1 Dimensional Extension The Dimensional Extension function combines multiple measurements, taken of the same feature in different dimensions, into a higher dimensional representation. An example of the application of this function occurs when the position of an object on a plane is measured by two laser range finders, each acting on different axes as shown in Figure 3.17. 
These are both one-dimensional measurements and their sensing axes are not necessarily orthogonal. The two-dimensional position of the object is determined by combining the measurements and their associated uncertainties through dimensional extension.  Combined measurements  3v  Figure 3.17: Dimensional Extension of two one-dimensional measurements  The following approach is based on [31] and is reformulated within the context of the Dimensional Extension function. Again this derivation is based upon an assumption that the uncertainty of the measurements and the combination of the measurements can be adequately described using Gaussian distributions. Consider the following function that maps measurements in one space to another space: i =/y>  (3-45)  where x is an w-dimensional column vector representing an n-dimensional measurement expressed in an orthogonal reference frame, and y is an w-dimensional column vector composed of a number measurements, that combined, span the space of x.  The basis vectors of the space of y are not  necessarily orthogonal. A Gaussian disturbance is added to y such that  41  where y is the true value and dy is the disturbance. If by is small enough then (3.46) is approximated by x = f(l)+j[y)Sy  ,  (3.47)  where j ( y ] is the Jacobian matrix of /(•) with respect to y given as  Ay =  df dy  (3.48)  The mean of measurement x is determined using equation (3.45) as follows:  £ bl=/(>:) > =£  (3.49)  and the variance is  v\x\ = £^(x - x | x - xj ~^ = Jq\J , r  (3.50)  where cr is the measurement uncertainty matrix or uncertainty ellipsoid matrix as defined in the previous section. These results allow the determination of the mean and uncertainty of the measurement in the space of x through the definition of /(•) and its Jacobian, ./(•). Consider again the example illustrated in Figure 3.17. For this problem /'(•) can be determined from geometry as, -X,  — (m - ml  = f  *2_  +d )  2  f -  2d  '  y  m  2  J  2  '  2  (3.51)  -j={in - in + d )i + »i, 2  2  2  V2  From (3.48) the Jacobian is,  J V  ffl,  m. d  f m, )  ffl,  d  -^(m -m +d )^+\ 2  2  x  ^(m -m +d Y  2  2  2  2  2  (3,52)  2  _ V2 V2 The mean of the laser range finder measurement can be expressed in the world frame, 3w, by using equation (3.51) while the uncertainty ellipsoid can be determined using the following equation: a  -JCT,J . T  (3:53)  where  2,= and cr  HI|  and a  mi  0  at  are the standard deviations of each of the laser range finder measurements. 42  (3.54)  3.2.2 Uncertainty Modification Consider the situation where a two-dimensional position sensor is based on processing camera image data. The performance o f such a sensor is very dependent on light levels. If the workspace is too dark or light then predefined thresholds cease to be valid and the measurement's uncertainty increases.  I f an  illuminometer, measuring the light level o f the workspace, is available, the uncertainty o f the position measurement can be adjusted so that it is more appropriate for the present operating conditions. A relationship between a sensor measurement o f an operating parameter and the uncertainty o f another measurement can be composed based on expert knowledge. A n example o f such a relationship for the previous example is given below in Figure 3.18.  This relationship can be generated using analytical,  simulated and/or experimental results. The horizontal axis is the value o f the sensed operating parameter. 
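The mechanism generalizes directly: given any mapping f(·) from the stacked one-dimensional measurements to the Cartesian position, together with its Jacobian, equations (3.49) and (3.50) propagate the mean and the ellipsoid. The sketch below (not part of the thesis) uses a deliberately simplified, hypothetical geometry in which each sensor reads the projection of the position onto its own, not necessarily orthogonal, axis, so that f is linear and the Jacobian is constant.

```python
import numpy as np

# Sketch: dimensional extension per equations (3.49)-(3.50) and (3.53)-(3.54),
# for a simplified, assumed geometry: y = A x, hence x = f(y) = A^{-1} y and J = A^{-1}.
u1 = np.array([1.0, 0.0])                                            # sensing axis of sensor 1 (assumed)
u2 = np.array([np.cos(np.radians(60.0)), np.sin(np.radians(60.0))])  # sensing axis of sensor 2 (assumed)
A = np.vstack([u1, u2])

y = np.array([3.0, 2.5])                   # the two one-dimensional measurements (assumed values)
sigma_y = np.diag([0.1**2, 0.2**2])        # their variances, as in equation (3.54)

J = np.linalg.inv(A)                       # Jacobian of f
x_mean = J @ y                             # combined two-dimensional mean, equation (3.49)
sigma_x = J @ sigma_y @ J.T                # combined uncertainty ellipsoid, equation (3.53)
```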
The vertical axis is the Uncertainty Adjustment Factor, a, which is the factor that the uncertainty o f the sensor measurement should be scaled by due to the present operating conditions. Similarly, Mahajan et al. [26] have proposed a fuzzy logic based method that modifies the strength o f a sensor's contribution using expert knowledge and complementary sensor measurements.  This method was discussed in  Chapter 2.  Light level, L Figure 3.18: Certainty modification based on expert knowledge. The Uncertainty Adjustment Factor, a, is applied to the uncertainty ellipsoid matrix using the following equation: 2  =AaA  r  43  =AaA,  (3.55)  where a . A(/  is the adjusted uncertainty ellipsoid for the measurement,  a is the uncertainty ellipsoid of  the measurement, and A is a diagonal matrix formed from the Uncertainty Adjustment Factor, a, as follows: 4a  0  •••  A=  (3.56)  , A G R'  1  0  ••• 4a  where n is the dimensionality of the measurement. Equation (3.55) must be applied in the principle axes reference frame of the uncertainty ellipsoid of the measurement, a , so that the scaling is uniform and no rotation is applied to the uncertainty ellipsoid, a is diagonal in this frame and therefore (3.55) can be simplified to  0  0 (3.57)  3.2.3 Range Enhancement As an example, one can consider an application where an overhead image is required to track an object's motion throughout the workspace. The required image resolution and size of the workspace make it impossible for a single camera to perform adequately. Multiple cameras can be used where the fused images cover the entire workspace, as shown in Figure 3.19. This is an example of complementary fusion for the purpose of sensory range enhancement. In order for range enhancing complementary data to be fused, the measurements must be taken in common dimensions and the individual sensing ranges must not overlap. If there is a common region, then the data in this region must be fused as redundant data, using the H G R F method presented previously. Furthermore, the complementary data must be expressed in the same format. Again, considering the multiple camera example, i f the cameras view the world at different scales then a direct combination of the images is not possible. First, the scales of the image must be matched and then complementary fusion can be performed.  44  Figure 3.19: Camera image range enhancement set-up.  As a further example, consider a laser range finder and a linear potentiometer that sense the position of a trolley that moves along a short 200 mm linear track. Both of these measurements are one-dimensional and are valid over different ranges. The laser range finder has a range of 50 mm and senses the distance from 150 to 200 mm. The linear potentiometer has a range of 175 mm and senses the position from 10 to 150 mm. This is shown in Figure 3.20.  T r o l l e y R a n g e o f M o t i o n (0 to 2 0 0 m m )  Potentiometer S e n s i n g R a n g e (0 to 1 7 5 m m )  Figure 3.20: Range Enhancing Complementary Fusion Example.  If the laser range finder does not report a valid measurement (i.e. there is no object surface within the 50 mm sensing range) then the potentiometer measurement is used. If the potentiometer measurement is 45  out of range then the laser range finder measurement is used. If both measurements are in range then the data is redundant and the proposed redundant fusion method is used.  3.3  Projection  This transformation is the projection of an «-dimensional ellipsoid into an (w-l)-dimensional ellipsoid. 
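Before continuing with the range-enhancement example, the uncertainty-modification scaling of Section 3.2.2 can be sketched in a few lines (not part of the thesis). The lookup from light level to the adjustment factor a stands in for an expert-knowledge curve of the kind shown in Figure 3.18, and the particular numbers are assumptions.

```python
import numpy as np

# Sketch: uncertainty modification, equations (3.55)-(3.56). The adjustment factor a
# scales the measurement ellipsoid uniformly; a = 1 leaves it unchanged, a > 1 inflates it.
def adjust_uncertainty(ellipsoid, light_level):
    a = 1.0 if 200.0 <= light_level <= 800.0 else 4.0   # hypothetical Figure 3.18-style lookup
    A = np.sqrt(a) * np.eye(ellipsoid.shape[0])         # equation (3.56)
    return A @ ellipsoid @ A.T                          # equation (3.55)

sigma = np.diag([0.5, 0.8])                             # camera position ellipsoid (assumed, principal frame)
sigma_adjusted = adjust_uncertainty(sigma, light_level=950.0)   # too bright -> inflated by a = 4
```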
Additional projections can be used to further reduce the dimensionality of the data. For example, one can consider a three-dimensional measurement of the end-effector position of a three degree-of-freedom Cartesian manipulator. This measurement is used as the feedback signal for the one-dimensional axes controllers. However, the controllers only require the dimension that pertains to them and therefore the data must be projected onto the proper axis. Defining the projection as the total range of the ellipsoid along the dimension of interest is a conservative definition. There is a loss of information when performing this operation. This is most apparent when there is a strong correlation between two dimensions, i.e., the ellipsoid's principal axes are 45 degrees from the axes of interest. For example, the two-dimensional ellipsoid is projected onto the axes shown in Figure 3.21 and then a subsequent extension operation is performed to reconstruct the twodimensional ellipsoid. It can be seen that the reconstructed ellipsoid is larger than the original and therefore some knowledge has been lost in the projection operation. x  2  Mi  Figure 3.21: Projection and reconstruction of a two-dimensional ellipsoid.  A search for a suitable mathematical method to accomplish the projection function was unsuccessful and therefore the following method is included for completeness. Projecting the ellipsoid into a lower dimensional space requires finding the maximum and minimum values of the ellipsoid along the dimensions of that space. Determining the maximum and minimum 46  values of an ellipsoid in a dimension is accomplished by taking the partial derivatives of the ellipsoid equation and solving for the minima as follows. The equation of an w-dimensional ellipsoid is given as: <P\,\ ••• <Pu  1, JPn.\  (3.58)  <Pn,,_  where the square matrix is the inverse of the covariance matrix, a. Note that the origin of the ellipsoid is at the origin of the reference frame that this equation is expressed in. Multiplying the matrices in (3.58) yields the following equation for a rotated «-dimensional ellipsoid: ~'  +  + x <p , )+x (x ip n  n l  2  l  +--- + x <p, )+--- + x ( (p  ia  n  K2  n  X]  ln  + ... x „ ^ „ „ ) = l . +  (3.59)  Taking the partial derivative of (3.59) with respect to x , —— , gives the following equation: dx k  k  \<Pk,i + 2<Pk,2 +••• + (*.0>u + 2<P2, +••• + <P ,  x  x  2x  x  k  k  Noticing that  k k  +••• + x„<p, )+••• Kk  + x (p n  nk  = 0 . (3.60)  . = (Pj, dividing by a factor of 2 and putting (3.60) into matrix form yields:  ;  <Pk,  <pk,k = 0.  i>  (3.61)  <Pk.n  This is the equation for the partial derivative,  dx.  of an «-dimensional ellipsoid. To determine the  range of the ellipsoid in dimension /, one can take the partial derivative with respect to every dimension except /. This results in a system of n-l equations of the form of (3.61) as given below: <P i.i  <Pi-\,\ <PM,\  [x,  = [0 -  0].  (3.62)  Solving this system of equations for x, results in n- \ equations of the form X ; = (3,-X, + bj  where a-, and 6, are constants.  (3.63)  Substituting these equations into (3.59) yields the solution for X/ and  produces the minimum and maximum values of the ellipsoid on dimension /. Substituting the minimum and maximum values into (3.63) results in the coordinates of the minimum and maximum points of the ellipsoid along dimension /. The above derivation needs to be performed for each dimension of ellipsoid for which a solution is desired.  
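One compact way to carry out the extremum search described above (not taken from the thesis) uses the fact that the stationarity conditions of equation (3.62) are satisfied by the l-th column of the covariance matrix, rescaled to lie on the ellipsoid surface; its l-th coordinate is then the maximum extent along dimension l.

```python
import numpy as np

# Sketch: touching point of the ellipsoid x^T Phi x = 1 (Phi = inverse covariance)
# along dimension l. The l-th column of the covariance matrix satisfies every
# partial-derivative condition of equation (3.62); rescaling places it on the surface.
def extreme_point(cov, l):
    col = cov[:, l]
    return col / np.sqrt(col[l])

# Three-dimensional ellipsoid used in the numerical example that follows:
cov = np.array([[ 2.0,     0.0,    -0.7071],
                [ 0.0,     2.0,     0.7071],
                [-0.7071,  0.7071,  2.0   ]])
p1 = extreme_point(cov, 0)   # p1[0] = 1.414 -> max(x1), with x2 = 0 at that point
p2 = extreme_point(cov, 1)   # p2[1] = 1.414 -> max(x2)
```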
47  For a two-dimensional ellipsoid the equations for the minimum and maximum values of both dimensions are: -2 \ max(x, ) , m i n ( x , ) = x i ± <P\\  (3.64)  <Pl2  max(x ), min(x ) = x ± <Pn 2  where  2  ( x i , X 2 ) is the center of the ellipsoid.  (3.65)  2  The following are the similar equations for a three-  dimensional ellipsoid: max(x,), min(x,) = xi ±3 Jcp ^ -<p^(p >  (3.66)  max(x ), min(x ) = x ± B^cp^ -g> (p ,  (3.67)  max(x ),min(x ) = X3 ±fijcp  (3.68)  2  22  2  2  2  n  -(p <P ,  2  3  3  u  2  22  n  where  B=  2cp cp cp +cp <Pfj (Pn<Pu - <Pu<P <Pii <Pv<Pn +  n  2J  u  +  22  (3.69)  22  In order to project an ^-dimensional ellipsoid into («-l)-dimensional space the coordinates of the maximum (or minimum) points in each dimension except the one being collapsed are required. The following expressions can be used to determine the con-esponding coordinates of the maximum and minimum points of a three-dimensional ellipsoid: l  -XI 2 L.V]. =max(.V| . ),min(.T|)  (3.70)  ±  =  3 I.Vi=max(.V| ) , m i n ( . r j )  = X  <Pll<Pl2 -<P (Pn)  ±  3  22  V^2 3 - <p <p 2  x =max(.v  2  2  ),min(x )  = Xl ±  33  P{<Pn<Px  (3.72)  <Pis<P\\  2  -<P\\<Pn)  = X3 ±  3 I .r =max(.r, ),min(.v ) 2  (3.71)  22  (3.73)  2  ' I .V3=max(.t3 ),min(.t3)  = x, +  0{<P <P -<P22<Pn) l2  2J  (3.74)  <P\\<P22 = X2  2 I ,v =max(jr ),min(jr ) 3  3  -<P\\<P2i)  4 9>\2 -9\\<Pi2  3  48  (3.75)  Once the coordinates o f the maximum (or minimum) points are determined an (w-l)-dimensional ellipsoid is fit to these points by using equation (3.58) together with an expression for the maximum Or minimum value along one o f the projected dimensions such as one o f equations (3.63) to (3.68). A threedimensional ellipsoid projected into a two-dimensional ellipsoid is shown below in Figure 3.22 and a numerical example follows.  Original Measurement  Projected Measurement  X|  xi,,  X|,i  Figure 3.22: The projection of a three-dimensional ellipsoid into a two-dimensional ellipsoid.  Consider a three-dimensional ellipsoid, centered at the origin, with the following uncertainty ellipsoid matrix: 2  0  -0.7071  0  2  0.7071  •0.7071  0.7071  2  (3.76)  The inverse o f this matrix is taken resulting in the ellipsoid equation in the form o f (3.58) as follows:  X\  X} 1  Xi  0.5833  -0.0833  0.2357  -0.0833  0.5833  -0.2357  0.2357  -0.2357  0.6667  = 1.  (3.77)  Projecting the ellipsoid onto the JC/-*.? plane requires the maximum points in the x\ and X2 directions. From equations (3.66), (3.67), (3.70) and (3.72) the points are: maxO,) = 1.4143,  (3.78)  m a x ( x ) = 1.4143.  (3.79)  2  49  ,,=U143 ~  2  X  1.r =1.4143  1  0  (3.80)  '  = 0.  (3.81)  2  Substituting these points in the general equation o f a two-dimensional ellipsoid, as is given in equation (3.58), provides two equations.  Using equation (3.64) provides a third equation that allows the  parameters o f the two-dimensional ellipsoid equation to be solved as follows: I1 1 Ix,  |0.50 0 2 1 x 0 0 .50  1 2  (3-82)  1.  The corresponding uncertainty ellipsoid matrix is given as: "2 0~ w 2D = a  0  (3.83) •  2  This particular example results in a circular ellipsoid, however this is not generally the case.  3.4 The Ellipsoid Representation and Non-linear Transformations Consider the situation where a laser range finder is used to measure the position o f an object, as is depicted in Figure 3.23. The measurement consists o f the distance, r, and the angle o f the laser from a known reference, 6.  Figure 3.23: Laser Range Finder Measurement.  
A Gaussian distribution is assumed to describe the uncertainty of both of these variables. Therefore, in the (r, θ) space an ellipsoid accurately describes the measurement uncertainty. However, the information is needed in (x, y) Cartesian space and therefore the following transformation is applied:

r = \sqrt{x^2 + y^2},  (3.84)

\theta = \cos^{-1}\left(\frac{x}{\sqrt{x^2 + y^2}}\right).  (3.85)

This transformation is non-linear and therefore the shape of the two-dimensional Gaussian surface is not maintained in the (x, y) space. The equation of the two-dimensional Gaussian expressed in (x, y) space is given below and the distribution is shown in Figure 3.24.

g(x, y) = \frac{1}{2\pi\sigma_r\sigma_\theta}\exp\left(-\frac{\left(\sqrt{x^2+y^2}-r_{mean}\right)^2}{2\sigma_r^2}-\frac{\left(\cos^{-1}\!\left(x/\sqrt{x^2+y^2}\right)-\theta_{mean}\right)^2}{2\sigma_\theta^2}\right).  (3.86)

Figure 3.24: Measurement Uncertainty in (x, y) Space with Large Uncertainties (\sigma_r = 1, \sigma_\theta = \pi/4, r_{mean} = 10, \theta_{mean} = 0).

Clearly, an ellipsoid representation of the uncertainty of this measurement in (x, y) Cartesian space is not valid for large uncertainties. An ellipsoid representation can be used as an approximation, however, when the measurement uncertainty is small as is the case in Figure 3.25.

Figure 3.25: Measurement Uncertainty in (x, y) Space with Small Uncertainties (\sigma_r = 1, \sigma_\theta = \pi/12, r_{mean} = 10, \theta_{mean} = 0).

Therefore, caution must be exercised when applying non-linear transformations to data represented using uncertainty ellipsoids.

3.5 Summary

In this chapter a redundant fusion method, referred to as the HGRF method, has been presented that provides reliable results for all measurement cases. The method is based upon the GRF method [31] and assumes that the ellipsoid representation adequately describes the uncertainty of the measurements.

Further, three specific cases of complementary data fusion have been presented in this chapter. A dimensional extension function has been presented that is useful for combining measurements taken of the same feature in non-overlapping measurement spaces. An approach has also been presented where a sensor's measurement can be used to modify the uncertainty of another's measurement. Additionally, a method for combining different measurements taken in non-overlapping ranges in an overlapping measurement space has been presented. Finally, a method for projecting an ellipsoid into a lower dimension has been developed.

All the methods presented in this chapter are applicable to the fusion of measurement data taken in spaces where the ellipsoid representation suitably specifies the uncertainty of the data. Caution should be exercised when applying these methods in non-Cartesian measurement spaces and when using non-linear transformations between spaces. In such cases the ellipsoid representation is an approximation that may or may not be acceptable to the application.

Chapter 4

4 The Encapsulated Logical Device Architecture

The Encapsulated Logical Device (ELD) Architecture is a framework developed to improve the design and enhance the operational reliability of industrial automation workcells. The ELD is based upon the LS concept and includes additional functionality making it a versatile device suitable for industrial automation workcell applications. The ELD Architecture provides a suitable framework in which the fusion mechanisms developed in Chapter 3 can be implemented. While this thesis focuses on industrial automation workcells, this work could be extended and applied to other systems such as autonomous robotics and distributed systems.

This chapter describes the functionality of the ELD object and then details the ELD Architecture.
A short discussion explaining the role of developers and users of this technology is included at the end of this chapter. Examples of implemented ELDs and an ELD Architecture are included in Chapter 6.  4.1  The Encapsulated Logical Device  This section begins with a specification for the design of the general ELD object. This encompasses design principles, functions and data that are common to all ELDs. Following this, the details of the components of the ELD object design are discussed.  4.1.1  Specifications  Based on the research discussed in Chapter 2 the following specifications are proposed for the design of the ELD object:  53  1) Each ELD has a common framework. This is a fundamental property of the LS resulting from OOD [37][11]. A common framework allows designers to easily implement and modify all ELDs. A l l common features of the E L D object should be derived from a common class. This specification should also be maintained in cross platform design, where possible.  This would require developing an E L D base class in every platform  supported. 2) Each ELD possesses all required knowledge. There should be no centrally stored information used by multiple ELDs. Again this is founded in the LS design [11]. Each E L D should possess all information that it requires to perform its tasks. Some information, such as sensor input, can be received through communication channels. 3) Each ELD only has knowledge relevant to the scope of that ELD. A l l knowledge that is contained within an E L D should be relevant to the functions of that E L D . A n E L D should have no knowledge of another E L D ' s functions or data.  ELDs are unaware of the  internal workings of all other ELDs. This is again included in the LS design [11]. 4) Each ELD is capable of being a stand-alone Sensor-Actuator Feedback Control Loop. Each E L D should have sensor input and the ability to process this input. Further, each E L D should be able to supply a controlling signal to an actuator. Budenski and Gini in [8] proposed this concept. Actuators are not required in every E L D , however a sensor is required. Further, the E L D should possess all high-level intelligence required for independently reliable operation. 5) All data is represented as uncertain. A l l sensory and processed data within the E L D should be represented as uncertain. Also, all data processing within the E L D should consider the level of certainty of the data used.  This is done to  increase the operational reliability of the automation workcell. The importance of this concept was . stated in [13]. Many approaches to uncertainty representations have been proposed as was discussed previously in Chapter 2. 6) Each ELD can have many inputs and many outputs. Again this is based on the original LS specification [11]. 7) ELDs are only able to communicate with ELDs that are directly connected. ELDs, as with LSs [11], should not be able to send messages or data to another E L D that is not directly connected to it. ELDs should not even be aware of the existence of unconnected ELDs. However, communication between indirectly connected ELDs can occur through a chain of connected ELDs. For example, a high level E L D could request data from its input ELDs, which in turn request data from their input ELDs. The data is thus passed through the chain.  54  The following specifications have not been explicitly stated in other architecture specifications in the literature, however, the concepts are not novel. 
These specifications are specific to the implementation of the E L D object.  8) Communication between ELDs can be commands and responses. ELDs are able to send commands to ELDs that provide them input. In reply, the E L D that provides the input sends a response at the request of the commanding E L D . This is the normal direction of communication between ELDs. 9) Communication between ELDs can also be requests and replies. Requests and replies, as they are called, are in the opposite direction as commands and responses. A n E L D can issue a request to an E L D to which it supplies output.  A reply is in turn sent to the  requestor. This direction of communication is required for specific implementation circumstances as is discussed further in the Communication section. 10) ELDs can be implemented on different platforms. The E L D design methodology should not be platform specific.  However, support of different.  hardware and software platforms requires developing E L D object programming code for that platform.  Reliable communication channels should exist on and between each platform for  connecting ELDs. 11) Each ELD is an independent process. Each E L D should be implemented as a separately executing entity. This would allow each E L D to be executed on separate processors, i f available. Thus, the possibility of parallel processing would exist, enhancing the speed of operation. However, ELDs should be implemented in a fashion allowing a single processor system to be used as well. 12) A "drag and drop" Graphical User Interface (GUI) design can be used to implement core ELD functionality. To allow for rapid development of an E L D Architecture application, using a GUI tool, a uniform and sufficiently general design should be adopted for the E L D . This is facilitated by uniform data and functions and systematic naming conventions.  4.1.2 ELD Components The E L D has been implemented according to the specifications listed in the above section. The O O D approach was utilized, which ensures that specifications 1 through 3 are met. implemented has a modular and self-contained common framework.  Therefore, each E L D  Figure 4.1 is a diagram of the  general E L D object. The three main components of an E L D are discussed below: the sensor, manager and actuator.  55  Responses and Requests  Commands and Replies  Encapsulated Logical Device Manager  "Detailed Design Provided.  Knowledge Base*  ** Framework Provided.  Communications*  Planner  Actuator**  Sensor*  Controller*  Processing*  Output Driver**  Fusion* Input Driver*  Control Signal  Sensor Input  Requests  Commands  Figure 4.1: The Encapsulated Logical Device (ELD).  4.1.3 The Sensor The sensor component of the E L D contains the E L D ' s sensing abilities and data. This component fulfils half of specification 4. Included in the sensor component are the data input driver, fusion and processing functions, which are discussed below. The sensor functions and data are defined in the E L D base classes as can be viewed in Appendix A .  4.1.3.1 Sensor Input Driver Sensor input can come directly from physical sensors, such as an encoder. If this is the case then the Sensor Input Driver directly interfaces with the sensor hardware and must be implemented individually for each E L D . A library of Sensor Drivers could be developed to allow efficient implementation when hardware common to other applications is used. 
ELDs with physical sensor inputs are at the lowest level in the architecture and can be thought of as highly functional drivers.

Alternatively, sensor input can also be received from other ELDs. In this case the Sensor Input Driver interacts with a "virtual" or Logical Sensor and receives data through the manager's communication channels, which are discussed later. The Sensor Input Driver decodes received sensor input and maps it to an internal data structure. In this case, the Input Driver can be configured automatically for each ELD, according to specification 12. Further, in agreement with specification 6, the Sensor Input Driver is able to connect to multiple physical and Logical Sensors.

4.1.3.2 Fusion Module

According to specification 5, all data within the ELD Architecture is represented as uncertain. Therefore, a mechanism must be included in the ELD that enables the systematic combination of uncertain data. The uncertainty ellipsoid representation, as discussed in Chapter 3, is adopted as the uncertainty representation used in the ELD. Further, the fusion mechanisms presented in Chapter 3 are utilized within the ELD's Fusion Module. These fusion mechanisms are only applicable to specific cases of data fusion. Fusion cases encountered in the implementation of a specific ELD that cannot be handled using these mechanisms must be handled individually.

Each ELD's Fusion Module must be configured individually according to the sensor data that is received by that ELD. The Sensor Fusion Module of each ELD is internally a hierarchy of fusion mechanisms. An example sensor fusion module is given in Figure 4.2. The fusion mechanisms available include multi-dimensional redundant fusion using the HGRF method, dimensional extension, range enhancement, uncertainty modification and projection, as detailed in Chapter 3. The Fusion Module, using these fusion mechanisms, can be configured using a GUI tool based on specification 12. Once the sensor input is received, the data is fused in the fusion module and is then passed to the processing module.

Figure 4.2: Example ELD Sensor Fusion Module.

4.1.3.3 Processing Module

The Processing Module processes the fused data into sensor output, according to specification 4. This is the module in which the customized ELD sensor functions are defined, making each ELD's sensor component unique. Since these processing algorithms are defined specifically for each ELD, the base ELD class provides an empty function that must be developed by the ELD designer. The uncertainty of the data used must be considered in the algorithms developed. Additionally, the sensor output produced must have an associated uncertainty quantification, according to specification 5. The specific function used to determine this uncertainty depends on the processing algorithm used. In general, the output uncertainty is a function of the input and processing uncertainty:

\sigma_{out} = f(\sigma_p, \sigma_{in}),  (4.1)

where \sigma_{out} is the uncertainty of the output, \sigma_p is the uncertainty of the process, and \sigma_{in} is the uncertainty of the input. This function may be determined through analytical and experimental methods on a case-by-case basis. The output of the Processing Module is available to the ELD's actuator component and to other ELDs that are connected as outputs.
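As an illustration only, the skeleton below (not part of the thesis, whose ELD base classes are given in Appendix A) shows how the sensor component's read, fuse and process chain and the uncertainty propagation of equation (4.1) might be organized. The class and method names, and the simple additive uncertainty model, are assumptions made for the sketch.

```python
import numpy as np

# Sketch: skeleton of the ELD sensor component of Section 4.1.3. Equation (4.1) only
# states that the output uncertainty is some function of the processing and input
# uncertainties; the additive model below is an assumed example, not the thesis design.
class EldSensor:
    def __init__(self, input_drivers, process_variance):
        self.input_drivers = input_drivers        # physical and/or logical sensor sources
        self.process_variance = process_variance  # uncertainty contributed by processing

    def read(self):
        """Acquire (mean, ellipsoid) pairs from every connected input driver."""
        return [driver.acquire() for driver in self.input_drivers]

    def fuse(self, measurements):
        """Placeholder for the Fusion Module hierarchy (HGRF, extension, projection, ...)."""
        return measurements[0]                    # trivial stand-in for illustration

    def process(self, fused):
        """Turn fused data into sensor output with an uncertainty per equation (4.1)."""
        mean, ellipsoid = fused
        return mean, ellipsoid + self.process_variance * np.eye(ellipsoid.shape[0])
```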
58  4.1.4 The Manager The Manager is responsible for all high-level functions of the E L D including communicating with other ELDs, interpreting messages received from other ELDs, planning and maintaining a knowledge base. Detailed designs for the Interpreter and the Communications modules are included in this work. Only a framework has been provided for the Planner and Knowledge Base modules of the manager. A l l manager functions and data are included in the E L D base classes, as shown in Appendix A .  4.1.4.1 Communication The Communication module has the ability to send commands to input ELDs and requests to output ELDs according to specifications 8, and 9. Additionally, according to specification 7, this module does not allow the E L D to communicate with ELDs that it is not connected to. The Communication module is common to all ELDs and is fully implemented in the base E L D class. The Communication module packages messages and accesses inter-ELD communication channels. The mechanism used for inter-ELD communication depends on the platforms the system is implemented on.  In this implementation, based on a Microsoft N T system, "Pipes" are used as a reliable  communication channel. They are accessible from any N T application both locally and across a network. However, i f real-time performance is required a different platform such as the Open Real-Time operating System (ORTS) [4] executing on a Digital Signal Processor (DSP) would be required. N T pipes have also been implemented to provide communication between ORTS and N T based applications. The format of inter-ELD messages is as follows: Parameter  Description  Sender  Sending E L D N a m e  Recipient  Receiving E L D Name  Message  C o m m a n d , R e s p o n s e , Request o r R e p l y  Handshaking  Handshaking Type  Parameter 1 Parameter 2  Data Value Data Value • • •  Parameter (n-1)  Data Value  Parameter n  Data Value  Data  M e m o r y Address  The first two parameters indicate where the message originated and for whom it is intended.  The  unique ELDName tag of each E L D is used for these parameters. The third parameter, Message, indicates the subject of the communication. A message can be a command, response, request or a reply. Examples of some commands are 'Send Output', 'Change Set Point' and 'Calibrate'. A list of possible commands is included in the E L D base class data. This list can be appended, as applications require. In keeping with the specifications, commands sent can have no knowledge of the internal workings of a connected E L D .  59  For example, a command such as 'Increase  Image Resolution' would not be valid while 'Increase Sensor Certainty' would be. Commands are the most common message sent by the communication module, however, requests are also possible. Requests are messages sent to output ELDs making a special request for information or data. A list of requests is maintained in the ELD base class data. This list is specific to the application implemented, as there are no requests common to all ELDs. The ability to send requests is required for a low-level ELD to request action or information from a higher-level ELD. One can consider the situation where a camera ELD has been commanded to calibrate but a manipulator must move out of view first. The commanding ELD has no knowledge of the calibration procedure and cannot know if the manipulator must move. Therefore, the camera ELD must first request the manipulator to move and then perform the calibration. 
The communication module also sends responses and replies to commanding or requesting ELDs. These messages include requested information. The reply that is sent depends on the handshaking mode indicated in the command, which is the fourth parameter. The handshaking modes possible are no reply, reply when the message is received and reply when the task is complete. The application developer can add additional modes and the mode used for each message is the designer's choice. If handshaking is desired, the executing thread of the commanding or requesting ELD is placed in a waiting mode. The wait ends when the appropriate response is received, or when a predefined amount of time has elapsed, indicating failure. The next n parameters are available for data passing. These are defined specifically for each message. The final parameter, Data Address, is useful for passing data structures of undefined type. This is the parameter that is used to pass sensor output data structures in reply to a 'Send Output' command. 4.1.4.2 Interpreter, Planner and Knowledge Base The Interpreter monitors the communication channels that are connected to the ELD for commands and requests received. The messages received are decoded, checked for validity and routed to the Planner. The base ELD design provides for interpretation of messages that are common to all ELDs., Messages specific to an ELD must be manually added to the Interpreter on a case-by-case basis. The Planner determines what action the ELD should take upon the receipt of commands and requests. The Planner, together with the Knowledge Base, provides the independent operation required of specification 4. The detail of this component is out of the scope of this thesis and only a framework has been provided in this ELD implementation. Presently, the Planner tasks are hardcoded in the Interpreter component. Bayesian and Dempster-Shafter approaches to uncertain reasoning as seen in the SFX Architecture [28], and approaches such as Budenske and Gini's in [8] could be used for this component, where a high level goal, received as a message, is decomposed into a detailed plan. The plan could include sensing and actuation tasks as well as the allocation of the ELD's physical resources. This module interacts with the Knowledge Base to provide high-level intelligent behaviour in the ELD.  60  The Knowledge Base is a depository for data and knowledge, describing the environment relevant to the scope of the E L D . Again, only a framework for this component is provided and the detailed design of this component is out of the scope of this thesis. The Knowledge Base could be hard coded or gathered through sensing and reasoning. This component can be described as a world model. This includes issues such as how to acquire pertinent information and manage acquired uncertain knowledge as it is gathered. Work in the area of decision support and diagnostic expert systems is of interest for the design of this component [38].  4.1.5 The Actuator The Actuator component of the E L D is responsible for controlling the physically connected actuators, as required by specification 4. A framework for this component is included in this work but the details of its design is not addressed. ELDs that have a direct connection to an actuator form a feedback control loop.  The actuator  component contains an Output Driver and a Controller. Not every E L D has an implemented Actuator component but only those directly connected to a physical actuator. 
The actuator framework is included in the E L D base classes given in Appendix A . The Output Driver interfaces the E L D with the physical actuator. This involves communicating with output hardware such as an I/O card, DSP or parallel port. Implementation of this function depends on the hardware used and thus must be individually designed for each E L D . A library of Output Drivers could be developed to allow efficient implementation if hardware common to other applications is used. The implementation of the Controller is again accomplished on an individual E L D basis and has not been detailed as a part of this work. The Controller includes algorithms for low-level control, high level tuning and controller selection. The uncertainty quantification of sensor data and information from the Planner and Knowledge Base can be used in the control process. A library of control algorithms, for use with a GUI E L D design tool, could be developed to increase the efficiency of the design of ELDs.  4.1.6 The Executing Thread Each E L D is implemented as a separate thread that runs in parallel to all other E L D threads, in accordance to specification 11. The primary E L D execution loop continually monitors all communication channels associated with that E L D . When a message is received, it is interpreted and the planner takes the appropriate action. This could involve a series of sensor and actuator tasks.  Once the plan is  completed the executing thread returns to monitoring the communication channels until the next message is received.  61  4.2 The ELD Architecture This section begins with the specification for the design of the E L D Architecture. This governs E L D connections and the run-time GUT. A discussion about the components of the E L D Architecture follows.  4.2.1  Specifications  The following specifications govern the design of the E L D Architecture:  13) ELDs are connected hierarchically. The E L D Architecture consists of a hierarchical connection of ELDs, each designed for a specific purpose. Outputs of an E L D must be connected to a higher level E L D . Similarly, inputs of an E L D must be connected to a lower level E L D . There are no restrictions on how many levels may be spanned by connections. This is similar to the L S specification [11].  14) The abstraction of the ELD's data increases with the level of the hierarchy. The lowest level of the hierarchy contains ELDs directly connected to sensors and actuators. These low-level ELDs are directly associated with physical sensor data.  As the level of the hierarchy  increases, the abstraction of the E L D ' s output data increases. This is a concept proposed for LSs in [11]. For example, a low-level E L D may output pixels, a medium-level E L D may output edges and a higher-level E L D may output an object description. The highest-level in the architecture contains an E L D that is responsible for system planning and the overall supervision of the system.  15) There can be no circularly connected ELDs. ELDs cannot be connected in a circular fashion where the output of an E L D is eventually fed into its own input. This creates a loop and must be avoided.  The following specifications have not been explicitly stated in other architecture specifications in the literature, however, the concepts are not novel. These specifications are specific to the implementation of the E L D Architecture.  16) A run-time GUI is required. 
The E L D Architecture must supply a run-time GUI that is modifiable by E L D Architecture application developers. This is required so that developers can easily customize user interfaces for their applications. The run-time GUI enables users to command the automation system and receive information about the performance and state of the system. The E L D Architecture must also supply an interface between the ELDs and the run-time GUI. A n easy to use mechanism for relaying data and commands between the architecture and the G U I is required.  62  17) Implementation of ELD Architecture functionality should allow for "drag and drop " Graphical User Interface (GUI) design. To allow for rapid development of an E L D Architecture application, using a GUI tool, a uniform and sufficiently general design is required for the connections and GUI.  4.2.2  ELD Architecture Components  The Architecture class contains the E L D objects developed for an application, and specifies the connections between those ELDs, according to specifications 15. Adding ELDs and connections between them is accomplished by the application developer. It is (heir responsibility in designing their application to adhere to specifications 13 and 14. Further, in keeping with specification 17, these additions can be made using a GUI tool described below. The architecture has been implemented using OOD. The E L D Architecture also includes a run-time GUI and establishes a communication channel between the ELDs and the run-time GUI, as is required by specification 16. Base functionality of the GUI is provided but the application specific details need to be added to the GUI on a case-by-case basis. The developed GUI would display any desired system information and allow issuing predefined commands to the ELDs. This makes system information available to the user and allows a user to control the operation of the system.  4.3  C r o s s - P l a t f o r m Implementation  The E L D Architecture classes must be developed by the E L D Architecture provider for each platform supported. This is a one-time development requirement. Additionally, reliable communication channels must be established within and between each of these platforms.  Ideally, a sophisticated G U I  Architecture Builder Tool would enable seamless cross-platform implementation. automatically generate the code appropriate for the E L D ' s platform.  The tool would  Additionally, the tool would  automatically setup the communication channels between ELDs, no matter what platform they reside on. O f course, the tool would have to be designed to support all desired platforms.  Presently, the E L D  Architecture is implemented on the Microsoft N T 4.0 platform and communication to the ORTS platform is also supported.  4.4 ELD Architecture Builder A GUI based E L D Architecture Builder Tool has been developed to support this work [19]. This is a tool that rapidly generates Visual C++ code in the Microsoft Visual Studio programming environment. This is a first implementation of the tool and many additional features could be added. This present tool is able to generate an E L D Architecture application, complete with ELDs with core functionality, connections between the ELDs and a basic run-time GUI. Specific functionality of the ELDs and GUI must be added manually using Microsoft Visual Studio. The automatic generation of C++  63  code begins with E L D and E L D Architecture shell files. 
Shell files are C++ files containing the outlines of classes that must be coded to create an E L D Architecture application. The Architecture Builder Tool completes the detail of the shell files according to the choices that the user makes with the GUI. Additionally, the Architecture Builder Tool automatically configures the Microsoft Visual Studio project and workspace to enable manual additions to the classes. A library system has been implemented to enable the reuse of old applications. The library contains shell files of ELDs that were designed for specific tasks. applications they can be added to the library as shell files.  As new ELDs are created for specific Those shell files are available in the  Architecture Builder Tool as a base for development of new ELDs.  This increases the efficiency of  implementation by allowing the reuse of code. Further development of such a tool would be beneficial. The most important areas of development are the creation of additional E L D shell files and adding support for additional platforms.  4 . 5 The Structure of the E L D Architecture Product Development There are three groups involved in the development and application of this technology.  These  included the E L D Architecture Provider, Application Developers and Users. The ELD Architecture Provider develops the E L D specification and base E L D classes. Currently the Industrial Automation Laboratory is the E L D Architecture Provider, however, with appropriate licensing an industrial partner would fulfill this function.  Further advances in the capabilities and platforms  supported can be made in the specification at this level.  Additionally, tools used by application  developers such as G U I based rapid implementation tools, diagnostic tools and libraries of pre-designed ELDs are developed at this level. ELD Architecture Application Developers are industrial engineering teams that develop automation, systems. These would be companies that use the E L D specification as a development base for their automation systems. These parties would have access to all E L D code specific to their application but would not be able to modify the E L D base classes provided by the E L D Architecture provider. A l l the development tools provided would be available for use at this level. The ELD Architecture User is the end user of the product. This group includes parties that purchase automation systems developed by the application developers.  A n E L D that has been defined for a  specific task appears as a black box within the architecture at this level. The application developer can package the E L D objects so that the internal workings of the E L D are not modifiable by the end user. The User is able to monitor the performance of the ELDs through diagnostic tools provided. This allows access to all automation process performance data required. The E L D Application Developer and User could separate or independent organizations.  64  4.6 Summary The E L D Architecture allows systematic and efficient implementation of automation workcells across multiple hardware and software platforms. The E L D Architecture is based upon the LS and includes sensor and actuator functionality.  The quantification of the uncertainty of all sensor data and the  utilization of the sensor fusion mechanisms detailed in Chapter 3, will enhance the operational reliability of industrial automation workcells. 
In this chapter, specifications for the ELD and ELD Architecture, based on the literature discussed in Chapter 2, have been presented. Further, the framework of the ELD and ELD Architecture, according to these specifications, has been presented. The details of some implemented components have been included as well. Further, a GUI based Architecture Builder tool has been briefly described that enables rapid development of ELD Architecture applications.

Chapter 5

5 Redundant Fusion Simulation Results

Simulation results of the HGRF algorithm developed in Chapter 3 are presented herein. This discussion of results further demonstrates the effectiveness of the HGRF method in handling varying levels of sensor measurement disparity. First, the results and discussion of the fusion of two one-dimensional measurements are presented. Following this, examples of the fusion of two and three two-dimensional redundant measurements are discussed. A summary of the simulation results concludes this chapter.

5.1 Fusion of Two One-Dimensional Measurements

This section presents two series of results of the redundant fusion of two one-dimensional measurements. The first series consists of two measurements of equal variance with different means. The second series involves two measurements with equally separated means and differing variances. The redundant fusion algorithm used to generate these results is included in Appendix B. All measurements in this section are displayed as a Gaussian distribution rather than the ellipsoid depiction, which is a line with error bars for the one-dimensional case.

The first series is shown in Figure 5.1 and the second series is shown in Figure 5.2. The two measurements are shown with dashed lines and the HGRF result is shown with a solid line. The GRF result is included as a dotted line for comparison. The corresponding data is included in Table 5.1.

Figure 5.1: Fusion of Two One-Dimensional Measurements - Series One. Data for Figures 1(a) through (f) is given in Table 5.1. Measurements are shown as dashed lines, the HGRF result as a solid line and the GRF result as a dotted line.

Figure 5.2: Fusion of Two One-Dimensional Measurements - Series Two. Data for Figures 2(a) through (d) is given in Table 5.1. Measurements are shown as dashed lines, the HGRF result as a solid line and the GRF result as a dotted line.

Fusion Case   Measurement 1      Measurement 2      HGRF Result          GRF Result
              Mean   Variance    Mean   Variance    Mean     Variance    Variance
Figure 1a     5      1           5      1           5        0.5         0.5
Figure 1b     5      1           5.5    1           5.25     0.538       0.5
Figure 1c     5      1           6      1           5.5      1           0.5
Figure 1d     5      1           6.5    1           5.75     2.337       0.5
Figure 1e     5      1           7      1           6        4           0.5
Figure 1f     5      1           10     1           7.5      12.25       0.5
Figure 2a     5      1           8      1           6.5      6.25        0.5
Figure 2b     5      1           8      2           6        7.771       0.667
Figure 2c     5      1           8      4           5.6      7.744       0.800
Figure 2d     5      1           8      10          5.272    5.758       0.909

Table 5.1: One-Dimensional Redundant Measurements and Fused Results - Data.

5.1.1 Discussion of Results

In the first series of fusion examples the HGRF result's uncertainty increases as the measurements move apart, while the GRF result has equal variance for each of these cases. It can be seen from Figure 5.1(a) that when the measurements are coincident the HGRF and GRF results are identical. This satisfies case 1 of the Heuristic Specification of Section 3.1.3.
Figure 5.1(b) demonstrates case 2 of the Heuristic Specification as the measurements are in "agreement" and the uncertainty of the HGRF result is less than the measurements' uncertainties. Figure 5.1(c) represents the borderline between cases 2 and 3 of the Heuristic Specification and the HGRF result's uncertainty is equal to the measurements' uncertainties. Figure 5.1(d) demonstrates case 3 of the Heuristic Specification as the measurements "disagree" and the uncertainty of the HGRF result is larger than the measurements' uncertainties. Figure 5.1(e) represents the borderline between Heuristic Specification cases 3 and 4 since the measurements are spaced by the sum of their standard deviations. The HGRF result satisfies the Heuristic Specification as it spans the range of the measurements' uncertainty ellipsoids. The final result displayed in Figure 5.1(f) corresponds to case 4 of the Heuristic Specification. Again the HGRF result satisfies the Heuristic Specification as it spans the range of the measurements' uncertainty ellipsoids.

Figure 5.2 displays a sequence of fused measurements where the means of the measurements are unchanged and the difference in the measurements' uncertainties increases throughout the sequence. As the second measurement becomes more uncertain the fused mean approaches the first measurement's mean. The uncertain measurement's contribution is increasingly discounted as it becomes relatively more uncertain. This is a desired outcome resulting from the GRF method of calculating the fused mean. The measurements in Figure 5.2(a) and (b) represent case 4 of the Heuristic Specification. Figure 5.2(c) represents the borderline between cases 3 and 4 while Figure 5.2(d) demonstrates case 3 of the Heuristic Specification. The HGRF result of each of these cases is more uncertain than the minimum measurement uncertainty.

As the variance of the second measurement increases throughout the series, the HGRF result approaches the GRF result, which correctly approaches the uncertainty of the first measurement. At the extreme case where one measurement's uncertainty is infinite, the HGRF result and the GRF result equal the less uncertain measurement. This is an expected result since if two measurements of vastly differing uncertainties are fused the highly uncertain measurement should be ignored.

A discrepancy between the HGRF results and the Heuristic Specification occurs when measurements have differing uncertainties, as is demonstrated in Figure 5.2(c) and (d). In these examples, the mean is pulled closer to the more certain measurement, while the fused uncertainty is determined by using the maximum Measurement Spacing Vector. As a result the HGRF method's fused ellipsoid overlaps the more certain measurement and leaves the more uncertain measurement partially uncovered. To address this problem, a non-Gaussian distribution is required so that the GRF mean can be maintained while the fused uncertainty region still spans the range of the measurements' ellipsoids. However, this deviation from the Heuristic Specification is accepted since Gaussian distributions have been chosen as the general representation for uncertainty in this approach.

The above simulation results indicate that the HGRF method satisfies the Heuristic Specification for one-dimensional redundant fusion except in the case of differing measurement uncertainties.
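For two one-dimensional Gaussian measurements, the GRF baseline reported in Table 5.1 is consistent with the standard inverse-variance weighting; the short sketch below reproduces the GRF values for Figure 5.2(b), giving a fused mean of 6 and a fused variance of 0.667. The HGRF modification of the fused variance as a function of measurement disparity is defined in Chapter 3 and is not reproduced here.

#include <iostream>

// Inverse-variance weighting of two 1-D Gaussian measurements.  This is the
// standard result that the GRF columns of Table 5.1 are consistent with; the
// HGRF inflation of the fused variance (Chapter 3) is not included.
struct Gaussian1D { double mean; double variance; };

Gaussian1D fuseGRF(const Gaussian1D& a, const Gaussian1D& b)
{
    double wa = 1.0 / a.variance;
    double wb = 1.0 / b.variance;
    return { (wa * a.mean + wb * b.mean) / (wa + wb),   // fused mean
             1.0 / (wa + wb) };                         // fused variance
}

int main()
{
    // Figure 5.2(b): measurements (mean 5, variance 1) and (mean 8, variance 2)
    Gaussian1D fused = fuseGRF({5.0, 1.0}, {8.0, 2.0});
    std::cout << fused.mean << " " << fused.variance << "\n";   // prints 6 and 0.667
    return 0;
}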
5.2 Fusion of Two Two-Dimensional  Measurements  This section examines examples of the fusion of two two-dimensional measurements using the HGRF method. Three sets of results are given in Figure 5.3, 5.4 and 5.5. The corresponding data is given in Table 5.2. The Matlab program used to generate these results is included in Appendix B.  (c)  (d)  Figure 5.3: Fusion of Two Two-Dimensional Measurements - Results Set One. Data for Figure 3(a) through (d) is given in Table 5.2. Measurements are shown as dashed lines, the GRF result is shown as a dotted line and the HGRF result is shown as a solid line.  69  4  5  6  7  8  10  11  12  3  4  5  6  7  (a)  7  8  8  9  10  11  12  9  10  11  • 12  (b)  9  10  11  12  3  4  5  6  7  8  (d)  (c)  Figure 5.4: Fusion of two two-dimensional measurements - Results Set Two. Data for Figure 4(a) through (d) is given in Table 5.2. Measurements are shown as dashed lines, the GRF result is shown as a dotted line and the HGRF result is shown as a solid line.  70  Figure 5.5: Fusion of two two-dimensional measurements - Results Set Three. Data for Figure 5(a) through (d) is given in Table 5.2. Measurements are shown as dashed lines, the GRF result is shown as a dotted line and the HGRF result is shown as a solid line.  71  Fusion Case Figure 3a  Measurement 1 Mean Variance [10] [5 5] [0 1]  Figure 3b  [5 5]  Figure 3c  [5 5]  Figure 3d  [5 5]  Figure 4a  [5 5]  Figure 4b  [5 5]  Figure 4c  [5 5]  Figure 4d  [5 5]  Figure 5a  [5 5]  Figure 5b  [5 5]  Figure 5c  [5 5]  Figure 5d  [5 5]  [10] [0 1] [10] [0 1] [10] [0 1] [10] [0 1] [10] [0 1] [10] [0 1] [10] [0 1] [10.1] [0.1 4] [4 1] [1 1] [9 1.5] [1.5 1] [9 1.5] [1.5 1]  Measurement 2 Mean Variance [1 0] [5 5] [0 l] [l 0] [0 1]  [6 5]  [1 0] [0 1] [1 0] [0 1] [1 0] [0 1] [1 0] [0 1] [1 0] [0 1] [1 0] [0 i] [11] [1 9] [1 1] [1 8] [• 1] [1 4] [1 1] [1 4]  [7 5] [10 5] [5 5] [5.707 5.707] [6.414 6.414] [8.5355 8.5355] [7 6] [6.5 7] [8 9] [5 7]  H G R F Result Mean Variance [0.5 0] [5 5] [0 0.5] [5.5 5] [6 5] [7.5 5] [5 5] [5.354 5.354] [5.707 5.707] [6.768 6.768] [7.143 5.914] [5.923 4.568] [6.098 5.402] [4.914 5.071]  G R F Result Variance [0.5 0] [0 0.5] :•  [1 0] [0 0.5]  [0.5 0] [0 0.5]  [4 0] [0 0.5] [12.25 0] [0 0.5] [0.5 0] [0 0.5] [0.75 0.25] [0.25 0.75] [2.25 1.75] [1.75 2.25] [6.38 5.88] [5.88 6.38] [2.10 3.16] [3.16 9.43] [4.13 1.66] [1.66 3.48] [1.36 0.97] [0.97 2.12] [0.77 0.26] [0.26 0.71]  [0.5 0] [0 0.5] [0.5 0] [0 0.5] [0.5 0] [0 0.5] [0.5 0] [0 0.5] [0.5 0] [0 0.5] [0.5 0] [0 0.5] [0.77 0.26] [0.26 0.69] [0.48 0.19] [0.19 2.74] [0.76 0.24] [0.24 0.76] [0.77 0.26] [0.26 0.69]  .  Table 5.2: Fusion of Two Two-Dimensional Redundant Measurements - Data. Thefirsttwo sets of fusion results, given in Figure 5.3 and Figure 5.4, demonstrate the reference frame independence of the HGRF method. These sets of measurement data are identical except that they are expressed in reference frames that are rotated with respect to one another. The results generated by the HGRF method for both sets of measurements are also identical relative to an aligned frame of reference. Therefore, the HGRF method produces consistent results regardless of the frame of reference used to express the data. This result requires that the frames of reference used are linear orthogonal frames and that all measurements to be fused are expressed in the same frame of reference. The third set of data, shown in Figure 5.5, demonstrates the multi-dimensional characteristics of the fusion method. 
The measurements in Figure 5.5(a) are non-overlapping and the resulting HGRF fused ellipsoid has a large uncertainty in the vertical dimension and a smaller uncertainty in the horizontal dimension. This results from a large separation in the vertical dimension, while the measurements overlap along the horizontal dimension. The fusion of these two measurements also demonstrates the previously mentioned discrepancy between the HGRF method and the Heuristic Specification. The HGRF result protrudes below the measurements more than is required by the specification. In Figure 72  5.5(b) the  measurements are again non-overlapping. The HGRF result is more uncertain in the horizontal  dimension while being less uncertain in the vertical dimension. This results from the overlap in the measurement's vertical dimension and lack of overlap in the measurement's horizontal dimension. Figure 5.5(c)  displays two measurements that overlap. The HGRF result displays increased uncertainty over the  GRF result. This results from the amount of separation of the measurement's means. Figure 5.5(d) displays the fusion of two measurements that have a small separation of means relative to their uncertainty. In this case the HGRF result is very close to the GRF result and exhibits a large decrease in uncertainty relative to the measurements' uncertainties. The examples in this section demonstrate the HGRF method's consistent output regardless of the frame of reference used to express the measurements. Further, these results demonstrate the method's ability to operate in higher dimensional measurement spaces.  5.3 Fusion of Three Two-Dimensional  Measurements  This section provides examples of the fusion of three two-dimensional redundant measurements by using the HGRF method. The Matlab program used to generate these results is included in Appendix B. Four fusion examples are displayed in Figure 5.6 through Figure 5.9 and the corresponding data can be viewed in Table 5.3.  3 2 <^  1  1  2  I  3  I  1  4  5  :  I  6  I  7  J  8  1  9  |  10  Figure 5.6: Fusion of Three Two-Dimensional Measurements - Example One. Measurements are shown with dashed lines, the HGRF result with a solid line and the GRF result with a dotted line. 73  10i 9 8  3^  \  2 1  1  2  3  4  5  6  7  8  9  .  10  Figure 5 . 7 : Fusion of Three Two-Dimensional Measurements - Example Two. Measurements are shown with dashed lines, the HGRF result with a solid line and the GRF result with a dotted line.  Figure 5 . 8 : Fusion of Three Two-Dimensional Measurements - Example Three. Measurements are shown with dashed lines, the HGRF result with a solid line and the GRF result with a dotted line.  74  10,  L  1  1  2  I  1  3  4  i  5  6  i  i  7  i  8  i  i  9 1 0  Figure 5.9: Fusion of Three Two-Dimensional Measurements - Example Four. Measurements are shown with dashed lines, the HGRF result with a solid line and GRF result with a dotted line.  
Fusion Case Figure 6 Figure 7 Figure 8 Figure 9  Measurement 1 Mean Var [1 0] [5 5] [0 1] [1 1] [5 5] [16] [1 0] [3 3] [0 1] [1 0] [5 5] [0 1]  Measurement 2 Mean Var [4.5 [10] 5.1] [0 1] [5.4 [3 0.7] 5.2] [0.7 1.5] [6.0 [2 1.3] 5.5] [1.3 3] [10] [7 5] [0 1]  Measurement 3 Mean Var [5.5 [10] 5.5] [0 1] [4.9 [4 0] [0 2] 5.7] [6.3 [1 0] 5.0] [0 9] [10] [5 7] [0 1]  HGRF Result Mean Var [1.10 0.44] [5 5.2] [0.44 0.64] [5.12 [0.62 0.10] 5.32] [0.10 0.98] [4.79 [9.50 3.94] [3.94 5.50] 3.65] [5.67 [5.65 -1.12] 5.67] [-1.12 3.97]  GRF Result Variance [0.33 0] [0 0.33] [0.58 0.15] [0.15 0.73] [0.38 0.07] [0.07 0.65] [0.33 0] [0 0.33]  Table 5.3: Fusion of Three Two-Dimensional Redundant Measurements - Data.  Example One, shown in Figure 5.6, consists of three closely spaced measurements of equal variance. The means of Measurement Two and Three lay outside of each other's uncertainty ellipsoids indicating that the measurements "disagree". Measurement One "agrees" with both Measurements Two and Three. This spacing of the measurements results in an increase in uncertainty along one dimension of the H G R F result and a decrease along the other dimension. The principal axes of the H G R F result measure 1.17 and 0.69 compared with the measurement's principle axes that all measure 1.0.  75  The second example, displayed in Figure 5.7, consists of three measurements that are all in "agreement". Therefore, there is a decrease in uncertainty along all dimensions of the H G R F result, in comparison to the measurement's uncertainties. This example displays the potential benefits of redundant fusion when measurements are in "agreement". The third example, shown in Figure 5.8, includes two measurements that "agree" and a third measurement that does not overlap with either of the other measurement's uncertainty ellipsoids. This example indicates that the disagreeing sensor has likely failed, however, it may be that the two sensors that agree are in fact in error.  Therefore, the H G R F method produces a highly uncertain result in  comparison to the measurement's uncertainties. This increase in uncertainty can trigger a higher-level process to determine and correct the sensor failure. The fourth example consists of three measurements of equal variance that do not overlap each other. This can be viewed in Figure 5.9. This is a case of sensor failure that is not easily diagnosed from the data given. The H G R F method outputs a result that is highly uncertain which essentially encompasses the measurement's uncertainty ellipsoids. The principal axes of the H G R F result measure 3.45 and 1.76 in comparison to the measurement's uncertainty ellipsoid principal axes measuring 1.0 each.  5.4 Summary of Simulation Results The simulation results presented in this chapter reveal that the H G R F method satisfies the Heuristic," Specification of Section 3.1.3 with one exception. The failure to meet the Specification occurs when the measurements to be fused have differing uncertainties. In this case the fused ellipsoid overlaps the more certain measurement and does not cover all of the more uncertain result. This limitation is a result of using a Gaussian distribution to represent uncertainty. The proposed fusion method produces consistent results regardless of which frame of reference is used to express the measurements. The measurements to be fused must be expressed in the same frame of reference. This can be accomplished by using similarity transformations.  
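The GRF baseline used for comparison throughout this chapter generalizes to higher dimensions as an inverse-covariance weighting of the measurement ellipsoids. The sketch below assumes this standard formulation rather than reproducing the Chapter 3 derivation; for the two coincident identity-covariance measurements of Figure 5.3(a) it returns the fused mean [5 5] and fused covariance diag(0.5, 0.5) listed in Table 5.2.

#include <array>
#include <iostream>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<std::array<double, 2>, 2>;

// 2x2 matrix inverse (assumes a non-singular covariance matrix).
Mat2 inverse(const Mat2& m)
{
    double det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
    return {{{ m[1][1] / det, -m[0][1] / det},
             {-m[1][0] / det,  m[0][0] / det}}};
}

// Inverse-covariance fusion of two 2-D Gaussian measurements:
// C = (C1^-1 + C2^-1)^-1 and x = C (C1^-1 x1 + C2^-1 x2).
void fuse(const Vec2& x1, const Mat2& C1, const Vec2& x2, const Mat2& C2,
          Vec2& x, Mat2& C)
{
    Mat2 W1 = inverse(C1), W2 = inverse(C2);
    Mat2 W  = {{{W1[0][0] + W2[0][0], W1[0][1] + W2[0][1]},
                {W1[1][0] + W2[1][0], W1[1][1] + W2[1][1]}}};
    C = inverse(W);
    Vec2 b = { W1[0][0]*x1[0] + W1[0][1]*x1[1] + W2[0][0]*x2[0] + W2[0][1]*x2[1],
               W1[1][0]*x1[0] + W1[1][1]*x1[1] + W2[1][0]*x2[0] + W2[1][1]*x2[1] };
    x = { C[0][0]*b[0] + C[0][1]*b[1], C[1][0]*b[0] + C[1][1]*b[1] };
}

int main()
{
    Vec2 x; Mat2 C;
    fuse({5.0, 5.0}, {{{1.0, 0.0}, {0.0, 1.0}}},
         {5.0, 5.0}, {{{1.0, 0.0}, {0.0, 1.0}}}, x, C);               // Figure 5.3(a)
    std::cout << x[0] << " " << x[1] << "  " << C[0][0] << " " << C[1][1] << "\n";  // 5 5  0.5 0.5
    return 0;
}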
76  Chapter 6  6 Application Example: A Two-Degree-ofFreedom Robot  6.1 Experimental Setup An experimental setup was developed with the purpose of testing the ELD Architecture and the proposed redundant fusion method. The setup consists of a two-degree-of-freedom robot whose endeffector moves in a plane. The setup is referred to as the XY Table. A picture of the XY Table is included in Figure 6.1 and a top view diagram is given in Figure 6.2. The setup was designed and built in the Industrial Automation Laboratory.  6.1.1 Physical System A DC motor, shown in Figure 6.3, drives the X-Axis of the robot. The Y-Axis translates along the XAxis on two linear bearings and is connected to the motor by a ball screw. A stepper motor drives the YAxis. The end-effector translates along the Y-Axis on two small linear bearings and is connected to the stepper motor via a belt, as is shown in Figure 6.4. Due to the quality of the ball screw and bearings used there is a significant amount of play in the system. This limits the positional accuracy of the robot but does not hinder the experiments related to this work.  77  Figure 6.1: XY Table Experimental Setup.  Top View  Figure 6.2: Top View Diagram of the XY Table.  78  Figure 6.3: X-Axis Drive and Encoder.  Figure 6.4: Y-Axis Drive and Ultrasound Sensor.  79  The two-dimensional position of the end-effector is sensed redundantly. A n encoder is mounted to the shaft of the X-Axis motor and is used to measure the position of the X-Axis of the robot. The encoder is shown in Figure 6.3 and is a 1000 line encoder with quadrature decoding. This yields a resolution of 0.09 degrees allowing a positional measurement resolution of 0.003175 mm along the X-Axis.  The  measurement of the actual X position of the end-effector using the encoder is considerably less accurate than the resolution available due to the play in the system. The quantification of the accuracy of this measurement is discussed in detail in a following section. An ultrasound sensor is mounted as shown in Figure 6.4 and is used to determine the position of the Y-Axis. The sensor emits an ultrasound pulse, which reflects off of a metal surface and is received again by the sensor. The time of flight is used to determine the distance to the metal surface. The ultrasound sensor is an analog sensor and is prone to noise. The quantification of the accuracy of this sensor is also discussed in detail in a following section. Together the encoder and the ultrasound sensor indirectly sense the position of the end-effector. The measurements are indirect since the encoder is actually measuring the number of turns the ball screw makes and the ultrasound sensor is measuring the position of a metal surface rather than measuring the actual position of the end-effector. Both of these sensors are fast and are suitable for real-time control. A JVC 1070U colour digital camera is mounted above the robot and provides a top view of the entire workspace. The digital camera is connected to an Imaging Technologies IC2 PCI frame grabber mounted in a Pentium II 750 M H z PC. This hardware is able to grab a digital image with 640x480 pixels in approximately 30 ms. The digital image is processed in order to detect the position of the end-effector. The sequence of grabbing the image, transferring it to the PC's R A M and processing the data requires approximately 200 ms, making this sensor too slow for real-time control of the X Y Table. 
This sensor directly senses the position of the end-effector but is dependent on the lighting conditions. A lighting system is installed, as can be seen in Figure 6.1, which minimizes the variability in the lighting conditions. The details of the design of this sensor and the determination of its accuracy is discussed in a following section.  6.1.2 Implementation Platforms The X Y Table E L D Architecture is implemented across two hardware platforms.  The high-level  components that do not have real-time constraints are implemented using Visual C++ code running under the Microsoft N T 4.0 operating system and executing on a Personal Computer (PC). This PC will be referred to as the high-level PC. The low-level real-time components are implemented in the Open RealTime Operating System (ORTS) [4], which is executed on a TI C32 Digital Signal Processor (DSP). The components implemented on this platform are programmed using ORTS scripts and C functions that are written on a PC and downloaded to the DSP.  80  The hardware setup of the X Y Table is shown below in Figure 6.5. A PC, referred to as the low-level PC, houses the DSP and the MFIO card, which is both an analog and digital Input/Output card. The sensors and actuators connect to the MFIO card. ORTS, running on the DSP, can directly access sensor and actuator data through a direct link to the MFIO card. Through an ORTS feature the DSP is able to communicate with the N T operating system on the low-level PC.  The high-level PC houses and  communicates with the frame grabber, which receives the camera signal. The N T platform, on the highlevel PC, and the DSP platform, on the low-level PC, reliably communicate across an N T network by using N T pipes.  in  m Encoder Signal  Ultrasound Signal  T  -----  i  T Y-Axis Motor Signal  X-Axis Motor Signal  Camera Signal High-Level PC Running NT  Low-Level PC Running NT DSP to PC NT Pipe  Frame Grabber  MFIO U».| DSP PC to DSP NT Pipe  Figure 6.5: X Y Table Hardware Setup.  6.1.3 ELD Architecture Design The sensors and actuators of this system have been integrated using the E L D Architecture.  The  design of the architecture can be seen in Figure 6.6. At the lowest level of the architecture there are three ELDs: the Encoder, Ultrasound and Image ELDs. These are simple ELDs that interface the physical sensors with the software E L D Architecture.  81  'Planner ELD  High Level Platform mplementation (NT) *Split Platform Implementation (NT and DSP)  Pointer Locator ELD  Encoder  X Axis DC Motor  Y Axis Stepper Motor  Ultrasound Sensor  Camera  Figure 6.6: X Y Table E L D Architecture.  6.1.3.1 Low Level ELDs The Encoder E L D uses the encoder's signal to determine the X-Axis position. This E L D has its realtime components implemented under ORTS on the DSP and its high-level components implemented in Visual C++ under NT. This internal platform split is joined using an N T pipe communication channel. The Encoder E L D is connected to the X-Axis E L D . 82  The Ultrasound ELD receives the ultrasound sensor's signal and from it determines the Y-Axis position. The implementation of the Ultrasound ELD is similar to the implementation of the Encoder ELD. The Ultrasound ELD provides input to the Y-Axis ELD. The Image ELD triggers the frame grabber to acquire an image and places that image in PC RAM. This ELD is entirely implemented in Visual C++ on the high-level PC. The Image ELD provides input in the form of a digital image to the Pointer Locator ELD. 
An example of the type of images taken by the Image ELD is shown in Figure 6.7.  W  End-Effector Dot  Calibration Dots 50 mm  # • •  Figure 6.7: Overhead Image Grabbed by the Image ELD.  6.1.3.2 Pointer Locator ELD The Pointer Locator ELD resides at the next level of the architecture. This ELD receives the Image ELD input processes it and outputs the two-dimensional position of the end-effector. The X-Axis and YAxis ELDs receive the output. Processing of the image involves a calibration and an end-effector detection routine. 83  Pixels in the image do not linearly map to real world coordinates. Therefore, a mapping is required to transform pixel coordinates to real world coordinates. The mapping is defined by an automated routine where a grid of red calibration dots of known real world positions is detected. The entire grid of dots is not viewable at the same time since the Y-Axis of the robot occludes part of the grid, as is seen in Figure 6.7. Therefore, the Pointer Locator E L D must request knowledge about the current position of the X Axis and request motion from the X-Axis E L D if it is required. The calibration dot pixels are selected using colour thresholds and the dot centers are determined by using a center of area calculation. This calibration routine is run during the initialization of the system. A bilinear interpolation algorithm uses this information to determine the real world position of pixels. The end-effector is marked with a blue dot. This dot is detected in the image using a method similar to the calibration dot detection method. However, this dot is not in the plane of the grid of the calibration dots. Therefore, a geometric adjustment algorithm is required to compensate for the difference in heights and correctly determine the position of the end-effector in real world coordinates.  6.1.3.3 X-Axis and Y-Axis ELDs The next level in the X Y Table E L D Architecture consists of the X-Axis and Y-Axis ELDs, which are similar in design. The X-Axis E L D receives input from the Encoder E L D and the Pointer Locator E L D and outputs a controlling signal to the X-Axis motor. Similarly, the Y-Axis E L D receives input from the Ultrasound E L D and the Pointer Locator E L D and outputs a controlling signal to the Y-Axis motor. Both of these ELDs have all the components required to be a stand alone feedback control loop. The X and Y Axis ELDs both provide input to the Planner E L D . As for the Encoder and Ultrasound ELDs, the X and Y-Axis ELDs have their real-time components implemented on the DSP platform and their high-level components on the N T platform. The X and Y-Axis ELDs both receive and fuse redundant sensor input. The Pointer Locator E L D provides a two-dimensional position measurement, which is first projected onto the appropriate axis using the projection method described in section 3.3.  Following the projection, the two one-dimensional  measurements are redundantly fused using the method described in Chapter 3. This fused value is used as a position feedback signal in the control of the respective robot axis. The X and Y-Axis control loops run on the DSP at 200 Hz while the Pointer Locator E L D is only able to provide output at a rate of 5 Hz. The use of fusion with multi-rate feedback signals in realtime motion control has been examined in the Industrial Automation Laboratory by Langlois [20].  6.1.3.4 Planner ELD The Planner E L D is the object that coordinates the overall action of the X Y Table. 
It is connected to the X and Y-Axis ELDs and provides a controlling signal in the form of set points to those ELDs. This ELD contains the programs for the high-level tasks for the XY Table. Since the focus of this work is on the fusion of redundant data within the ELD Architecture, no high-level tasks have been implemented beyond simple set point changes. An example of a suitable high-level task is recognizing an object in the workspace and aligning the end-effector with the object's position. This would require additional ELDs to be added to the system.

6.1.4 Graphical User Interface

The GUI developed for the XY Table is implemented in Visual C++ and executes on the high-level PC. A screen shot of the GUI is given in Figure 6.8. NT pipes, established in the ELD Architecture base class, are used to communicate between the GUI and the ELD Architecture. As part of the base GUI design, an edit box is included at the bottom of the window where messages from the ELDs can be easily displayed.

Various functions specific to the XY Table application have been added to the GUI. The GUI can be used to display the image as seen by the Image ELD. The GUI also provides a tool to quickly modify and test the threshold values used in the Pointer Locator ELD. This allows quick testing of the Pointer Locator ELD functions. Among other features, an edit box has also been added that allows the user to input a desired set point of the X-Axis.

Figure 6.8: XY Table Graphical User Interface.

6.2 Calibration of Sensors

The uncertainty of the output of the Encoder ELD, Ultrasound ELD and Pointer Locator ELD was determined through experimentation. A world reference frame was established using right angled machinist's blocks as is depicted in Figure 6.9. This reference frame was used to compare all measurements taken by the various XY Table sensors. The world and sensor reference frames all have parallel axes but different origins. Therefore, the offset between the world and sensor reference frames had to be determined. This was accomplished by repeated measurements using verniers, a dial gauge and a height gauge. The measurement data is given in Appendix D. Once the relationships between the reference frames were established, the uncertainty of the end-effector position sensors could be determined.

Figure 6.9: Sensor Calibration Setup.

The uncertainty was determined through experimentation. The position of the end-effector was measured at various positions throughout the workspace using a dial gauge mounted to the right angled blocks, as is shown in Figure 6.9. This measurement was used as the true value. Corresponding measurements were taken with the Encoder ELD, Ultrasound ELD and Pointer Locator ELD. Again this data may be viewed in Appendix D. The uncertainties of the sensors used in the XY Table were determined from the variance of these repeated measurements. The results are reported in Table 6.1.

Sensor           X-Axis Variance (mm)  X-Axis Sensor Noise (mm)  Y-Axis Variance (mm)  Y-Axis Sensor Noise (mm)
Encoder          3.4587                1.0x10                    N/A                   N/A
Ultrasound       N/A                   N/A                       3.7039                0.014
Pointer Locator  4.1990                0.014                     3.0350                0.0072

Table 6.1: Uncertainties of XY Table Sensor Measurements.
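The variances in Table 6.1 were obtained from repeated comparisons of each sensor against the dial gauge reference; a simple sketch of that style of calculation is given below. The function and the numerical values are illustrative only and do not reproduce the actual calibration code or the data of Appendix D.

#include <vector>
#include <iostream>

// Sample variance of the error between a sensor's readings and the reference
// (dial gauge) values taken at the same end-effector positions.
double measurementVariance(const std::vector<double>& sensor,
                           const std::vector<double>& reference)
{
    const std::size_t n = sensor.size();
    double meanError = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        meanError += sensor[i] - reference[i];
    meanError /= static_cast<double>(n);

    double sumSq = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double e = (sensor[i] - reference[i]) - meanError;
        sumSq += e * e;
    }
    return sumSq / static_cast<double>(n - 1);   // unbiased sample variance
}

int main()
{
    std::vector<double> encoder   = {10.2, 25.1, 39.8, 55.3};   // mm, made-up values
    std::vector<double> dialGauge = {10.0, 25.0, 40.0, 55.0};   // mm, made-up values
    std::cout << measurementVariance(encoder, dialGauge) << "\n";
    return 0;
}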
The Encoder, Ultrasound and Pointer Locator E L D sensors exhibit much lower levels of static measurement noise than the uncertainty levels reported above. The levels of static measurement noise, given as variance measured in mm, for the Encoder, Ultrasound and Pointer Locator ELDs are included in Table 6.1. The high level of measurement uncertainty results from the indirect method that the encoder and ultrasound use to measure the position of the end-effector.  This increased uncertainty can be  attributed to the play in the system. The increased level of measurement uncertainty for the Pointer Locator E L D results from inaccuracies in detecting the end-effector and calibration dots.  6.3 Redundant Fusion Results Data was collected during the execution of two different trajectories with the X Y Table. The results of the fusion of this data is presented and discussed in this section. The X and Y-Axis ELDs used the multirate feedback control strategy proposed by Langlois in [20] to control the X Y Table's motion. The Encoder and Ultrasound E L D ' s sensor measurements were first extended into a two-dimensional ellipsoid representation. This was done using the method presented in Section 3.3. The encoder axis is perpendicular to the ultrasound axis making the extension operation simple in this case. The Pointer Locator E L D provides a measurement of the position of the end-effector that is redundant to the extended Encoder-Ultrasound measurement.  These redundant measurements are then fused using the H G R F  method. The fusion results presented and discussed below were calculated offline.  6.3.1 Figure Eight Trajectory The first trajectory, referred to as the figure eight, is shown in Figure 6.10. The figure displays the Encoder-Ultrasound, Pointer Locator and H G R F result for one cycle of the trajectory. Two enlarged views, shown in Figure 6.11 and Figure 6.12, are discussed in detail.  87  250 160  180  200  220  240 260 X-Axis (mm)  280  300  320  Figure 6.10: Figure Eight Trajectory Fusion Results. Solid line and stars - Pointer Locator measurements, dashed line and circles - Encoder-Ultrasound measurements and dotted line and diamonds - HGRF fused measurements. 3451  340  330  325 180 L  185  190 X-Axis (mm)  195  200  Figure 6.11: Enlarged View #1. Solid line and stars - Pointer Locator measurements, dashed line and circles - Encoder-Ultrasound measurements, and dotted line and diamonds - HGRF fused measurements.  88  290  285  E E  m  280  275  2701  200  —  >  •  205  210  '  1  215  220  225  X-Axis (mm) Figure 6.12: Enlarged View #2. Solid line and stars - Pointer Locator measurements, dashed line and circles - Encoder-Ultrasound measurements, and dotted line and diamonds - HGRF fused measurements.  In general throughout the figure eight trajectory the Encoder-Ultrasound and Pointer Locator measurements "agree". Therefore, the fusion of the redundant measurements generally results in a decrease in uncertainty. An example of such a case is shown in Figure 6.12.  All three sets of  measurements in this figure clearly "agree" and the HGRF result lies inside both of the redundant measurements. However, there are instances where the sensor measurements "disagree" as shown in Figure 6.11. The measurements in this figure differ by 2.5 to 3.0 mm along the X-axis. This results in an increase in uncertainty along this direction. The HGRF result is seen to span the range of the measurement's uncertainty ellipsoids in this case.  
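A sketch of the dimensional extension step described at the beginning of this section is given below: the encoder's X measurement and the ultrasound's Y measurement are combined into a single two-dimensional mean with a diagonal covariance, which can then be fused with the Pointer Locator's two-dimensional measurement (for example with the inverse-covariance step sketched at the end of Chapter 5, or with the HGRF method of Chapter 3). The variable names and numbers are illustrative, the axes are assumed perpendicular as stated above, and this is not the Section 3.3 implementation itself.

#include <array>
#include <iostream>

// Dimensional extension of two independent 1-D measurements taken along
// perpendicular axes into one 2-D measurement: the means are stacked and the
// variances placed on the diagonal of the covariance matrix (the cross terms
// are zero because the encoder and ultrasound axes are treated as independent).
struct Measurement2D {
    std::array<double, 2> mean;
    std::array<std::array<double, 2>, 2> covariance;
};

Measurement2D extend(double xMean, double xVariance, double yMean, double yVariance)
{
    return { {xMean, yMean},
             {{{xVariance, 0.0}, {0.0, yVariance}}} };
}

int main()
{
    // Encoder (X) and ultrasound (Y) readings with the variances of Table 6.1.
    Measurement2D m = extend(220.0, 3.4587, 280.0, 3.7039);
    std::cout << m.mean[0] << ", " << m.mean[1] << "\n";
    return 0;
}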
6.3.2 Square Trajectory The second set of results discussed is the square trajectory, shown in Figure 6.13. Two enlarged views are included in Figure 6.14 and Figure 6.15, and are discussed in detail.  89  X-Axis (mm) Figure 6.13: Square Trajectory Fusion Results. Solid line and stars - Pointer Locator measurements, dashed line and circles - Encoder-Ultrasound measurements, and dotted line and diamonds - HGRF fused measurements. 2551-  250 E E, w  245  240 205  1  210  1  1  215  J  220  1  225  230  X-Axis (mm) Figure 6.14: Enlarged View #3. Solid line and crosses - Pointer Locator measurements, dashed line and stars - Encoder-Ultrasound measurements, and dotted line and stars - HGRF fused measurements. 90  345  340  h  325  I  1  185  190  1  L  :_  195 200 X-Axis (mm)  ,  ,  205  210  Figure 6.15: Enlarged View #4. Solid line and crosses - Pointer Locator measurements, dashed line and stars - Encoder-Ultrasound measurements, and dotted line and stars - HGRF fused measurements.  As was the case with the figure eight trajectory, in general throughout the square trajectory the Encoder-Ultrasound and Pointer Locator measurements agree. Therefore, the fusion of the redundant measurements generally results in a decrease in uncertainty. An example of such a case is shown in Figure 6.15. All three sets of measurements in this figure clearly "agree" and the HGRF result lies inside both of the redundant measurements. However, there are instances where the sensor measurements "disagree" as shown in Figure 6.14. The measurements in this figure differ by 2.0 to 5.0 mm. This results in an increase in uncertainty. The HGRF result is seen to span the range of the measurement's uncertainty ellipsoids in this case.  6.4  Summary  The ELD Architecture has been shown to be effective for integrating multiple sensors and actuators in an experimental automation workcell.  The XY Table implementation has also exhibited the  Architecture's ability to operate on multiple platforms.  Further the ELD Architecture has been  successfully used to bridge high-level and low-level real-time tasks.  91  The H G R F method has been demonstrated as effective at combining redundant measurements in a real system. The uncertainty of H G R F result is decreased when the redundant measurements agree and is increased when there is a discrepancy.  92  Chapter 7 7 Concluding Remarks 7.1  Summary and Conclusion  In this work the Heuristic based Geometric Redundant Fusion method has been presented as a method that reliably fuses redundant sensor measurements over all ranges of measurement disparity. This method is developed with an emphasis on the fusion of sensor data provided by a small number of sensors, as is encountered in industrial automation workcells.  Specific cases of complementary fusion including  dimensional extension, uncertainty modification and range enhancement were discussed in this work as well. The projection function was introduced as a tool that manipulates the dimensionality of uncertainty ellipsoids by projecting the ellipsoid into a lower dimension to allow high dimensional data to be used in a lower dimensional context. Further, the ELD Architecture has been presented as a framework that facilitates the efficient design, implementation and maintenance, as well as the reliable operation of automation workcells. The architecture is useful for the systematic integration of multiple sensors and actuators and provides a framework in which the developed fusion mechanisms can be applied. 
The research objectives considered, and obtained to some degree, in this work include:

1. To make a contribution towards the development of automation systems that are efficiently implemented, maintainable and reliably operating.
2. To specify and implement a modular architecture that allows the intelligent integration of sensors and actuators in an industrial automation workcell environment.
3. To develop tools for fusing redundant uncertain sensor data.
4. To develop tools for fusing complementary uncertain sensor data.

The HGRF method is an extension of the GRF method [31] and is capable of fusing m redundant measurements in n-dimensional space. The uncertainty ellipsoid representation, based on the Gaussian distribution, is utilized by this method. This limits the application of this method to linear measurement spaces and to situations where a linear approximation to a non-linear measurement space is acceptable. The uncertainty of the HGRF result increases as the level of disparity of the measurements increases. This ensures a realistic estimate of the uncertainty of the data even when unexpected sensor errors occur and when a small amount of data is available, as is common with automation workcells.

The dimensional extension function is useful for combining measurements taken of the same feature in non-overlapping measurement spaces. The uncertainty modification mechanism is applicable when a sensor's measurement can be used to modify the uncertainty of another's measurement. The range enhancement mechanism is useful for combining different measurements taken in overlapping measurement spaces. Finally, the projection function is a method that projects an ellipsoid into a lower dimension.

The ELD Architecture allows systematic and efficient implementation of automation workcells across multiple hardware and software platforms. The ELD Architecture is based upon the LS and includes sensor and actuator functionality. Within the ELD the uncertainty of all sensor data is quantified and manipulated using the sensor fusion mechanisms detailed in this work. Consideration of the uncertainty of sensor data will enhance the operational reliability of industrial automation workcells.

In conclusion, the contributions that this work has made in the area of sensor fusion include:

1. The development of the HGRF method as a method that reliably fuses redundant sensor data over all ranges of measurement disparity.
2. The specification of an architecture based complementary fusion approach that utilizes dimensional extension, uncertainty modification and range enhancement mechanisms.

Additionally, the contributions that this work has made towards LS based architectures through the ELD specification include:

1. The inclusion of actuation within the ELD in a real-time control context.
2. Specifying that all data within the ELD architecture be represented as uncertain.
3. The integration of redundant and complementary fusion mechanisms within the ELD.

7.2 Recommendations

This work has provided a reliable redundant fusion mechanism in the HGRF method; however, the larger and more diverse area of complementary fusion requires further attention. Much of the work in this area has been application specific, and a sufficiently general framework for complementary fusion would benefit both further research and industrial implementation.
Investigation into implementing the fusion mechanisms presented in this work using fuzzy logic is recommended. A fuzzy logic implementation may be more industrially acceptable than the geometric uncertainty ellipsoid based methods presented herein. Ideally, tools based on both representations could be used interchangeably within the ELD Architecture. This would require developing methods to interface the two uncertainty representations. Such an interface may also be useful for future complementary fusion mechanisms that are best suited to a fuzzy implementation.

The ELD Architecture, as implemented in this work, requires many functional extensions to make it an industrially usable product. Firstly, a detailed design is required for the Planner and Knowledge Base to allow systematic implementation of high-level intelligence within the ELD. Secondly, a detailed design of the Actuator component is required to make it a useful industrial tool for low-level control. This would involve implementing controllers, tuning tools and automated controller selection mechanisms. Additionally, the ELD Architecture would have to be implemented on platforms suitable for real-time control.

Further refinement of the Architecture Builder Tool will also increase the industrial applicability of the ELD Architecture. Specifically, the implementation of a library of ELDs would increase the efficiency of implementing an ELD application. Further development of the ELD library interface in the Builder Tool to include on-the-fly ELD configuration would also be useful. For example, an Image ELD could be developed for the library that is configurable to interface with a number of frame grabbers and cameras. This ELD would also have predefined and configurable run-time GUI functionality, such as displaying the camera image on the run-time GUI. Finally, the functionality of the Builder Tool should be extended for automated multiple-platform implementation. The eventual fully functional Builder Tool would not require the developer to program any components of the ELD Architecture outside of the Builder Tool environment; every component of the Architecture would be selectable from a list of predefined functions or able to be custom programmed within the Builder Tool environment.

References

[1] M. Abidi and R. Gonzalez, eds., Data Fusion in Robotics and Machine Intelligence, Boston, Academic Press, 1992.

[2] J. Albus, 'RCS: A Reference Model Architecture for Intelligent Control,' in Computer, Vol. 25, Issue 5, pp. 56-59, IEEE, 1992.

[3] J. Albus, H. McCain, and R. Lumia, 'NASA/NBS Standard Reference Model for Telerobot Control System Architecture (NASREM),' Tech. Report 1235, Nat'l Inst. Standards and Technology, Gaithersburg, Md., 1989.

[4] Y. Altintas and N.A. Erol, 'Open Architecture Modular Tool Kit for Motion and Machining Process Control,' in Annals of CIRP, Vol. 47, No. 1, 1998.

[5] R. Brooks, 'A Robust Layered Control System for a Mobile Robot,' in IEEE Journal of Robotics and Automation, Vol. RA-2, No. 1, 1986.

[6] R. Brooks, 'Intelligence Without Representation,' in Artificial Intelligence, Vol. 47, pp. 139-159, 1991.

[7] H. Bruyninckx and J. De Schutter, 'The Geometry of Active Sensing,' submitted to Automatica, July 20, 1999.

[8] J. Budenske and M. Gini, 'Sensor Explication: Knowledge-Based Robotic Plan Execution through Logical Objects,' in IEEE Transactions on Systems, Man and Cybernetics, Vol. 27, Part B, No. 4, pp. 611-625, 1997.

[9] D. Dubois and H. Prade, 'Combination of Fuzzy Information in the Framework of Possibility Theory,' in Data Fusion in Robotics and Machine Intelligence (M. Abidi and R. Gonzalez, eds.), pp. 481-505, Boston, Academic Press, 1992.

[10] J. Elliott, Personal notes, 1995.

[11] T. Henderson and E. Shilcrat, 'Logical Sensor Systems,' in Journal of Robotic Systems, Vol. 1, No. 2, pp. 169-193, 1984.

[12] T. Henderson, C. Hansen and B. Bhanu, 'The Specification of Distributed Sensing and Control,' in Journal of Robotic Systems, Vol. 2, No. 4, pp. 387-396, 1985.

[13] T. Henderson, P. Allen, I. Cox, A. Mitiche, H. Durrant-Whyte and W. Snyder, eds., 'Workshop on Multisensor Integration in Manufacturing Automation,' Snowbird, Utah, February 4-7, 1987.

[14] G. Heppler and G. D'Eleuterio, Newton's Second Law and All That: A Classical Treatment of Mechanics, 1997.

[15] IEEE, P1451.1 Draft Standard for a Smart Transducer Interface for Sensors and Actuators - Network Capable Application Processor (NCAP) Information Model, D1.83 ed., Dec. 1996.

[16] IEEE, P1451.2 Draft Standard for a Smart Transducer Interface for Sensors and Actuators - Transducer to Microprocessor Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats, D3.05 ed., Aug. 1997.

[17] S. Iyengar, L. Prasad and H. Min, Advances in Distributed Sensor Technology, Upper Saddle River, NJ, Prentice Hall, 1995.

[18] P. Kumpel and J. Thorpe, Linear Algebra, Saunders College Publishing, Toronto, 1983.

[19] F. Lam, Architecture Builder User's Manual, Industrial Automation Laboratory, 2001.

[20] D. Langlois and E.A. Croft, 'Low-level Data Fusion for Improved System Feedback,' in Proceedings of the IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems, Aug. 2001.

[21] Chia-Hoang Lee, 'A Comparison of Two Evidential Reasoning Schemes,' in Artificial Intelligence, Vol. 35, pp. 127-134, 1988.

[22] Sukhan Lee and S. Ro, 'Uncertainty Self-Management with Perception Net Based Geometric Data Fusion,' in Proceedings of the 1997 IEEE International Conference on Robotics and Automation, pp. 2075-2081, April 1997.

[23] T. Lee, J. Richards and P. Swain, 'Probabilistic and Evidential Approaches for Multisource Data Analysis,' in IEEE Transactions on Geoscience and Remote Sensing, Vol. GE-25, No. 3, May 1987.

[24] R. Luo and M. Lin, 'Multi-Sensor Integrated Intelligent Robot for Automated Assembly,' in Proceedings of the Workshop on Spatial Reasoning and Multi-Sensor Fusion, St. Charles, IL, pp. 351-360, Oct. 1987.

[25] R. Luo and M. Kay, 'Multisensor Integration and Fusion in Intelligent Systems,' in IEEE Transactions on Systems, Man and Cybernetics, Vol. 19, No. 5, pp. 901-931, IEEE, 1989.

[26] A. Mahajan, K. Wang and P. Ray, 'Multisensor Integration and Fusion Model that Uses a Fuzzy Inference System,' in IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 2, pp. 188-196, June 2001.

[27] K. Marzullo, 'Tolerating Failures of Continuous-Valued Sensors,' in ACM Transactions on Computer Systems, Vol. 4, No. 4, pp. 284-304, 1990.

[28] R. Murphy and R. Arkin, 'SFX: An Architecture for Action-Oriented Sensor Fusion,' in Proceedings of the 1992 IEEE/RSJ International Conference on Intelligent Robots and Systems, Raleigh, NC, pp. 1079-1086, IEEE, 1992.

[29] R. Murphy, 'Dempster-Shafer Theory for Sensor Fusion in Autonomous Mobile Robots,' in IEEE Transactions on Robotics and Automation, Vol. 14, No. 2, pp. 197-206, April 1998.

[30] M. Naish and E. Croft, 'ELSA: a multisensor integration architecture for industrial grading tasks,' in Mechatronics, Vol. 10, No. 1-2, pp. 19-51, 2000.

[31] Y. Nakamura and Y. Xu, 'Geometrical Fusion Method for Multi-Sensor Robotic Systems,' in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 668-673, 1989.

[32] M. Pachter and P. Chandler, 'Challenges of Autonomous Control,' in IEEE Control Systems Magazine, Vol. 18, No. 4, pp. 92-97, 1998.

[33] J. Paunicka, B. Mendel and D. Corman, 'The OCP - An Open Middleware Solution for Embedded Systems,' in Proceedings of the American Control Conference, Arlington, VA, pp. 3445-3450, June 25-27, 2001.

[34] J. Pearson, J. Gelfand, W. Sullivan, R. Peterson and D. Spence, 'Neural Network Approach to Sensory Fusion,' in Proceedings of the SPIE Sensor Fusion, Vol. 931, C. Weaver, ed., pp. 103-108, April 1988.

[35] F. Proctor and J. Albus, 'Open Architecture Controllers,' in IEEE Spectrum, pp. 60-64, June 1997.

[36] J. Richardson and K. Marsh, 'Fusion of Multisensor Data,' in The International Journal of Robotics Research, Vol. 7, No. 6, pp. 78-96, December 1988.

[37] A. Riel, Object Oriented Design Heuristics, Addison Wesley Longman, April 1996.

[38] H. Stephanou and A. Sage, 'Perspective on Imperfect Information Processing,' in IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-17, No. 5, September/October 1987.

[39] L. Tsoukalas and R. Uhrig, Fuzzy and Neural Approaches in Engineering, Toronto, Wiley, 1997.

[40] G. Weller, F. Groen and L. Hertzberger, 'A Sensor Processing Model Incorporating Error Detection and Recovery,' in NATO ASI Series, Vol. F 63, Traditional and Non-Traditional Robotic Sensors, pp. 351-363, 1990.

[41] World Robotics 2000, United Nations and the International Federation of Robotics, Geneva (Switzerland), 2000.

[42] R. Yager and A. Kelman, 'Fusion of Fuzzy Information With Considerations for Compatibility, Partial Aggregation, and Reinforcement,' in International Journal of Approximate Reasoning, Vol. 15, pp. 93-122, 1996.

[43] Y.F. Zheng, 'Integration of Multiple Sensors into a Robotic System and its Performance Evaluation,' in IEEE Transactions on Robotics and Automation, Vol. 5, No. 5, October 1989.

Appendix A

A ELD Architecture Classes

Described herein is the ELD Architecture class structure, together with a description of the ELD classes and the functions and data they contain. The ELD Architecture has been implemented in the Microsoft Visual C++ language using Microsoft Developer Studio.

A.1 ELD Architecture Class Design

Illustrated in Figure A.1 is the ELD Architecture class design. At the top is the application class, CELDApp, which is generated by Microsoft Developer Studio. This class enables the application to run under the Microsoft Windows environment. Contained in the CELDApp class is a dialog class, CELDDlg, object. This class governs the application window functions and properties. It is within this class that the run-time GUI is programmed. ELDs can be accessed through the run-time GUI. Again, Microsoft Developer Studio automatically implements the base functionality of the CELDDlg class. Contained within the CELDDlg class is a CArchitecture object. The CArchitecture class contains all ELD Architecture functionality and data. This includes the ELDs and the connections between them. The CELD class is the base ELD class, which contains all common ELD functionality and data. Specifically implemented ELDs are derived from this CELD class and are contained in the CArchitecture class. The specific ELD classes, CELDn, contain functionality and data specific to that ELD.

Figure A.1: ELD Architecture Class Diagram. (The diagram shows the "has a" relationships CELDApp -> CELDDlg -> CArchitecture -> CELD1, CELD2, ..., CELDn, and the "is a" relationship of each specific ELD class to the CELD base class.)

A.2 Class CArchitecture Description

Below, descriptions of the CArchitecture class functions and data are given.

Public Members:

Construction and Destruction
CArchitecture()              Constructs a CArchitecture object.
virtual ~CArchitecture()     Destructs a CArchitecture object.
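To illustrate how these pieces fit together, the following is a speculative sketch of a CArchitecture constructor body for a two-ELD application. It relies on the member objects, pipe handles and private functions listed in the remainder of this section and in Section A.3; the ELD names ELD_IMAGE and ELD_LOCATOR are hypothetical placeholders for values of the ELDs enumeration, and m_ELD2 is assumed to follow the m_ELD1 naming pattern shown below.

    // Speculative sketch only: a possible CArchitecture constructor for a
    // two-ELD application. ELD_IMAGE and ELD_LOCATOR are hypothetical ELDs
    // enumeration values, not names from the implemented system.
    CArchitecture::CArchitecture()
    {
        // Open the named pipes used to talk to the run-time GUI and the DSP.
        InitializeUIPipe();
        InitializeDSPPipes();

        // Give each contained ELD its name, a description and the pipe handles.
        // Initialize() is declared in the CELD base class (Section A.3); the
        // choice of the server-side GUI handle here is an assumption.
        m_ELD1.Initialize(ELD_IMAGE, "Camera image acquisition",
                          m_pipeHandlePCToDSP, m_pipeHandleDSPToPC,
                          m_pipeHandleUIServer);
        m_ELD2.Initialize(ELD_LOCATOR, "Workpiece locator",
                          m_pipeHandlePCToDSP, m_pipeHandleDSPToPC,
                          m_pipeHandleUIServer);

        // Route the image ELD's output to the locator ELD's input.
        CreateConnection(ELD_IMAGE, ELD_LOCATOR);
    }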
Specific ELD Objects CELD1 m E L D l CELD1 m E L D l CELDn m ELDn  Pipe Handles for Communication to the Run-Time GUI H A N D L E m_pipeHandleUIServer; H A N D L E m_pipeHandleUIClient;  Pipe Handles for Communication to the DSP H A N D L E m_pipeHandlePCToDSP; H A N D L E m_pipeHandleDSPToPC;  102  Private members: DSP Pipe Functions. bool InitializeDSPPipes();  Initializes connections to DSP pipes.  bool InitializeUIPipeQ;  Initializes connections to run-time GUI pipes.  Architecure Building Functions bool CreateConnection(ELDs outputFrom, ELDs outputTo); outputFrom  Outputting E L D name.  outputTo  Receiving E L D name.  This function creates a connection between two ELDs.  A.3 Class C E L D Description Below, descriptions of the C E L D class functions and data is given. Public Members: Construction and Destruction CELD()  Constructs a C E L D object.  virtual ~CELD()  Destructs a C E L D object.  Functions for Data Access ELDs GetELDName()  Returns the ELDs name.  H A N D L E GetPipeHandlePCToDSP()  Returns the PC to DSP pipe handle.  H A N D L E GetPipeHandleDSPToPC()  Returns the DSP to PC pipe handle.  H A N D L E GetPipeHandleUI()  Returns the run-time GUI pipe handle.  int GetNumOutputs()  Returns the number of Outputs the E L D has.  int GetNumInputs()  Returns the number of Inputs the E L D has.  SConnection GetOutputConnections(int index)  Returns the index' ' output connection.  SConnection GetInputConnections(int index)  Returns the index' ' input connection.  1  1  Input/output Constructing Functions bool SetInputConnection(ELDs InputELDName, CString &pipeName) InputELDName  Name of the inputting E L D .  PipeName  Name of the pipe connecting the two ELDs.  This function configures an input connection. 103  bool SetOutputConnection(ELDs OutputELDName, CString pipeName) OutputELDName  Name of the outputting E L D .  PipeName  Name of the pipe connecting the two ELDs.  This function configures an output connection.  Initialization Functions  bool InitializetELDs ELDName, CString description, H A N D L E PCToDSP, H A N D L E DSPToPC, H A N D L E UI) ELDName  The name of the E L D .  Description  A description of the E L D ' s function.  PCToDSP  Handle of the PC to DSP pipe.  DSPToPC  Handle of the DSP to PC pipe.  UI  Handle of the run-time GUI pipe.  This function initializes an E L D ' s data. This is called by the CArchitecture constructor.  bool InitializeHWndtHWND hWnd)  Initialize the run-time GUI window handle. This is called by C E L D D l g .  Functions to be Defined in the Derived E L D classes  virtual bool InitializeSensor()  Specific E L D sensor initialization,  virtual bool InitializeActuator()  Specific E L D actuator initialization  virtual bool BeginThread()  Begins the E L D thread,  virtual bool InterpretMessage(SComm data)  Interprets a message received through a pipe.  Run-time G U I Display Functions  void DisplayTextfCString text)  Displays information on the Run-time G U I  void Display TextfCString text, double value) void Display TextfCString text, double value, CString text2, double value2)  Functions for Sending Messages V i a a Pipe  bool PipeMessage(ELDs recipient, ELDMessages message, HandShakingModes handshaking, float p i , float p 2 , . . . float pn, long dataAddress) recipient Message  E L D to send message to.  104  Message to send.  Handshaking  Handshaking mode to use.  Pn  n Parameter.  DataAddress  A memory address containing a data structure.  ,h  This function sends a message from one E L D to another.  
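As a hedged illustration of how PipeMessage might be used together with the handshaking function described next, the following fragment shows one hypothetical ELD commanding another to sense and blocking on the reply. CLocator, ELD_IMAGE and HANDSHAKE_WAIT are invented names; the number of float parameter slots is assumed to match the six data fields of the SComm structure in Section A.5.3, and the message identifiers are spelled as listed in Section A.5.1.

    // Illustrative fragment only: a member function of a hypothetical ELD class
    // derived from CELD, so PipeMessage() and WaitForReply() are inherited.
    bool CLocator::RequestImageMeasurement()
    {
        // Command the image ELD to sense, requesting a blocking handshake.
        // HANDSHAKE_WAIT is a placeholder HandShakingModes value; unused
        // parameter slots are filled with zeros and no data address is passed.
        PipeMessage(ELD_IMAGE, SENSE, HANDSHAKE_WAIT,
                    0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0);

        // Block until the image ELD reports completion, or give up after 500 ms.
        SComm reply;
        if (!WaitForReply(reply, ELD_IMAGE, DONENOERROR, 500))
            return false;   // timed out or a different message was returned

        return true;
    }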
Handshaking Functions bool WaitForReply(SComm &comm, ELDs fromELD, ELDMessages desiredMessage, int timeOut) comm.  The communication data.  FromELD  The E L D the data is from.  DesiredMessage  The message that is expected.  Timeout  How long to wait in ms.  This function waits for a reply from an E L D if one has been requested.  Protected Members: ELD Data ELDs m E L D N a m e  E L D Name.  CString mELDDescription  E L D function description.  H A N D L E m_pipeHandlePCToDSP  PC to DSP pipe handle.  H A N D L E m_pipeHandleDSPToPC  DSP to PC pipe handle.  H A N D L E m_pipeHandleUI  GUI pipe handle.  int mnumberlnputs  Number of E L D inputs.  int mnumberOutputs  Number of E L D outputs.  SConnection* m_inputConnections[]  Dynamically allocated array of connection data.  SConnection* m_outputConnections[]  Dynamically allocated array of connection data.  H W N D m_hWnd  GUI window handle.  Real-time NT/DSP Synchronization Data int m_syncFreq  Synchronization frequency.  _timeb msyncTime  Synchronization start time.  int mcontinuousOutputFlag  On/Off flag for continuous output.  Sensor Functions The overall sensor function.  bool SenseQ  105  bool SendOutput(ELDs requestor, float desiredAge, float timeout) requestor  The requesting E L D ' s name.  DesiredAge  The desired age of the data.  Timeout  The amount of time given to get the data.  This function activates the E L D ' s sensing capabilities and sends the data to the requestor.  N T / D S P Synchronization and Continuous Output Functions  bool SynchronizeO  Synchronizes the N T and DSP platforms.  bool StartContinuousInput(ELDs ELDInputToStart) ELDInputToStart  The E L D to send continuous output.  This function starts an input E L D sending continuous output.  bool EndContinuousInput()  Ends the continuous output.  bool SendContinuousOutput(ELDs sender, float syncFreq, long syncTimeDataAddress); sender  The name of the sending E L D .  SyncFreq  The frequency to send at.  SyncTimeDataAddress  The address of the starting sync time.  This function sends continuous output when requested.  bool ContinuousOutputReceived(UINT outputPtr, ELDs sender) outputPtr  A pointer to the output data.  Sender  E L D name of the sender.  Upon receipt of continuous output this function directs the action of the E L D .  Data fusion functions  bool Project2D(CEllipse ellipse, SNormal &normal, float projectionAngle) ellipse  The two-dimensional ellipsoid.  Normal  The one-dimensional ellipsoid.  ProjectionAngle  The angle at which to project.  This function projects a 2D ellipsoid into ID. (Not fully implemented in C++)  bool Redundant lDFusion(SNormal measurement 1, SNormal measurement2, SNormal &fused) 106  measurement]  The first one-dimensional ellipsoid.  measurement!  The second one-dimensional ellipsoid.  fused  The resulting one-dimensional ellipsoid.  This function redundantly fuses to one-dimensional ellipsoids using the H G R F method (Not implemented in C++)  bool Redundant2DFusion(CEllipse measurement 1, CEllipse measurement2, CEllipse &fused) measurement]  The first two-dimensional ellipsoid.  measurement!  The second two-dimensional ellipsoid.  fused  The resulting two-dimensional ellipsoid.  This function redundantly fuses to two-dimensional ellipsoids using the H G R F method (Not implemented in C++)  Functions that need to be defined in the specific E L D class derived from this class.  virtual bool WriteDatafUINT outputPtr, ELDs sender) outputPtr  The address to the output data structure.  Sender  The name of the sending E L D .  
This function maps the data received to the E L D ' s input data structure.  virtual void* GetOutput(_timeb &bestTime) bestTime  Time that the data was taken.  This function returns the output data and the time that it was taken.  virtual bool Actuate(HandShakingModes handshaking, float setPoint) handshaking  The handshaking mode requested.  SetPoint  The new set point.  This function sends a setpoint change to an actuator.  virtual bool Getlnput()  Commands to receive input.  virtual bool FuseData()  Fuses the input data.  virtual bool ProcessData()  Processes the fused data.  virtual bool Calibrate()  Calibrates the sensor.  virtual bool SendDSPOutput()  Sends output to the DSP.  virtual bool SendDSPContinuousOutput()  Sends continuous output to the DSP.  107  A.4 Class Clmage Description This class is included as an example of a specific E L D implementation. It is derived from the C E L D class and therefore inherits all C E L D functionality and data. Functions defined as virtual in the C E L D class are redefined in the derived class. Public Members: Construction and Destruction CImage()  Constructs a C E L D object.  virtual ~CImage()  Destructs a C E L D object.  E L D Input/Output Data Structures SImageSensorlnput *m_sensorInputPtr  Sensor input data variable.  SImageSensorOutput *m_sensorOutputPtr  Sensor output data variable.  SImageRequests *m_requestPtr  Request input variable.  SQuery *m_queryPtr  Query input variable.  Functions Defined Virtually in CELD bool InitializeSensorfchar *configFileName)  Initialize camera configuration file name.  bool BeginThread()  Call this to start the E L D thread.  bool InterpretMessage(SComm data)  Message interpreter.  Private Members: Functions Defined Virtually in CELD bool WriteData(UINT outputPtr, ELDs sender) outputPtr  The address to the output data structure.  Sender  The name of the sending E L D .  This function maps the data received to the E L D ' s input data structure.  bool Getlnput()  Gets the E L D sensor input,  bool FuseData()  Fuses the sensor input data,  bool ProcessData()  Processes the fused data,  bool Calibrate()  Calibrates the sensor,  void *GetOutput(_timeb &bestTime)  Gets the output data and time,  bool SendDSPOutputQ  Sends output to the DSP.  108  Data specific to this E L D  char *m_cameraConfigFile  The name of the camera configuration file.  ITXINTRID mintracq  The frame grabber interrupt ID.  pMODCNF micpmod  Frame grabber data.  Functions specific to this E L D  void DisplayImage(HWND hWnd, short XWinPos, short YWinPos, short XWinSize, short YWinSize) HWnd  The GUI window handle.  XwinPos  The x coordinate to start the display at.  YwinPos  The y coordinate to start the display at.  XwinSize  The x size of the image to display.  YWinSize  The y size of the image to display.  This function displays an image on the run-time GUI.  A.5 Structure and Data Definitions A.5.1 ELD Messages and Names The E L D messages that can be sent between ELDs are defined as an enumerated type. A list of example messages follows. The E L D names are defined as an enumerated type as well.  enum ELDMessages Commands  SENSE = 0,  Command input to sense.  CALIBRATE,  Command input to calibrate.  THRESHOLD,  Command input to threshold.  SET_POINT,  Send a setpoint change to an actuator.  DISPLAY,  Command input to display sensor input.  SENDOUTPUT,  Command input to send output.  SEND_CONTINUOUS_OUTPUT,  Command input to send continuous output.  START_CONTINUOUS_INPUT,  Command to start sending continuous output.  
ENDCONTINUOUSJNPUT,  Command to end sending continuous output.  SYNCHRONIZETIME,  Command synchronization to occur.  START_FUSION,  Command input to start fusion.  INITIALIZE,  Command input to initialize itself. 109  STATUS  Command information about input's status.  Requests QUERY_MANIPULATOR_X_POSITION_GT,  Query if manipulator's position is greater than x.  REQUESTMANIPULATORXMOTION,  Request the manipulator to move in x.  Replies DONENOERROR,  Reply that there was no error.  DONEERROR,  Reply that there was an error.  CONTINUOUSOUTPUT,  Reply that this is continuous output.  NOT_DONE,  Reply that command not completed.  OUTPUT,  Reply that this is output data.  REPLYQUERY,  Reply that this is the reply to a query.  IDLE,  Reply that the E L D is idle.  Errors UNKNOWN_ELD_NAME,  Error reply.  UNKNOWNMESSAGE,  Error reply.  MES S A G E I G N O R E D  Error reply.  TIMEOUT,  Error reply.  A.5.2  Sensor Input/Output Structures  Each E L D has an Input and Output Structure that is defined individually. A n example of the Cimage E L D output structure is included below.  struct SImageSensorOutput { B Y T E *imageRgbPtr;  R G B format image data.  B Y T E *imageYcrcbPtr;  YcrCb format image data.  W O R D xSize;  Horizontal size of image.  W O R D ySize;  Vertical size of image.  I T X D E P T H pixelSize;  Pixel data size.  float certaintyRed;  Uncertainty of the red channel.  float certaintyGreen;  Uncertainty of the green channel.  float certaintyBlue;  Uncertainty of the blue channel. Time that image was taken.  timeb *imageTimePtr; };  110  A.5.3 Communication Structure Messages in the E L D Architecture are passed through N T pipes and therefore must be packaged into structure. The structure that is used is given below.  struct SComm { ELDs sender;  Sending E L D name.  ELDs recipient;  Receiving E L D name.  ELDMessages messages;  Message sent.  HandShakingModes handshaking;  Handshaking mode requested.  float p 1;  Data.  float p2;  Data.  float p3;  Data.  float p4;  Data.  float p5;  Data.  float p6;  Data.  long dataAddress;  Input/Output data structure address.  };  Ill  Appendix B B Matlab Redundant Fusion Algorithms B.1 Redundant Fusion Functions The following redundant fusion functions have been written in Matlab. They accept input and provide output that is expressed in the same reference frame. There are three functions included below for three different measurement cases. Implementation of additional measurement cases requires modifying one of the following functions.  The redundant fusion examples displayed in Chapter 5 can be generated by  using the test suites included below.  B.1.1 Two Measurement, One-Dimensional Redundant Fusion Function The following Matlab program was used to generate the simulation results shown in Chapter 5. % T h i s f u c t i o n f u s e s two one-dimensional measurements based on Jason E l l i o t t ' s method % ( c o p y r i g h t Sept 2001) %  % I n p u t s : Meanl, V a r l , Mean2, V a r 2 . %  % Outputs: FusedMean, FusedVar : ' • % • '. % The Inputs and Output are e x p r e s s e d i n the same frame o f r e f e r e n c e as d e t e r m i n e d by , % the i n p u t s . 
function  [FusedMean, FusedVar] = Fusion_2M_lD(Meanl, V a r l , Mean2, Var2)  %******+************+*******+************+*****+***+********^  % Nakamura F u s i o n , VarNaka, and MeanNaka (the f u s e d mean) %****************************************^ % From Nakamura's method, determine the f u s e d c e r t a i n t y e l l i p s e assuming no s e p a r a t i o n  112  % o f means. To determine the Q m a t r i x the E l l i p s e s must be i n v a r i a n c e form (squared) '. % and e x p r e s s e d i n Frame w. QFus = [ V a r l 0; 0 V a r 2 ] ; SumJQJt = i n v ( V a r l ) + i n v ( V a r 2 ) ; Wl = i n v ( S u m J Q J t ) * i n v ( V a r l ) ; W2 = i n v ( S u m J Q J t ) * i n v ( V a r 2 ) ; W = [Wl W2]; VarNaka_w = W*QFus*W; % F i n d the f u s e d mean a c c o r d i n g t o Nakamura FusedMean = Wl*Meanl + W2*Mean2;  % Measurement  Spacing V e c t o r s , v i  vl_w = i n v ( V a r l ^ O . 5 ) * ( F u s e d M e a n - Meanl); v2_w = inv(Var2~0.5)*(FusedMean - Mean2);  ^************************************************************************************* % Determine the S e p a r a t i o n V e c t o r , K, from the v i ' s %************************************************************************************* % S e a r c h f o r the v i w i t h the l a r g e s t magnitude. K_w = a b s ( m a x ( a b s ( v l _ w ) , a b s ( v 2 _ w ) ) ) ; o , * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *  * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *  % C e r t a i n t y Scaling Matrix, C %************************************************************************************* % A p p l y the c e r t a i n t y s c a l i n g f u n c t i o n ,  fc(K),  t o the S e p a r a t i o n Parameter, K.  % The number o f measurements b e i n g f u s e d . m=2; % A p p l y the c e r t a i n t y s c a l i n g f u n c t i o n , i f K_w < 1 C_w = 1.+(-11.+7.*sqrt(m))*K_w^2+(18.-7.*sqrt(m))*K_w*3+(-8.+2.*sqrt(ra))*K_wM; else C_w = sqrt(m)*(K_w + 1 ) ; end  .•' ,'  %************************************************************************************* % Fused E l l i p s e V a r i a n c e , SigmaFus_w  (Scale SigmaNaka_w)  % A p p l y the c e r t a i n t y s c a l i n g , C_s, t o get the f u s e d FusedVar = C_w*VarNaka_w*(C_w)';  result.  B.1.2 Two Measurement, Two-Dimensional Redundant Fusion Function The following Matlab program was used to generate the simulation results shown in Chapter 5. % T h i s f u n c t i o n f u s e s two two-dimensional measurements based on Jason E l l i o t t ' s method % ( c o p y r i g h t Sept 2001) % % I n p u t s : Meanl = [Meanl_x, Meanl_y], % V a r l = [ S i g m a l _ l l , Sigmal_12; Sigmal_21, Sigmal_22], % Mean2 = [Mean2_x, Mean2_y], % Var2 = [ S i g m a 2 _ l l , Sigma2_12; Sigma2_21, Sigma2_22] %  % Outputs: FusedMean = [FusedMean_x, FusedMean_y], % FusedVar = [FusedSigmall, FusedSigmal2; FusedSigma21, FusedSigma22] % NakaVar = [NakaSigmall, NakaSigmal2; NakaSigma21, NakaSigma22] % % The Inputs and Output a r e e x p r e s s e d i n the same frame o f r e f e r e n c e as d e t e r m i n e d by % the i n p u t s . 
113  function  [FusedMean, FusedVar, NakaVar] = Fusion_2M_2D(Meanl, V a r l ,  Mean2, Var2)  ^**************************************^ % Rotation Matrices,  i _ R o t _ j (the r o t a t i o n from Frame j t o Frame i )  [w_Rot_el, V a r l _ e l , notNeeded] = s v d ( V a r l ) ; [w_Rot_e2, Var2_e2, notNeeded] = s v d ( V a r 2 ) ; %********+****************+*************^  % Nakamura F u s i o n , VarNaka, and MeanNaka  (the f u s e d mean)  % From Nakamura's method, determine t h e f u s e d c e r t a i n t y e l l i p s e assuming no s e p a r a t i o n % o f means. To determine t h e Q m a t r i x ' t h e E l l i p s e s must be i n v a r i a n c e form (squared) % and e x p r e s s e d i n Frame w. QFus = [ V a r l z e r o s ( 2 , 2 ) ; zeros(2,2) V a r 2 ] ; SumJQJt = i n v ( V a r l ) + i n v ( V a r 2 ) ; WI = i n v ( S u m J Q J t ) * i n v ( V a r l ) ; W2 = i n v ( S u m J Q J t ) * i n v ( V a r 2 ) ; ' ' W = [WI W2] ; VarNaka_w = W*QFus*W'; [w_Rot_n, VarNaka_n, NakaVarAxesV] = svd(VarNaka_w); NakaVar = VarNaka_w; % T h i s i s Nakamura's u n c e r t a i n t y m a t r i x d e f i n e d i n t h e axes o f the p r i n c i p l e % components o f t h e e l l i p s e , Frame n. % F i n d t h e f u s e d mean a c c o r d i n g t o Nakamura FusedMean = Wl*Meanl + W2*Mean2; %***+*************************+********^ % Measurement Spacing vl_w v2_w  Vectors, v i  = inv(Varl 0.5)*(FusedMean - Meanl); = i n v ( V a r 2 0 . 5 ) * ( F u s e d M e a n - Mean2); A  A  ^****************+***+********** % Rotation Matrices,  w_R_vi  % Determine t h e R o t a t i o n M a t r i x from the world frame t o t h e S c a l i n g Frame ( v i ) % The f i r s t a x i s o f t h e S c a l i n g Frame, v i , i s a l o n g the v e c t o r v i . % For v l i f v l _ w ( l ) ~= 0 & vl_w(2) ~= 0 v l S c a l i n g A x i s l = vl_w/sqrt(vl_w(1) 2 + vl_w(2) 2); % The second a x i s o f t h e S c a l i n g Frame, v l , i s p e r p e n d i c u l a r t o v e c t o r v l . vlScalingAxis2 = [-vlScalingAxisl(2); vlScalingAxisl(1)]; else v l S c a l i n g A x i s l = [1; 0 ] ; % The second a x i s o f the S c a l i n g Frame, v l , i s p e r p e n d i c u l a r t o v e c t o r v l . v l S c a l i n g A x i s 2 = [0; 1 ] ; end A  A  w_Rot_vl = [ v l S c a l i n g A x i s l v l S c a l i n g A x i s 2 ] ; % For v2 if  v2_w(l) ~= 0 & v2_w(2) ~= 0 v 2 S c a l i n g A x i s l = v2_w/sqrt(v2_w(1) 2 + v 2 _ w ( 2 ) 2 ) ; % The second a x i s o f t h e S c a l i n g Frame, v2, i s p e r p e n d i c u l a r t o v e c t o r v2. v2ScalingAxis2 = [-v2ScalingAxisl(2); v 2 S c a l i n g A x i s l ( 1 ) ] ; else v 2 S c a l i n g A x i s l = [1; 0 ] ; % The second a x i s o f the S c a l i n g Frame, v l , i s p e r p e n d i c u l a r t o v e c t o r v l . v end 2 S c a l i n g A x i s 2 = [0; 1 ] ; A  A  114  w_Rot_v2 =  [v2ScalingAxisl v2ScalingAxis2];  .  -  ,  % T r a n s f o r m the S e p a r a t i o n Parameters t o the S c a l i n g Frames ( v i ) v l _ v l = w_Rot_vl'*vl_w; v2_v2 = w_Rot_v2'*v2_w; I* ************************** * * * ************** * * * ************** * * * * ******************** % Determine the S e p a r a t i o n V e c t o r , K, from the v i ' s %************************************************************************************* % S e a r c h f o r the v i w i t h the l a r g e s t magnitude, i f v l _ v l ( l ) >= v 2 _ v 2 ( l ) v L a r g e = 1; else v L a r g e = 2; end i f v L a r g e == 1 % Determine the l a r g e s t second component by t r a n s f o r m i n g v2 and v3 i n t o frame v l . 
v 2 _ v l = w_Rot_vl'*w_Rot_v2*v2_v2; w_Rot_s = w_Rot_vl; K= [ v l _ v l ( l ) ; v2_vl(2)]; else % Determine the l a r g e s t second component by t r a n s f o r m i n g v l and v3 i n t o frame v2. v l _ v 2 = w_Rot_v2 ' *w_Rot_vl*vl_vl.; w_Rot_s = w_Rot_v2; K= [v2_v2(l); vl_v2(2)]; end ^************************************************************************************* % C e r t a i n t y Scaling Matrix, C %************************************************************************************* % A p p l y t h e c e r t a i n t y s c a l i n g f u n c t i o n , f c ( K ) , t o the S e p a r a t i o n Parameter, K, i n t h e % S c a l i n g Frame, ( s ) , which i s the frame o f v L a r g e . % D e f i n e t h e C e r t a i n t y S c a l i n g M a t r i x , C, i n Frame s. C_s = z e r o s (2,2),• % The s i g n o f K i s not important as i t s c a l e s the m a t r i x so take the a b s o l u t e v a l u e . A % n e g a t i v e s c a l i n g i s a f l i p but the e l l i p s e i s s y m e t r i c a l so a f l i p changes nothing." Kpos = a b s ( K ) ; % The number o f measurements b e i n g f u s e d . m=2 ; % A p p l y the c e r t a i n t y s c a l i n g f u n c t i o n , f o r i = 1:2 i f Kpos (i) < 1 C _ s ( i , i ) = 1.+(-11.+7.*sqrt(m))*Kpos(i) 2+(18.-7.*sqrt(m))*Kpos(i) 3+ (-8.+2.*sqrt(m))*Kpos(i) 4; else C _ s ( i , i ) = sqrt(m)*(Kpos(i) + 1); end end A  A  A  ^************************************************************************************* % Fused E l l i p s e V a r i a n c e , FusedVar_w (Scale NakaVar_w) %************************************************************************************* % Determine Nakamura's c e r t a i n t y e l l i p s e e x p r e s s e d i n Frame k. VarNaka_s = w_Rot_s'*VarNaka_w*w_Rot_s; % A p p l y the c e r t a i n t y s c a l i n g , C_s, t o get the f u s e d r e s u l t . FusedVar = w Rot s*C s*VarNaka s*(C s)'*w Rot s';  115  B.1.3 Three Measurement, Two-Dimensional Redundant Fusion Function The following Matlab program was used to generate the simulation results shown in Chapter 5. % T h i s f u n c t i o n f u s e s t h r e e two-dimensional measurements based on Jason % E l l i o t t ' s method ( c o p y r i g h t Sept 2001) Meanl = Varl = Mean2 = Var2 = Mean3 = Var3 =  [Meanl x, Meanl y' i [Sigmal _ H Sigmal 12; Sigmal [Mean2" x, Mean2 y; t [Sigma2 _11 Sigma2 Sigma2 [Mean3 x, Mean3 y; r [Sigma3 _11 Sigma3 12; Sigma3  21,  Sigmal 22  21,  Sigma2 22  21,  Sigma3 22  % Outputs: FusedMean = [FusedMean_x, FusedMean_yJ, . % FusedVar = [FusedSigmall, FusedSigmal2; FusedSigma21, FusedSigma22] % % The Inputs and Output a r e e x p r e s s e d i n the same frame o f r e f e r e n c e as d e t e r m i n e d by % the inputs. f u n c t i o n [FusedMean, FusedVar, NakaVar] = Fusion_3M_2D(Meanl, V a r l , Mean2, Var2, Mean3, Var3)  % R o t a t i o n M a t r i c e s , i _ R o t _ j (the r o t a t i o n from Frame j t o Frame i ) %***+*****************+********************  •  [w_Rot_el, V a r l _ e l , notNeeded] = s v d ( V a r l ) ; . [w_Rot_e2, Var2_e2, notNeeded] = s v d ( V a r 2 ) ; [w_Rot_e3, Var2_e3, notNeeded] = s v d ( V a r 3 ) ; ^**********************+**********************+*  % Nakamura F u s i o n , VarNaka, and MeanNaka (the f u s e d mean) %***************************************^ % From Nakamura's method, determine t h e f u s e d c e r t a i n t y e l l i p s e assuming no s e p a r a t i o n % o f means. 
To determine the Q m a t r i x t h e E l l i p s e s must be i n v a r i a n c e form (squared) % and e x p r e s s e d i n Frame w. QFus = [ V a r l zeros(2,2) z e r o s ( 2 , 2 ) ; zeros(2,2) V a r 2 zeros (2,2); z e r o s ( 2 , 2 ) z e r o s ( 2 , 2 ) Var3]; SumJQJt = i n v ( V a r l ) + i n v ( V a r 2 ) + i n v ( V a r 3 ) ; Wl = i n v ( S u m J Q J t ) * i n v ( V a r l ) ; W2 = i n v ( S u m J Q J t ) * i n v ( V a r 2 ) ; W3 = i n v ( S u m J Q J t ) * i n v ( V a r 3 ) ; W = [Wl W2 W3]; VarNaka_w = W*QFus*W'; [w_Rot_n, VarNaka_n, NakaVarAxesV] = svd(VarNaka_w); NakaVar = VarNaka_w; % T h i s i s Nakamura's u n c e r t a i n t y m a t r i x d e f i n e d i n t h e axes o f t h e p r i n c i p l e % components o f t h e e l l i p s e , Frame n. % F i n d t h e f u s e d mean a c c o r d i n g t o Nakamura FusedMean = Wl*Meanl + W2*Mean2 + W3*Mean3;  -  % Measurement Spacing V e c t o r s , v i  %+*****+******************+************************+***  ^*.*  vl_w = i n v ( V a r l 0 . 5 ) * ( F u s e d M e a n - Meanl); v2_w = i n v (Var2 0. 5) * (FusedMean - Mean2); v3_w = i n v (VarS-'O. 5) * (FusedMean - Mean3) ; A  /S  ^**********+***********+************+********+*********  % R o t a t i o n M a t r i c e s , w_R_vi  . .  116  ,  ;  % Determine the R o t a t i o n M a t r i x from the w o r l d frame t o the S c a l i n g Frame ( v i ) % The f i r s t a x i s o f the S c a l i n g Frame, v i , i s a l o n g the v e c t o r v i . % For v l i f v l _ w ( l ) ~= 0 & vl_w(2) ~= 0 vlScalingAxisl = vl_w/sqrt(vl_w(1) 2 + vl_w(2) 2); % The second a x i s o f the S c a l i n g Frame, v l , i s p e r p e n d i c u l a r vlScalingAxis2 = [-vlScalingAxisl(2); vlScalingAxisl(1)]; else v l S c a l i n g A x i s l = [1; 0] ; % The second a x i s o f the S c a l i n g Frame, v l , i s p e r p e n d i c u l a r v l S c a l i n g A x i s 2 = [0; 1 ] ; end A  w_Rot_vl =  A  to vector v l .  [vlScalingAxisl vlScalingAxis2];  % For v2 i f v2_w(l) ~= 0 & v2_w(2) ~= 0 v 2 S c a l i n g A x i s l = v2_w/sqrt(v2_w(1) 2 + v 2 _ w ( 2 ) 2 ) ; % The second a x i s o f the S c a l i n g Frame, v2, i s p e r p e n d i c u l a r v2ScalingAxis2 = [-v2ScalingAxisl(2); v2ScalingAxisl(1)]; else v 2 S c a l i n g A x i s l = [1; 0 ] ; % The second a x i s o f the S c a l i n g Frame, v l , i s p e r p e n d i c u l a r v 2 S c a l i n g A x i s 2 = [0; 1 ] ; end A  w_Rot_v2 =  to vector v l .  [v2ScalingAxisl  A  to vector  v2.  to vector v l .  v2ScalingAxis2];  % For v3 i f v3_w(l) ~= 0 & v3_w(2) ~= 0 v 3 S c a l i n g A x i s l = v3_w/sqrt(v3_w(1) 2 + v 3 _ w ( 2 ) 2 ) ; % The second a x i s o f the S c a l i n g Frame, v3, i s p e r p e n d i c u l a r t o v e c t o r v3. v3ScalingAxis2 = [-v3ScalingAxisl(2); v3ScalingAxisl(1)]; else v 3 S c a l i n g A x i s l = [1; 01; % The second a x i s o f the S c a l i n g Frame, v l , i s p e r p e n d i c u l a r t o v e c t o r v l . v 3 S c a l i n g A x i s 2 = [0; 1 ] ; end w_Rot_v3 = [ v 3 S c a l i n g A x i s l v3ScalingAxis2]; % T r a n s f o r m the S e p a r a t i o n Parameters t o the S c a l i n g Frames ( v i ) v l _ v l = w_Rot_vl'*vl_w; v2_v2 = w_Rot_v2'*v2_w; v3_v3 = w_Rot_v3'*v3_w; %******************************************^ A  % Determine the S e p a r a t i o n  V e c t o r , K,  A  from the v i ' s  % Search f o r the v i w i t h the l a r g e s t magnitude ( c a l l e d v L a r g e ) . 
i f ( v l _ v l ( l ) >=v2_v2(l)) & ( v l _ v l ( l ) >=v3_v3(l)) vLarge = 1; e l s e i f (v2_v2(l) > = v l _ v l ( l ) ) & (v2_v2(l) > = v 3 _ v 3 ( l ) ) vLarge = 2; else v L a r g e = 3; end i f vLarge == 1 % Determine the l a r g e s t second component by t r a n s f o r m i n g v2_vl = w_Rot_vl'*w_Rot_v2*v2_v2; v3_vl = w_Rot_vl'*w_Rot_v3*v3_v3; w_Rot_s = w_Rot_vl; i f abs(v2_vl(2)) > abs(v3_vl(2)) K = [ v l v l (1) ; v2 v l (2) ] ; 117  v2 and v3 i n t o  frame  else K = [ v l _ v l ( l ) ; v3_vl(2)]; end e l s e i f vLarge == 2 % Determine the largest second component by transforming v l and v3 into frame v2. vl_v2 = w_Rot_v2'*w_Rot_vl*vl_vl; v3_v2 = w_Rot_v2'*w_Rot_v3*v3_v3; w_Rot_s = w_Rot_v2; i f abs(vl_v2(2)) > abs(v3_v2(2)) K = [v2_v2(l); vl_v2(2)]; else K = [v2_v2(l); v3_v2(2)]; end else % Determine the largest second component by transforming v l and v2 into frame v3. vl_v3 = w_Rot_v3'*w_Rot_vl*vl_vl; v2_v3 = w_Rot_v3'*w_Rot_v2*v2jv2; w_Rot_s = w_Rot_v3; i f abs(vl_v3(2)) > abs(v2_v3(2)) ... K = [v3_v3(l); vl_v3(2)]; else K = [v3_v3(l); v2_v3(2)]; end end % Certainty Scaling Matrix, C %*********************************^ % Apply the certainty scaling function, fc(K),'to the Separation Parameter, K, i n the % Scaling Frame, (s), which i s the frame of vLarge. % Define the Certainty Scaling Matrix, C, i n Frame k. C_s = zeros (2,2); % The sign of K i s not important as i t scales the matrix so take the absolute value. A % negative scaling i s a f l i p but the e l l i p s e i s symetrical so a f l i p changes nothing. Kpos = abs(K); % The number of measurements being fused. m=3 ; % Apply the certainty scaling function, for i = 1:2 i f Kpos(i) < 1 C _ s ( i , i ) = 1.+(-11.+7.*sqrt(m))*Kpos(i) 2+(18.-7.*sqrt(m))*Kpos(i) 3+ (-8.+2.*sqrt(m))*Kpos(i) 4; else C _ s ( i , i ) = sqrt(m)*(Kpos(i) + 1); end end A  A  A  **********+*+*+************+*********+*********^  % Fused E l l i p s e Variance, SigmaE'us_w (Scale SigmaNaka_w) %*********************************+*********+*******^ % Determine Nakamura's certainty e l l i p s e expressed i n Frame s. VarNaka_s = w_Rot_s'*VarNaka_w*w_Rot_s; % Apply the certainty scaling, C_s, to get the fused result. FusedVar = w Rot s*C s*VarNaka s*(C s)'*w Rot s';  118  ' .  B.2 Test Suites B.2.1  Two Measurement, One-Dimensional Redundant Fusion Test Suite  The following is the Matlab code that was used to generate the two one-dimensional measurement redundant fusion simulation examples in Chapter 5. % T h i s f i l e c o n t a i n s a t e s t s u i t e f o r f u s i n g two o n e - d i m e n s i o n a l measurements. % C o p y r i g h t Sept 2001 % W r i t t e n by Jason E l l i o t t clear; % S p e c i f y the means and v a r i a n c e s o f the measurements ([meanl; mean2], [ v a r l ; v a r 2 ] ) . Mean = [ [ 5 ; 5 ] , [5;5.5], [5;6], [5;6.5], [5;7], [5;10], [5;8], [5;8], [5;8], [ 5 ; 8 ] ] ; Var = [ [ 1 ; 1 ] , [1,-1], [1;1], [1;1], [1;1], [1;1], [1;1], [1;2], [1;4], [ 1 ; 1 0 ] ] ; % Fuse the measurements, for i=l:length(Mean(1,:)) [FusedMean(i), F u s e d V a r ( i ) ] = Fusion_2M_lD(Mean(1,i), V a r ( l , i ) , Var(2,i)); NakaVar(i) = 1 / ( ( 1 / V a r ( 1 , i ) ) + ( 1 / V a r ( 2 , i ) ) ) ; end  Mean ( 2 , i ) ,  % P l o t the r e s u l t s , x = [0: .1:17] ; for i=l:length(Mean(1,:)) figure; h o l d on; measl = (1/sqrt(2*3.14*Var(1,i)))*exp(-((x-Mean(1,i)). 2/(2*(Var(1,i))))); meas2 = (1/sqrt(2*3.14*Var(2,i)))*exp(-((x-Mean(2,i))."2/(2*(Var(2,i))))); fus = (1/sqrt(2*3.14*FusedVar(i)))*exp(-(x-FusedMean(i)). 
2/(2*FusedVar(i))); Naka = (1/sqrt(2*3.14*NakaVar(i)))*exp(-(x-FusedMean(i))."2/(2*NakaVar(i))); p l o t ( x , measl, ' k — ' ) ; p l o t ( x , meas2, ' k — ' ) ; plot(x, fus, 'k-'); p l o t ( x , Naka, ' k : ' ) ; hold o f f ; end  : '  A  A  B.2.2 Two Measurement, Two-Dimensional Redundant Fusion Test Suite The following is the Matlab code that was used to generate the two two-dimensional measurement redundant fusion simulation examples in Chapter 5. % T h i s f i l e c o n t a i n s a t e s t s u i t e f o r f u s i n g two two-dimensional measurements. % C o p y r i g h t Sept 2001 % W r i t t e n by Jason E l l i o t t clear; % S p e c i f y the means and v a r i a n c e s o f the measurements ([mean_x; mean_y], % [var_x, v a r _ x y ; var_yx, v a r _ y ] ) . Meanl = [ [ 5 ; 5 ] , [5;5], [5;5], [5;5], [5;5], [5;5], [5;5], [5;5], [5;5], [5;5], [ 5 ; 5 ] ] ; Mean2 = [ [ 5 ; 5 ] , [6;5], [7;5], [10;5], [5.707;5.707], [ 6.414;6.414], [5+12.5"0.5; 5+12.5 0.5], [7;4], [6.5;7], [8;9], [ 5 ; 5 . 5 ] ] ; V a r l = [[1 0;0 1 ] , [1 0;0 1], [1 0;0 1], [1 0;0 1 ] , [1 0;0 1 ] , [1 0;0 1], [1 0;0 1 ] , [1 1; 1 9 ] , [4 1; 1 1 ] , [9 1. 5; 1. 5 ,1 ] , [9 1. 5; 1. 5 1 ] ] ; Var2 = [[1 0;0 1 ] , [1 0;0 1], [1 0;0 1 ] , [1 0;0 1 ] , [1 0;0 1 ] , [1 0;0 1 ] , [1 0;0 1 ] , [1 0.1; 0.1 4 ] , [1 1;1 8 ] , [1 1;1 4 ] , [1 1;1 4 ] ] ; A  119  % Fuse the measurements, for i=l:length(Meanl(1,:)) [FusedMean(1:2,i), FusedVar(1:2, ( 2 * i ) - 1 : 2 * i ) , NakaVar(1:2, ( 2 * i ) - 1 : 2 * i ) ] = Fusion_2M_2D(Meanl(1:2,i), V a r l ( 1 : 2 , ( 2 * i ) - l : 2 * i ) , Mean2(1:2,i), Var2(1:2, ( 2 * i ) - l : 2 * i ) ) ; end % Plot the r e s u l t s . f o r i = l : l e n g t h ( M e a n l ( 1 , :) ) % Determine the p r i n c i p l e axes f o r p l o t t i n g purposes [w_Rot_el, V a r l _ e l , notneeded] = svd(Varl(1:2,(2*i)-1:2*i)); [w_Rot_e2, Var2_e2, notneeded] = svd(Var2(1:2,(2 + i ) - 1 : 2 * i ) ) ; [w_Rot_f, FusedVar_f, notneeded] = s v d ( F u s e d V a r ( 1 : 2 , ( 2 * i ) - 1 : 2 * i ) ) ; [w_Rot_n, NakaVar_n, notneeded] = s v d ( N a k a V a r ( 1 : 2 , ( 2 * i ) - 1 : 2 * i ) ) ;  0 . * * * * * * * * * * * * * * * * * *  +  * * * * * * * * * * * * * ^  % P l o t t i n g t h e E l l i p s e ' s means and p r i n c i p l e %***************+*+***************^  axes  figure; h o l d on; % NOTE: The p l o t t i n g o f a l l e l l i p s e s are not shown here. The o t h e r s a r e p l o t t e d % similarily. % P l o t t h e c e n t e r o f Nakamura's e l l i p s e p l o t ( F u s e d M e a n ( 1 , i ) , FusedMean(2,i), 'kh'); % P l o t t h e e l i p s e p r i n c i p l e a x i s r e s u l t i n g from the s v d f o r t h e f u s e d r e s u l t , plot((FusedMean(1,i)+FusedVar_f(1,1) 0.5*w_Rot_f(1,1)), (FusedMean(2,i)+FusedVar_f(1,1) 0.5*w_Rot_f(2,1)),'ko'); plot((FusedMean(1,i)-FusedVar_f(1,1) 0.5*w_Rot_f(1,1)), (FusedMean(2,i)-FusedVar_f(1,1) 0.5*w_Rot_f(2, 1)) , 'ko ' ) ; plot((FusedMean(1,i)+FusedVar_f(2,2) 0.5*w_Rot_f(1,2)), (FusedMean(2,i)+FusedVar_f(2,2) 0 .5*w_Rot_f(2, 2 ) ) , ' k o ' ) ; plot((FusedMean(1,i)-FusedVar_f(2,2) 0.5*w_Rot_f(1,2)), ( F u s e d M e a n ( 2 , i ) - F u s e d V a r _ f ( 2 , 2 ) 0 . 
5*w__Rot_f (2, 2 ) ) , ' k o ' ) ; A  A  A  A  A  A  A  A  ^*********************************^ % Plot  t h e e l l i p s e curves  % P l o t t i n g t h e F i n a l Fused e l l i p s e i n the xy p l a n w i t h a r o t a t i o n % Determine t h e angle o f r o t a t i o n x2 = [FusedMean(1,i)-FusedVar_f(1,1) 0.5:.05:FusedMean(1, i)+FusedVar_f(1,1) 0.5] i f (w_Rot_f(2,1)>=0 & w_Rot_f(1,1)>=0) RotAngle = a t a n (w_Rot_f (2, 1)/w_Rot__f (1, 1) ) ; e l s e i f (w_Rot_f(2,1)>=0 & w_Rot_f(1,1)<=0) RotAngle = pi+atan(w_Rot_f(2,1)/w_Rot_f(1,1)); e l s e i f (w_Rot_f(2,1)<=0 & w_Rot_f(1,1)<=0) RotAngle = pi+atan(w_Rot_f(2,1)/w_Rot_f(1,1)); else RotAngle = atan(w_Rot_f(2,1)/w_Rot_f(1,1)); end RotAngle = -RotAngle; Si = sin(RotAngle); C = cos(RotAngle); a = FusedVar_f(1,1) 0.5; b = FusedVar_f(2,2) 0.5; X= FusedMean(1,i); Y= FusedMean(2,i); A  A  A  A  % From Maple c a l c u l a t i o n y P o s i t i v e = (-Si*b 2*C*X+C*a 2*Si*X+a 2*Y*C 2+b 2*Y-b 2*Y*C 2C*a 2*Si*x2+Si*b 2*C*x2-(b 2*a 2*(a 2*C 2+2*X*x2-X 2-x2. 2+b 2A  A  A  A  A  A  A  A  A  120  A  A  A  A  A  A  A  b 2 * C 2 ) ) . (1/2) ) / ( b 2 - b 2 * C 2 + a 2 * C 2 ) ; y N e g a t i v e = (-Si*b 2*C*X+C*a 2*Si*X+a 2*Y*C 2+b 2*Y-b 2*Y*C 2C*a 2*Si*x2+Si*b 2*C*x2+(b 2*a 2*(a 2*C 2+2*X*x2-X 2-x2. 2+b 2b 2*C 2)) . (1/2))/(b 2-b 2*C 2+a 2*C 2); A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  % Plot the e l l i p s e plot(x2, yPositive, p l o t ( x 2 , yNegative,  A  A  A  A  A  A  A  A  A  A  A  A  A  A  'k-'); 'k-');  hold o f f ; end  B.2.3 Three Measurement, Two-Dimensional Redundant Fusion Test Suite The following is the Matlab code that was used to generate the three two-dimensional measurement redundant fusion simulation examples in Chapter 5. % W r i t t e n by Jason E l l i o t t C o p y r i g h t Oct 2001 clear; % S p e c i f y t h e means and v a r i a n c e s o f the measurements ([mean_x; % [var_x, var_xy; var_yx, v a r _ y ] ) . Meanl = [ [ 5 ; 5 ] , [3;3], [5;5], [ 5 ; 5 ] ] ; Mean2 = [[4.5;5.1], [6;5.5], [7;5], [5.4;5.2]]; Mean3 = [[5.5;5.5], [6.3;5], [5;7], [4.9;5.7]]; V a r l = [[1 0;0 1 ] , [1 0;0 1 ] , [1 0;0 1 ] , [1 1;1 6 ] ] ; Var2 = [[1 0;0 1 ] , [2 1.3/1.3 3 ] , [1 0;0 1 ] , [3 .7;.7 1.5]]; Var3 = [[1 0;0 1 ] , [1 0;0 9 ] , [1 0;0 1 ] , [4 0;0 2 ] ] ;  mean_y],  % Fuse t h e measurements, for i=l:length(Meanl(1,:)) [FusedMean(1:2,i), FusedVar(1:2, ( 2 * i ) - 1 : 2 * i ) , NakaVar(1:2, ( 2 * i ) - 1 : 2 * i ) ] = Fusion_3M_2D(Meanl(1:2,i), V a r l ( 1 : 2 , ( 2 * i ) - 1 : 2 * i ) , Mean2(1:2,i), Var2 (1:2, ( 2 * i ) - l : 2 * i ) , Mean3(1:2,i), Var3(1:2, ( 2 * i ) - 1 : 2 * i ) ) ; end % Plot the r e s u l t s . 
for i=l:length(Meanl(1,:)) % Determine t h e p r i n c i p l e axes f o r p l o t t i n g purposes [w_Rot_el, V a r l _ e l , notneeded] = s v d ( V a r l ( 1 : 2 , ( 2 * i ) - 1 : 2 * i ) ) ; [w_Rot_e2, Var2_e2, notneeded] = s v d ( V a r 2 ( 1 : 2 , ( 2 * i ) - 1 : 2 * i ) ) ; [w_Rot_e3, Var3_e3, notneeded] = s v d ( V a r 3 ( 1 : 2 , ( 2 * i ) - 1 : 2 * i ) ) ; [w_Rot_f, FusedVar_f, notneeded] = svd(FusedVar(1:2, ( 2 * i ) - 1 : 2 * i ) ) ; [w_Rot_n, NakaVar_n, notneeded] = s v d ( N a k a V a r ( 1 : 2 , ( 2 * i ) - 1 : 2 * i ) ) ; * * ****************  % P l o t t i n g t h e E l l i p s e ' s means and p r i n c i p l e axes %************************************************************************* figure; h o l d on; % Plot the center of e l l i p s e l p l o t ( M e a n l ( 1 , i ) , M e a n l ( 2 , i ) , 'k+'); % P l o t e l l i p s e l ' s p r i n c i p l e axes plot((Meanl(1,i)+Varl_el(1,1) 0.5*w_Rot_el(1,1)),(Meanl(2,i)+Varl_el(1,1) 0.5*w_Rot_el(2,l)),'ko'); plot((Meanl(1,i)-Varl_el(1,1) 0.5*w_Rot_el(1,1)),(Meanl(2,i)Varl_el(1,1) 0.5*w_Rot_el(2,1)),'ko'); plot((Meanl(1,i)+Varl_el(2,2) 0.5*w_Rot_el(1,2)),(Meanl(2,i)+Varl_el(2,2) 0.5*w_Rot_el(2,2)),'ko'); plot((Meanl(1,i)-Varl_el(2,2) 0.5*w_Rot_el(1,2)),(Meanl(2,i)V a r l e l ( 2 , 2 ) 0 . 5 * w Rot e l ( 2 , 2 ) ) , ' k o ' ) ; A  A  A  A  A  A  A  121  A  % P l o t the e l l i p s e  curves  % P l o t t i n g the F i n a l Fused e l l i p s e i n the xy p l a n w i t h a r o t a t i o n % determine the a n g l e o f r o t a t i o n x2 = [FusedMean(1,i) - F u s e d V a r _ f ( 1 , 1 ) 0 . 5 : .05 : FusedMean(1,i) + F u s e d V a r _ f ( 1 , 1 ) 0 . 5] ; i f (w_Rot_f(2,1)>=0 & w_Rot_f(1,1)>=0) RotAngle = a t a n ( w _ R o t _ f ( 2 , 1 ) / w _ R o t _ f ( 1 , 1 ) ) ; e l s e i f (w_Rot_f(2,1)>=0 & w_Rot_f(1,1)<=0) RotAngle = pi+atan(w_Rot_f(2,1)/w_Rot_f(1,1)); e l s e i f (w_Rot_f(2,1)<=0 & w_Rot_f(1,1)<=0) RotAngle = pi+atan(w_Rot_f(2,1)/w_Rot_f(1,1)); else RotAngle = a t a n ( w _ R o t _ f ( 2 , 1 ) / w _ R o t _ f ( 1 , 1 ) ) ; end RotAngle = -RotAngle; Si = sin(RotAngle); C = cos(RotAngle); a = FusedVar_f(1,1) 0.5; b = FusedVar_f(2,2) 0.5; X= FusedMean(1,i); Y= FusedMean(2,i); A  A  A  A  % From Maple c a l c u l a t i o n y P o s i t i v e = (-Si*b 2*C*X+C*a 2*Si*X+a 2*Y*C 2+b 2*Y-b 2*Y*C 2C*a 2*Si*x2+Si*b 2*C*x2-(b 2*a 2*(a 2*C 2+2*X*x2-X 2-x2. 2+b 2b 2*C 2)) . (1/2))/(b 2-b 2*C 2+a 2*C 2); y N e g a t i v e = (-Si*b 2*C*X+C*a 2*Si*X+a 2*Y*C 2+b 2*Y-b 2*Y*C 2C*a 2*Si*x2+Si*b 2*C*x2+(b 2*a 2*(a 2*C 2+2*X*x2-X 2-x2. 2+b 2b 2*C 2)) . (1/2))/(b 2-b 2*C 2+a 2*C 2); A  A  A  A  A  A  A  A  A  A  A  A  A  % P l o t the e l l i p s e plot(x2, yPositive, p l o t ( x 2 , yNegative,  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  A  'k-'); 'k-');  % NOTE: The code used t o p l o t a l l the e l l i p s e s i s not shown. % s i m i l a r t o what i s shown above, hold o f f ; end  122  It i s  Appendix C C Calibration Data C.1 Reference Frame Calibration The following table includes the measurement data collected when determining the offset between the established world reference frame and the sensor's frame of reference. 
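Section A.3 notes that Redundant1DFusion is not yet implemented in C++. As a starting point, the following is a minimal C++ sketch of the two-measurement, one-dimensional HGRF fusion, translated directly from the Matlab function Fusion_2M_1D above; the SNormal structure is assumed to hold a mean and a variance, as implied by its use in Appendix A.

    #include <algorithm>
    #include <cmath>

    // One-dimensional uncertain measurement: mean and variance.
    // (Assumed layout of the SNormal structure referred to in Appendix A.)
    struct SNormal {
        float mean;
        float variance;
    };

    // Minimal sketch of the two-measurement, one-dimensional HGRF fusion,
    // following the Matlab function Fusion_2M_1D in Section B.1.1.
    bool Redundant1DFusion(SNormal m1, SNormal m2, SNormal& fused)
    {
        if (m1.variance <= 0.0f || m2.variance <= 0.0f)
            return false;                        // variances must be positive

        // GRF (Nakamura) fusion assuming no separation between the means.
        const float varNaka = 1.0f / (1.0f / m1.variance + 1.0f / m2.variance);
        const float w1 = varNaka / m1.variance;
        const float w2 = varNaka / m2.variance;
        fused.mean = w1 * m1.mean + w2 * m2.mean;

        // Measurement spacing: distance of each mean from the fused mean,
        // normalized by that measurement's standard deviation.
        const float v1 = std::fabs((fused.mean - m1.mean) / std::sqrt(m1.variance));
        const float v2 = std::fabs((fused.mean - m2.mean) / std::sqrt(m2.variance));
        const float K  = std::max(v1, v2);       // separation parameter

        // Certainty scaling function fc(K) for m = 2 measurements.
        const float rm = std::sqrt(2.0f);
        float C;
        if (K < 1.0f)
            C = 1.0f + (-11.0f + 7.0f * rm) * K * K
                     + ( 18.0f - 7.0f * rm) * K * K * K
                     + ( -8.0f + 2.0f * rm) * K * K * K * K;
        else
            C = rm * (K + 1.0f);

        // Scale the GRF variance: uncertainty grows with measurement disparity.
        fused.variance = C * varNaka * C;
        return true;
    }

With coincident means (K = 0) the scaling factor is one and the sketch reduces to the GRF result; as the measurement disparity grows, the fused variance grows with it, which is the behaviour illustrated in Chapter 5.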
Appendix C

C Calibration Data

C.1 Reference Frame Calibration

The following table includes the measurement data collected when determining the offset between the established world reference frame and the sensor's frame of reference.

Measurement | Pointer Locator X Offset (mm) | Pointer Locator Y Offset (mm) | Encoder X Offset (mm) | Ultrasound Y Offset (mm)
1           | 94.38  | 177.57 | 132.25 | 207.40
2           | 94.49  | 178.50 | 132.17 | 208.52
3           | 94.50  | 178.56 | 132.20 | 208.55
4           | 94.38  | 178.49 | 132.17 | 207.38
5           | 94.41  | 178.41 | 132.15 | 207.40
6           | 94.49  | 178.21 | 132.20 | 207.99
7           | 94.53  | 178.26 | 132.22 | 207.96
8           | 94.50  | 177.83 | 132.07 | 207.40
9           | 94.50  | 177.97 | 132.14 | 208.55
10          | 94.49  | 177.98 | 132.10 | 208.55
11          | 94.45  | 177.80 | 132.10 | 207.38
12          | 94.46  | 177.92 | 132.05 | 208.55
13          | 94.44  | 177.80 | 132.10 | 207.99
14          | 94.41  | 177.95 | 132.05 | 207.38
15          | 94.47  | 178.05 | 132.10 | 207.38
16          | 94.52  | 177.88 | 132.14 | 208.55
17          | 94.52  | 177.83 | 132.02 | 207.48
18          | 94.50  | 177.83 | 132.07 | 207.53
19          | 94.51  | 177.72 | 132.15 | 207.48
20          | 94.43  | 177.67 | 132.10 | 206.87
Mean        | 94.47  | 178.01 | 132.13 | 207.81
Variance    | 0.0022 | 0.0863 | 0.0040 | 0.3013

Table C.1: Calibration Data.

The accuracy of the measuring devices used is included in the table below. Comparison of these figures with the variances of the XY Table sensors reveals that these measuring devices contribute very little uncertainty to the quantification of the sensor variances.

Measuring Device | Accuracy
Verniers         | +/- 0.02 mm
Dial Gauge       | +/- 0.001 in.
Height Gauge     | +/- 0.001 in.

Table C.2: Measuring Device Accuracies.

C.2 Sensor Measurement Variances

Table C.3 includes the data collected from the repeated measurement trials used to establish the variance of the encoder, ultrasound and pointer locator sensors. The variance was determined by comparing the measurements to the true value established by dial gauge measurements.

Position | True Value (Dial Gauge) X (mm) | True Value (Dial Gauge) Y (mm) | Pointer Locator X (mm) | Pointer Locator Y (mm) | Encoder X (mm) | Ultrasound Y (mm)
1        | 223.1961 | 309.1038 | 220.7125 | 307.5083 | 220.1519 | 307.8867
2        | 243.7701 | 329.8048 | 244.0380 | 330.9971 | 242.8905 | 331.7732
3        | 303.2061 | 309.4848 | 305.8111 | 306.5793 | 304.4156 | 306.0959
4        | 304.9333 | 254.9002 | 306.7383 | 256.0492 | 304.8347 | 254.4869
5        | 245.7767 | 277.7094 | 246.4088 | 275.5989 | 245.5668 | 275.6808
6        | 214.0013 | 300.4424 | 211.0839 | 300.6071 | 210.9690 | 301.4896
Variance | N/A      | N/A      | 4.1990   | 3.0350   | 3.4587   | 3.7039

Table C.3: Sensor Variance Data.
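The variance figures in Table C.3 are consistent with treating each sensor's error as zero-mean and computing the mean squared deviation from the dial gauge reference; a minimal sketch of that computation is given below (the function name is illustrative only).

    #include <cstddef>
    #include <vector>

    // Illustrative sketch only: estimate a sensor's variance from repeated
    // measurements and the corresponding dial gauge reference values, as done
    // for Table C.3. The error is treated as zero-mean, so the variance is the
    // mean squared deviation of the measurements from the reference.
    double EstimateSensorVariance(const std::vector<double>& measured,
                                  const std::vector<double>& reference)
    {
        double sumSq = 0.0;
        for (std::size_t i = 0; i < measured.size(); ++i) {
            const double error = measured[i] - reference[i];
            sumSq += error * error;
        }
        return sumSq / static_cast<double>(measured.size());
    }

Applied to the Pointer Locator X column of Table C.3, this reproduces the tabulated value of 4.1990 mm^2 to within rounding.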
