UBC Theses and Dissertations

Multisensor Fusion within an Encapsulated Logical Device Architecture (Elliott, Jason Douglas, 2001)

Full Text

Multisensor Fusion within an Encapsulated Logical Device Architecture

by

Jason Douglas Elliott
BASc, University of Waterloo, 1999

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Department of Mechanical Engineering)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
November 2001
© Jason Douglas Elliott, 2001

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mechanical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

This work is concerned with increasing the efficiency of the implementation, maintainability and operational reliability of automated workcells. These systems consist of a locally contained set of sensors and actuators that are integrated to perform a set of automated tasks. The specific contribution of this work includes the specification of the Heuristic-based Geometric Redundant Fusion (HGRF) method and discussions regarding the fusion of specific classes of complementary sensor measurements. Further, the Encapsulated Logical Device (ELD) Architecture is presented as a framework that facilitates the systematic and efficient implementation of automated workcells. This architecture incorporates the fusion mechanisms proposed in this work.

The HGRF method is an extension of the Geometric Redundant Fusion (GRF) method and is capable of fusing m redundant measurements in n-dimensional space. Unlike the GRF method, the uncertainty of the HGRF method's result increases as the level of disparity of the measurements increases. This ensures a realistic estimate of the uncertainty of the data even when unexpected sensor errors occur and when only a small amount of data is available, as is common with automation workcells. The uncertainty ellipsoid representation, based on the Gaussian distribution, is utilized by this method. This limits the application of this method to linear measurement spaces, unless a linear approximation to a non-linear measurement space is acceptable.

This work also investigates the fusion of three classes of complementary sensor data. The dimensional extension function is useful for combining measurements taken of the same feature in non-overlapping linear measurement spaces. The uncertainty modification mechanism is applicable when a sensor's measurement can be used to modify the uncertainty of another sensor's measurement. The range enhancement mechanism is useful for combining different measurements taken in overlapping measurement spaces over different ranges of that space. Additionally, the projection method is a tool useful for manipulating the dimensionality of uncertain sensor data by projecting an ellipsoid into a lower dimension.

The ELD Architecture allows systematic and efficient implementation of automation workcells across multiple hardware and software platforms.
The ELD Architecture is based upon the Logical Sensor (LS) paradigm and includes sensor and actuator functionality. Within the ELD, the uncertainty of all sensor data is quantified and manipulated using the sensor fusion mechanisms detailed in this work. Consideration of the uncertainty of sensor data will enhance the operational reliability of industrial automation workcells.

Contents

Abstract
Table of Contents
List of Figures
List of Tables
Acknowledgements
1 Introduction
1.1 The State of Automation in Industry
1.2 Systematic Design and its Benefits
1.3 Towards Robustly Operating Automation Systems
1.4 Project Scope and Objectives
1.5 Outline
2 Literature Review
2.1 Sensor, Actuator and Control Architectures
2.1.1 Autonomous Robotic System Architectures
2.1.2 Real-Time Control Architectures
2.1.3 Sensor Integration and Fusion Architectures
2.2 Sensor Fusion and Integration
2.2.1 Definitions and Concepts
2.2.2 Bayesian and Dempster-Shafer Representations
2.2.3 Possibility Theory and Fuzzy Logic
2.2.4 Geometric Ellipsoid Representation
2.2.5 Uncertainty Ellipsoid Derivation
2.3 Approach and Limitations
3 Uncertain Data Fusion
3.1 Redundant Data Fusion
3.1.1 Some Properties of Uncertainty Ellipsoids
3.1.2 Geometric Redundant Fusion (GRF)
3.1.3 Heuristic Specification
3.1.4 Heuristic Based Geometric Redundant Fusion (HGRF)
3.1.5 Numerical Example
3.2 Complementary Data Fusion
3.2.1 Dimensional Extension
3.2.2 Uncertainty Modification
3.2.3 Range Enhancement
3.3 Projection
3.4 The Ellipsoid Representation and Non-linear Transformations
3.5 Summary
4 The Encapsulated Logical Device Architecture
4.1 The Encapsulated Logical Device
4.1.1 Specifications
4.1.2 ELD Components
4.1.3 The Sensor
4.1.4 The Manager
4.1.5 The Actuator
4.1.6 The Executing Thread
4.2 The ELD Architecture
4.2.1 Specifications
4.2.2 ELD Architecture Components
4.3 Cross-Platform Implementation
4.4 ELD Architecture Builder
4.5 The Structure of the ELD Architecture Product Development
4.6 Summary
5 Redundant Fusion Simulation Results
5.1 Fusion of Two One-Dimensional Measurements
5.1.1 Discussion of Results
5.2 Fusion of Two Two-Dimensional Measurements
5.3 Fusion of Three Two-Dimensional Measurements
5.4 Summary of Simulation Results
6 Application Example: A Two-Degree-of-Freedom Robot
6.1 Experimental Setup
6.1.1 Physical System
6.1.2 Implementation Platforms
6.1.3 ELD Architecture Design
6.1.4 Graphical User Interface
6.2 Calibration of Sensors
6.3 Redundant Fusion Results
6.3.1 Figure Eight Trajectory
6.3.2 Square Trajectory
6.4 Summary
7 Concluding Remarks
7.1 Summary and Conclusion
7.2 Recommendations
References
A ELD Architecture Classes
A.1 ELD Architecture Class Design
A.2 Class CArchitecture Description
A.3 Class CELD Description
A.4 Class CImage Description
A.5 Structure and Data Definitions
A.5.1 ELD Messages and Names
A.5.2 Sensor Input/Output Structures
A.5.3 Communication Structure
B Matlab Redundant Fusion Algorithms
B.1 Redundant Fusion Functions
B.1.1 Two Measurement, One-Dimensional Redundant Fusion Function
B.1.2 Two Measurement, Two-Dimensional Redundant Fusion Function
B.1.3 Three Measurement, Two-Dimensional Redundant Fusion Function
B.2 Test Suites
B.2.1 Two Measurement, One-Dimensional Redundant Fusion Test Suite
B.2.2 Two Measurement, Two-Dimensional Redundant Fusion Test Suite
B.2.3 Three Measurement, Two-Dimensional Redundant Fusion Test Suite
C Calibration Data
C.1 Reference Frame Calibration
C.2 Sensor Measurement Variances

List of Figures

1.1 Automation workcell with manipulator mounted plasma cutting tool
2.1 Mobile Robot Example of Brooks' Subsumption Architecture
2.2 The structure of a Logical Sensor
2.3 Logical Sensor network for a range finder
2.4 Zeroth-order Sugeno-type Fuzzy Inference System for the Fusion of Sensor Data
2.5 Gaussian Probability Distribution
2.6 Two-dimensional Uncertainty Ellipsoid
3.1 Two-Dimensional Gaussian Distribution
3.2 One-dimensional fusion of two measurements using the GRF method
3.3 Case 1 - No Measurement Disparity
3.4 Case 2 - Measurements "agree"
3.5 Case 3 - Measurements disagree
3.6 Case 4 - Measurement error
3.7 One-Dimensional Measurement Spacing Vectors
3.8 N-Dimensional Measurement Spacing Vectors
3.9 Measurement Spacing Vectors and the Separation Vector
3.10 Case A - Fusion of two equal measurements
3.11 Case B - Fusion of two measurements separated by one standard deviation
3.12 Case C - Fusion of two measurements separated by two standard deviations
3.13 Case D - Fusion of two highly separated measurements
3.14 Uncertainty Scaling Function for Two Measurements
3.15 The General Uncertainty Scaling Function
3.16 Redundant Fusion Numerical Example
3.17 Dimensional Extension of two one-dimensional measurements
3.18 Certainty modification based on expert knowledge
3.19 Camera image range enhancement set-up
3.20 Range Enhancing Complementary Fusion Example
3.21 Projection and reconstruction of a two-dimensional ellipsoid
3.22 The projection of a three-dimensional ellipsoid into a two-dimensional ellipsoid
3.23 Laser Range Finder Measurement
3.24 Measurement Uncertainty in (x,y) Space with Large Uncertainties
3.25 Measurement Uncertainty in (x,y) Space with Small Uncertainties
4.1 The Encapsulated Logical Device (ELD)
4.2 Example ELD Sensor Fusion Module
5.1 Fusion of Two One-Dimensional Measurements - Series One
5.2 Fusion of Two One-Dimensional Measurements - Series Two
5.3 Fusion of Two Two-Dimensional Measurements - Results Set One
5.4 Fusion of Two Two-Dimensional Measurements - Results Set Two
5.5 Fusion of Two Two-Dimensional Measurements - Results Set Three
5.6 Fusion of Three Two-Dimensional Measurements - Example One
5.7 Fusion of Three Two-Dimensional Measurements - Example Two
5.8 Fusion of Three Two-Dimensional Measurements - Example Three
5.9 Fusion of Three Two-Dimensional Measurements - Example Four
6.1 XY Table Experimental Setup
6.2 Top View Diagram of the XY Table
6.3 X-Axis Drive and Encoder
6.4 Y-Axis Drive and Ultrasound Sensor
6.5 XY Table Hardware Setup
6.6 XY Table ELD Architecture
6.7 Overhead Image Grabbed by the Image ELD
6.8 XY Table Graphical User Interface
6.9 Sensor Calibration Setup
6.10 Figure Eight Trajectory Fusion Results
6.11 Enlarged View #1
6.12 Enlarged View #2
6.13 Square Trajectory Fusion Results
6.14 Enlarged View #3
6.15 Enlarged View #4
A.1 ELD Architecture Class Diagram

List of Tables

3.1 Example Redundant Fusion Measurement Data
3.2 Example Redundant Measurement Data
5.1 One-Dimensional Redundant Measurements and Fused Results - Data
5.2 Fusion of Two Two-Dimensional Redundant Measurements - Data
5.3 Fusion of Three Two-Dimensional Redundant Measurements - Data
6.1 Uncertainties of XY Table Sensor Measurements
C.1 Calibration Data
C.2 Measuring Device Accuracies
C.3 Sensor Variance Data

Acknowledgements

This thesis has come to be through the help and support of many people. Firstly, I would like to thank Dr. Elizabeth Croft for her supervision and guidance during my stay at UBC. You are always enthusiastic and helpful. I am grateful for your ideas and your mighty editing pen. To the members of the Industrial Automation Laboratory, thank you for your friendship and the many helpful discussions. This work is a result of all of your efforts. David Langlois, thank you for all your hard work. I am continually amazed by your persistence and unselfish help. Sonja Macfarlane, thank you for your friendship and the brain power that you lent to me from time to time.

I gratefully acknowledge the financial support that I received from the Natural Sciences and Engineering Research Council of Canada, the BC Advanced Systems Institute and the UBC top up program. This support made this work a reality.

I would also like to thank my family. My parents, Dave and Carol Elliott, you are always with me, support me and pray for me. I am grateful for your love. My brother Jeff, you are my best friend. I always look up to you and am impressed by you. To Ronald and Beverly Glauser, you have helped more than you know in getting me to this point. Thank you for your friendship, your home and all of your support. I am grateful. Shannon, you are the flower of my life. Thank you for your love. It is my refuge. Lastly, and most importantly, I would like to thank the Lord Jesus without whose help and support I would not be who I am. Thank you.

Chapter 1

Introduction

1.1 The State of Automation in Industry

Automation systems are increasingly common in a wide variety of industries, ranging from organic material production in greenhouse operations to the assembly lines of automotive manufacturing facilities. In 2000 the industrial robot industry generated 5.1 billion dollars US in worldwide sales, bringing the total number of robotic manipulators in use to 742,500 [41]. Automation systems vary widely in design and packaging. Some systems are highly distributed, through a large factory or even globally through a multinational corporation. However, a large proportion of automation systems are workcell based. Automation workcells are locally contained systems, composed of a number of sensors and actuators that are integrated to perform a predefined set of operations.
An example system, shown in Figure 1.1, consists of a robotic manipulator equipped with an end-effector mounted plasma cutter, designed for cutting sheet metal. Typically, such systems are not concerned with the data transmission delays that are inherent in distributed systems. Figure 1.1: Automation workcell with manipulator mounted plasma cutting tool (Courtesy of the University of Laval Manufacturing Process Laboritory). The hardware and software used to implement automation systems is also varied. The Programmable Logic Controller (PLC), offered by a number of companies including Seimens and Allen-Bradley, is the de facto standard hardware/software platform for the implementation of automation systems. These systems offer dependable operation, however, they have limited functionality. The utilization of the Personal Computer (PC) platform coupled with peripheral input/output capable hardware is becoming more common. This platform offers low cost, increased processing speed, improved reliability, flexibility and the possibility of integration with a wide variety of other applications and systems. Microcontrollers are also gaining popularity in automation applications for their small size and low cost. The software platforms and tools utilized in automation systems are even more diverse than the hardware platforms described above. Each hardware platform typically supports a number of software choices. Proprietary software packages, such as image processing software tools, are typically designed for specific tasks or setups. This makes integrating and modifying components difficult, if not 2 impossible. Often the purchaser, who is designing the automation system, is not given access to the internal workings of the system. On the other hand, a basic programming language, such as C++, could be used to design the system. This solution offers flexibility and full access but has no pre-designed functionality. Given the wide range of hardware and software platform choices available it is not surprising that an ad hoc approach is often used for automation system design. Often, industrial automation systems consist of components that are pieced together using dissimilar hardware and software sourced from a number of different companies. As much as 50-60% of the design and implementation time of automation workcells is spent on component integration [41]. The result of ad hoc design is inefficient implementation, poor maintainability and increased obsolescence [35]. The cost of an automation system ranges from thousands to millions of dollars. The labour required for integration of system components amounts to as much as 50% of the overall cost of designing an automation system [41]. Therefore increasing the efficiency of implementation and design would have large economic benefits. Albus [35] states that the largest financial gains are to be found in system integration, training and maintenance. The cost of component integration has impeded developments towards reliably operating intelligent industrial robotic systems [41]. The maintainability of an automation system depends on factors including system complexity, component interconnectivity and dependence and availability of diagnostic data and tools. The complexity of a system naturally increases as the complexity and number of operations it performs increases. However, the complexity also increases with increased variety of hardware and software used, and with a lack of uniformity in system wide design methodology. 
Furthermore, system maintenance becomes more difficult as the components of the system become more interconnected and dependent on each other. In a highly interconnected system the failure of a single component will cause the failure of a number of other components, making it difficult to pinpoint the source of a failure. Additionally, modifications to a component can potentially cause other dependent components to fail. The maintenance of automation systems is aided by diagnostic tools and the openness of a system's architecture. Diagnostic tools become difficult to implement and use i f the components of the system vary in implementation methodology. Further, i f the system is not open (meaning that all system components are viewable and modifiable) maintenance can become difficult as not all operational data may be viewed. An automation system becomes obsolete when it is not financially viable to continue to use the system or modify its functionality as needed. Slow speed, low accuracy and poor reliability of components, as well as changing system requirements, could cause obsolescence. In this case, the inadequate 3 components could be upgraded or functionality added to the system. However, if the system cannot be easily and economically modified due to a non-modular design, then the result is obsolescence. An easily modifiable and adaptable design could help to avoid the cost of obsolescence. 1.2 Systematic Design and its Benefits A systematic design methodology encapsulated in an architecture will enhance the implementation efficiency and maintainability of automation systems [33][41 ]. An architecture is a framework that specifies the design of the components of a system, the connections between components and the locations of data storage in the system. An automation system architecture should ensure modular component design. A modular design • decreases the system's design complexity by compartmentalizing the design problem, thus increasing the implementation efficiency. Modularity also increases the maintainability and adaptability of the system. A modular component is easily replaced by a new component even if the new component is internally dissimilar to the old component. In fact, components with identical inputs and outputs are completely interchangeable. Further, a modular system's functionality is easily advanced since new components added will not affect existing components. This is a similar argument as to Object Oriented Design (OOD) in software engineering [37]. An automation system architecture should also be open. An open architecture allows the system designer and maintainer to view the internal workings of the components. This permits use of diagnostic tools in design, testing and maintenance. The need for open architectures is recognized as well for C N C machines [35]. A design framework or architecture, applied to automation workcells enables efficient implementation, maintainability and adaptability of the system throughout the entire life cycle of the system. 1.3 Towards Robustly Operating Automation Systems It is important for an automation system to operate reliably under all operating conditions. The ,. industrial environment can be controlled relatively tightly, however, variances can never be fully eliminated. If an automation system does not adapt to changing conditions then operational failure will result. The failure could be unsatisfactory performance, resulting in defective product and lost . manufacturing time. 
Failure could also result in equipment damage, such as broken tooling, fixturing and sensors. These failures have undesirable financial consequences that should be avoided in an economically driven industrial environment. The present approach to designing an automation system is to ignore the uncertainty in operating conditions. The variability that the system will encounter is minimized through precise fixturing and by designing operations that are "easy" for the system to perform. Typically, the task level operates in open 4 loop. If a fault occurs with the fixturing, product or automation mechanisms an operational failure results since recovery is not possible. This limits the complexity of practical operations and restrictions are placed on the design of the product itself. This is illustrated in the following example. A plastic automotive fuel tank manufacturer attempted to automate a part deflashing operation. The manual operation involved removing excess plastic from a blow moulded part using a knife. A worker accomplished this immediately after the moulding process while the plastic was still hot. The worker was able to easily follow the pinch line with a hot knife. However, a high occurrence of injuries prompted an automation attempt. A large Fanuc robot was equipped with a heated knife, a precise fixture was developed and the desired trajectories were programmed. The automated process had a high rate of failure due to the cooling and shrinking of the part. Tools were broken and parts were scrapped. Modifications were made to the knife adding a spring system that would allow the knife to follow the pinch line. The pinch off in the mould was also modified to make the deflashing process easier for the robot. However, in the end the project was scrapped in favor of some knife proof smocks for the workers [10]. While in this particular case, a simple and effective solution was found; environmental, safety or quality concerns may often demand a better automated solution. Coupling the automation system to the environment through an array of sensors can overcome such problems as those encountered in the previous example. In this way the system is able to respond and adjust to changing operating conditions. Instead of hard coding the model of the workspace in the design of the system the actual workspace can be sensed and used as a model [6]. While this approach requires more computational and capital resources, the result is reliable operational performance with decreased downtime, a higher quality product and the capability of autonomous decision-making. It is also possible to increase the operational reliability of an automation workcell by designing the system to make decisions based upon an appropriate level of uncertainty. This requires quantifying the uncertainty of data for use in functions that direct the operation of the system. Further increases in reliability can be achieved by increasing the quality of sensor data through fusion [25]. Every sensor has a level of measurement noise and bias and could possibility produce erroneous readings. Sensor failure or inaccuracy could result in system failure. Therefore, it is beneficial to use multiple sensors that view the workspace in both a redundant and complementary manner. In this way their knowledge may be fused, thus increasing the accuracy and quality of the data. This will also increase the reliability of the operation of the system in the case of sensor fault and enable autonomous recovery from sensor failure. 
1.4 Project Scope and Objectives The objective of this project is to make a contribution towards the development of an automation architecture that is efficiently implemented, maintainable and reliably operating. The contribution that this project makes is the specification and implementation of a modular architecture that allows the intelligent integration of sensors and actuators in an industrial automation workcell environment. This work is also concerned with developing tools for fusing redundant uncertain sensor data within this architecture. The integration of complementary uncertain sensor data is only addressed for some specific cases in this work. This work does not address the problem of how to reason with uncertain data or gathering evidence. Instead, this work focuses on providing better data as it is collected from multiple sensors. Further, the use of uncertain data in the control of actuators is not within the scope of this project. However, this topic has been addressed in related work in the Industrial Automation Laboratory [20]. 1.5 Outline The structure of this thesis is outlined as follows: Chapter 1 Introduction: This introductory chapter. Chapter 2 Literature Review: Literature covering the areas of both automation system architectures and fusion of uncertain sensor data. A statement about the approach and limitations of this work is made. Chapter 3 Uncertain Data Fusion: Tools developed for the fusion of uncertain sensor data. Redundant fusion is investigated in detail and some specific cases of complementary fusion are discussed. Chapter 4 The Encapsulated Logical Device (ELD) Architecture: The ELD Architecture is presented as an architecture that enables the intelligent integration of multiple sensors and actuators in an industrial automation workcell environment. The architecture is designed to allow the integration of a wide variety of both sophisticated and simple devices. Chapter 5 Redundant Fusion Simulation Results: The performance of the proposed redundant fusion method is investigated through a variety of simulation examples. Chapter 6 Application Example: The development of a simple two-degree-of-freedom robot using the E L D Architecture is presented. Redundant fusion within this system is also presented. Chapter 7 Conclusions and Recommendations: A conclusion to this thesis is given including a summary of the contributions made. Recommendations for future work towards the completion of an effective automation system design tool are given. 7 C h a p t e r 2 2 Literature Review In this chapter, relevant research in the area of sensor and actuator architectures, sensor fusion and data uncertainty is presented. The majority of research in this field has focused on intelligent robotics and, specifically, on autonomous mobile robotics. This thesis utilizes this previous work and applies it to the area of industrial automation workcells. 2.1 Sensor, Actuator and Control Architectures System architectures have been a focus of current research in a number of different fields including autonomous robotic systems, real-time control systems, and sensor integration systems. Various architectures, as they have been designed for these fields, are discussed below. 2.1.1 Autonomous Robotic System Architectures Autonomous robotic systems typically focus on task planning, decision making and world modeling. The goal of this field of research is the eventual development of human-like robotic behaviour. 
Brooks in [5] proposed the Subsumption Architecture, which provides a purely reactive mechanism for generating system action. Individual behaviours are designed such that when a set of conditions is met the action associated with that behaviour is engaged. For example, a mobile robot would avoid collision with objects in its environment by having a behaviour that would move the robot away from obstacles when one is detected. The individual behaviours are arranged in a hierarchy such that lower level behaviours override higher level behaviours. Figure 2.1 shows an example subsumptive architecture for an exploring and obstacle-avoiding robot. Behaviour based systems such as this experience a tight coupling between 8 sensing and action, creating a system that is responsive to its environment. The overall intelligent behaviour of a subsumptive system emerges through a combination of lower level behaviours. While this architecture is effective for small systems it has yet to be shown tractable and reliable on a large-scale industrial automation system. Sensors Reason About Behavoiur of Objects Plan Changes to the World Identify Objects Monitor Changes Build Maps Explore Wander Avoid Objects -> Actuators Figure 2.1: Mobile Robot Example of Brooks'Subsumption Architecture. Murphy proposed the SFX Architecture in [28] which is based upon the action-oriented perception paradigm. This theory presents sensor fusion as an intelligent process able to set perceptual objectives, determine a sensing plan and adapt to malfunctions. The architecture encompasses the sensory system allowing for the generation and execution of active sensing plans. Further, this architecture uses a Dempster-Shafer mechanism for managing uncertainty in sensor measurements [29]. Architectures, such as these, developed for autonomous robotic applications typically treat low-level sensing and control of actuators as a black box. Subsequently, this part of the system is implemented in an ad hoc fashion. Additionally, these systems focus on the ability to emulate human-like intelligence and do not address the issues of industrial applicability such as maintainability and reliability. 2.1.2 Real-Time Control Architectures The Real-time Control System (RCS) [2] is an architecture designed for intelligent real-time control systems. Further, the NASA/NBS Standard Reference Model (NASREM) architecture [3], based on the RCS, specifically targets telerobotic system control. These architectures are a hierarchical graph of processing nodes that consist of sensory processing, world modeling and task decomposition modules. The nodes are arranged using characteristic functionality and timing. Components at various levels of the hierarchy are designed for specific functions and there is high interconnection between components. The Open Control Platform (OCP) is an Object Oriented software architecture designed as part of the Software Enabled Control (SEC) project with the goal of enhancing the ability to analyze, test and develop control systems and embedded software [33]. This system controls the execution and communication of embedded system application components, provides a simulation environment where components can be tested in a virtual environment and interface to useful design and simulation tools such as M A T L A B . While these architectures have made significant contributions to real-time control system design they do not address the fusion of sensor data or the management of the uncertainty of sensor data. 
Additionally, these architectures have yet to be shown as usable and feasible in an industrial automation application. 2.1.3 Sensor Integration and Fusion Architectures Sensor integration and fusion research has focused on the combination and utilization of sensor data. Many varied schemes have been proposed that address this problem. Luo and Kay [23] provide a good survey paper of this work. One approach presented in [22] by Lou and Lin discerns sensor data utilization based on the spatial proximity of the feature being detected and the level of detail required. According to their work, sensors for robotic applications can be grouped into one of the four categories of 'Far Away', 'Near To', 'Touching', and 'Manipulation'. The level of detail required determines the level of sensing used. A different approach to sensor fusion involves artificial neural networks, which provide a unique, biologically inspired approach to the fusion problem. A neural network model, based on the barn owl's characteristics, has been presented in [32] that fuses visual and acoustic information. Neural networks, while offering advantageous properties such as adaptability and learning, are not maintainable, have a closed architecture and provide a low level of confidence in reliability of operation over the entire operating range. At a lower level, the National Institute of Standards and Technology (NIST) and the Institute of Electrical and Electronics Engineers (IEEE) have proposed the IEEE-PI451 Standard for a Smart Transducer Interface for Sensors and Actuators [15][16]. This work is concerned with developing a common communication interface at the transducer level to enable the plug and play of sensors and actuators in a network of smart transducers. This work only addresses the physical connection between sensors and actuators and does not specify how to design an automation system or how to integrate sensor data in an automation system. 10 2.1.3.1 Logical Sensor Architectures The Logical Sensor (LS) has been chosen by a number of researchers as the base for development of sensing and control architectures. Henderson and Shilcrat first proposed the Logical Sensor System in [11], later adding a controlling signal to the architecture in [12]. The structure of the LS is shown in Figure 2.2. Characteristic t Output Vector Logical Sensor Name Control Commands! Selector Control Command Interpreter Program 1 Program n Logical Sensor Logical Sensor Commands to Inputs Inputs Logical Sensors Figure 2.2: The structure of a Logical Sensor (Adapted from [12] Figure 2). A LS is an abstraction of a sensor where the physical sensor is separated from its functional use. LSs can be designed to sense a specific feature and the detection mechanism is transparent to the overall system. For example, a watermelon detecting LS could be designed that processes the data from an array of inputs such as colour, mass, volume and shape logical sensors. Each LS is an encapsulated module that contains all knowledge and functionality required to operate as an independent sensing entity. A LS architecture is formed by the hierarchical combination of a number of LSs. Within the architecture each LS is only able to communicate to those LSs that it is directly connected to. The properties of the LS make this architecture scalable and modular and following the introductory discussion of this paper, this is important for industrial applications. 
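To make the Logical Sensor idea concrete, the sketch below models an LS in MATLAB as a struct carrying a name, a list of input LSs and a set of interchangeable processing programs, with the selected program standing in for the control command interpreter of Figure 2.2. The structure, the function names and the simulated ultrasonic reading are illustrative assumptions only; they are not the implementation of [11][12], nor the thesis's own C++ ELD classes of Appendix A.

```matlab
% Illustrative sketch (hypothetical structure): a Logical Sensor as a struct
% with a name, input logical sensors, and interchangeable processing programs
% selected by a control command. Save as logical_sensor_demo.m and run.
function logical_sensor_demo
    % Leaf LS wrapping a "physical" ultrasonic sensor (simulated reading here).
    ultrasonic = make_ls('ultrasonic_range', {}, {@(in) 1.93 + 0.02*randn});

    % Higher-level LS; here it simply passes through its single input,
    % standing in for a range-finder LS that could select other programs.
    range_finder = make_ls('range_finder', {ultrasonic}, {@(in) in{1}});

    fprintf('%s output: %.3f\n', range_finder.name, read_ls(range_finder));
end

function ls = make_ls(name, inputs, programs)
    ls.name     = name;      % characteristic output name
    ls.inputs   = inputs;    % cell array of lower-level logical sensors
    ls.programs = programs;  % alternative methods for producing the output
    ls.selected = 1;         % index chosen by the control command interpreter
end

function out = read_ls(ls)
    % Recursively gather the outputs of the input logical sensors, then apply
    % the currently selected program to produce this LS's characteristic output.
    data = cellfun(@read_ls, ls.inputs, 'UniformOutput', false);
    out  = ls.programs{ls.selected}(data);
end
```

A range-finder network like the one in Figure 2.3, discussed next, would simply compose several such structs, with the higher-level LS selecting among stereo and ultrasonic programs.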
An example of a LS architecture for a range finder that utilizes stereo vision and an ultrasonic sensor is shown in Figure 2.3. 11 Range Finder Stereo r r Fast Stereo Slow Stereo t • / I 1 1 1 1 Ultrasonic Sensor Camera 1 Camera 2 Figure 2.3: Logical Sensor network for a range finder (Adapted from reference [25] Figure 7). Weller et al. [40] integrated error detection and recovery into the LS specification using knowledge bases localized in each LS. Zheng [43] integrated a LS network into a hierarchical robot control architecture. The reported advantage of using LSs in this application was the increase in intelligence at all levels of the control architecture due to the sensing available. In Zheng's work the LS hierarchy was separate from the control structure hierarchy. Later Budenske and Gini [8] included actuation in their Logical Sensor/Actuator architecture. However, this work focused on plan execution through the decomposition of a high level abstract goal into a real-time detailed plan and was not concerned with the real-time control of actuators. Naish [30] presented the Extended Logical Sensor Architecture (ELSA) where feature based object models were used to construct the architecture for a specific sensing application. The architectures discussed are useful for the integration of multiple sensors in an intelligent automation system. The following section focuses on sensor fusion and integration, and specifically, on mechanisms used for propagating uncertainty throughout an architecture. 12 2.2 Sensor Fusion and Integration The use of multiple sensors results in increased sensor measurement accuracy as is shown by Richardson [36]. He proves that additional sensors will never reduce the performance of the optimal estimator. Further, system operational reliability will increase with additional sensors since the system is more resilient to sensor failure [25]. It is also possible to perceive features with multiple sensors that are not perceivable when using a single sensor. These benefits enable increased intelligent behaviour in an automation system. The following section presents definitions and background concepts useful in the area of sensor fusion and integration. A discussion of the relevant sensor fusion approaches involving uncertain data follows, including Bayesian and Dempster-Shafer Theories, Possibility Theory and Geometric Uncertainty Ellipsoids. 2.2.1 Definitions and Concepts Sensor fusion is defined as the combination of data from multiple sensor sources into one representational format. Sensor Integration has been differentiated from sensor fusion in [25] to be the synergistic use of information from multiple sensor sources to aid in the operation of a system. In this framework fusion is concerned with the actual combining of data whereas sensor integration is the general use of multiple sensors with the goal of obtaining more intelligent sensing at all levels of abstraction. There is also a distinction made regarding the level at which data is fused. There are three categories: data fusion (low-level), feature fusion (intermediate-level) and decision fusion (high-level) [17]. The data level typically involves fusing (mostly unprocessed) sensor data such as encoder and range finder outputs. . The feature level involves fusing higher-level information, such as the colour or shape of an object. At the highest level, decision level fusion involves fusing information such as 'the object is a red apple' with 'the object is a semi-red apple'. 
Information from multiple sources can be classified as independent, redundant or complementary [25]. Independent information is entirely unrelated and is not useful for fusion. Redundant information is the result of multiple sensors viewing the same feature, in the same space, possibly using entirely different means. Complementary information is the result of multiple sensors viewing a feature in different spaces or different ranges of a space. Sensor Data Uncertainty is a quantification of the limits of error in a measurement. Uncertainty results from many sources of imprecision and bias and is inherent in sensory systems. Therefore a well designed automation architecture, together with its fusion and integration mechanism, must be able to cope with uncertainty. A workshop on Multisensor Integration in Manufacturing Automation [13] concluded, in studying sensor uncertainty, that uncertainty should be represented at all levels of the architecture and that 13 it should be explicitly represented. Two fundamental questions were fonnulated at the workshop; (1) which formal description of uncertainty should be used in a multisensor fusion architecture; and (2) what mechanism should be used to move uncertain information through the architecture. There have been a number of proposed approaches to these problems including Bayesian and Dempster-Shafer Theories, Possibility Theory and Geometric Uncertainty Ellipsoid based methods. These are discussed below. 2.2.2 Bayesian and Dempster-Shafer Representations In the area of expert systems and decision support systems the problem of drawing conclusions from incomplete and uncertain data has been investigated at length. This work centers on decision-level fusion. The problem addressed in this work is usually formulated as determining the uncertainty of a fact in the light of new additional evidence or information. Both Bayesian and Dempster-Shafer approaches have been used in this application and the superiority of one method over another has been debated [21][23]. The Bayesian method is based in probability theory. This method has been criticized for the large amount of statistical data required to design a reliable system [38]. Further, Bayesian theory fails to distinguish between statistical uncertainty and ignorance or lack of knowledge. There is an implicit assumption that all relevant probabilities are known with precision while the probabilities themselves describe random events [38]. On the other hand, the Dempster-Shafer model allows for differentiation between disbelief and the lack of belief, that is, between having evidence that disconfirms a hypothesis and a lack of evidence that confirms the hypothesis. Murphy [29] provides a method for sensor fusion at the symbol level using Dempster-Shafer theory. 2.2.3 Possibility Theory and Fuzzy Logic Fuzzy logic was introduced as a method that models the vagueness of natural language [39]. Fuzzy logic does not separate logic and probability and is a descriptive language for modeling uncertainty [38]. There have been many fuzzy approaches to the sensor fusion problem [42][9]. Mahajan et al. [26] presented a fuzzy inference system that integrates sensor measurements of operating conditions, such as temperature, as weighting factors utilized in redundant sensor fusion. As is shown in Figure 2.4, the operating parameters of strain and temperature are used to determine the contribution of a measurement in redundant fusion. 
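As a rough sketch of the scheme in Figure 2.4, the fragment below evaluates a zeroth-order Sugeno-type system in MATLAB. The membership functions and the constant rule outputs are invented for illustration and differ from those of [26], whose figure reports a weight of 59.4 for these inputs; only the overall pattern, in which rule firing strengths weight constant outputs, is the point.

```matlab
% Hedged sketch of the Figure 2.4 idea: map operating conditions to a fusion
% weight with a zeroth-order Sugeno-type system. Memberships are assumptions.
strain = 175;  temperature = 10;

ramp_down = @(x, a, b) max(0, min(1, (b - x)/(b - a)));  % 1 below a, 0 above b
ramp_up   = @(x, a, b) max(0, min(1, (x - a)/(b - a)));  % 0 below a, 1 above b

% Rule 1: if strain is low OR temperature is low then weight is low (output 0)
w1 = max(ramp_down(strain, 0, 100), ramp_down(temperature, 0, 50));
% Rule 2: if strain is okay AND temperature is okay then weight is high (output 100)
w2 = min(ramp_up(strain, 0, 100), ramp_up(temperature, 0, 50));

weight = (w1*0 + w2*100) / (w1 + w2);   % weighted average of the rule outputs
fprintf('fusion weight = %.1f\n', weight);
```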
In such a scheme, the uncertainty of a measurement is essentially determined through expert knowledge and complementary sensor measurements.

Figure 2.4: Zeroth-order Sugeno-type Fuzzy Inference System for the Fusion of Sensor Data (adapted from [26], Figure 3). The diagram shows two rules, "if strain is low or temperature is low then weight is low" and "if strain is okay and temperature is okay then weight is high", evaluated for strain = 175 and temperature = 10 and defuzzified to an output weight of 59.4.

Fuzzy logic is a useful tool in such situations as it is not always necessary or beneficial to exert energy and cost into precisely defining relationships between sensor measurements and operating conditions. Further, expert knowledge about the effect of parameters on the certainty of sensor data can be embedded in the form of fuzzy rules.

2.2.4 Geometric Ellipsoid Representation

The Geometric Ellipsoid Representation is a geometric parameterization of probability density functions. This representation is based upon the assumption that uncertainty can be quantified by using Gaussian probability density functions. This assumption is made for computational ease, since this distribution is easily parameterized using the standard deviation, σ, and mean, μ. The Gaussian curve is shown in Figure 2.5 and is given as

p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).   (2.1)

Figure 2.5: Gaussian Probability Distribution (μ = 0, σ = 1).

The geometric ellipsoid representation has been chosen as the most suitable representation of uncertain sensor data for this application. Contributors to the Multisensor Integration in Manufacturing Automation Workshop agreed that probability models best suit data at the physical or geometrical level and that another representation may be more appropriate at the symbol level [13]. The conceptual simplicity that uncertainty ellipsoids allow, especially in higher dimensional spaces, makes industrial acceptance of such a system more likely. Further, being a simple parameterization of the well established Gaussian distribution provides a firm statistical basis. This allows the quality of sensor data to be expressed using a standard deviation measure determined through experimentation. However, the Gaussian assumption is criticized since random processes do not necessarily follow the Gaussian distribution. For example, laser range finders have been noted not to follow such a distribution [13]. The choice of which distribution to use is usually concerned with efficiency and simplicity rather than the "real" distribution [7]. The tradeoff between having highly accurate distributions and having a simple and useable system is a decision that is best determined by the application. A system that is complicated and inefficient will never find acceptance in industry.
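As a small illustration of the ellipsoid representation described above, the following MATLAB fragment traces the 1σ boundary of a two-dimensional uncertainty ellipsoid directly from its covariance matrix. The numbers are invented for illustration and are not data from the thesis.

```matlab
% Minimal sketch (illustrative values): trace the 1-sigma boundary of the
% uncertainty ellipsoid that parameterizes a two-dimensional Gaussian.
Sigma = [0.09 0.02; 0.02 0.04];       % example uncertainty ellipsoid matrix
mu    = [1.0; 2.0];                   % measurement mean

[V, D] = eig(Sigma);                  % principal axes and their variances
t = linspace(0, 2*pi, 200);
boundary = V*sqrt(D)*[cos(t); sin(t)] + repmat(mu, 1, numel(t));

plot(boundary(1,:), boundary(2,:), mu(1), mu(2), '+');
axis equal; xlabel('x'); ylabel('y');
```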
Marzullo proposed a simple geometric based fusion method using upper and lower bounds for each sensor measurement [27]. This method was based on the assumption that if two sensor measurements are correct then their intervals will overlap. The overlapping region of the measurements was taken as the most reliable measurement under the assumption that a limited number of sensors could fail. This method did not indicate how the sensor bounds are determined and had no basis in probability theory.

Nakamura proposed a Geometrical Redundant Fusion (GRF) method for multi-sensor robotic systems in [31]. This method is capable of fusing any number of n-dimensional measurements. This method is based upon the Uncertainty Ellipsoid, as described in the following section, and it assumes that measurement noise follows a Gaussian distribution and that there is no bias in the measurements. The fusion method is statistically based and is computationally efficient. Additionally, as shown in [31], the method results in an algorithm similar to those obtained by Bayesian inference with a minimum variance estimate, Kalman filter theory and the weighted least squares estimate. The derivation of the Uncertainty Ellipsoid is given below, while the details of the derivation of Nakamura's method are left until Chapter 3.

2.2.5 Uncertainty Ellipsoid Derivation

Consider m sensors measuring the same feature in the same n-dimensional domain. The i-th sensor measurement is given as x_i, which is an n-dimensional vector. The uncertainty of the measurement can be written as follows:

x_i = x + \delta x_i,   (2.2)

where x is the true measurement and \delta x_i is the i-th measurement's noise, which is assumed to follow a Gaussian distribution. The mean of \delta x_i is 0 as a result of the Gaussian assumption, and the variance of \delta x_i is given as follows:

\mathrm{Var}[\delta x_i] = {}_{p}\sigma_i^2,   (2.3)

where {}_{p}\sigma_i^2 is the covariance matrix, which is referred to as an n-dimensional uncertainty ellipsoid matrix defined in the principal axes frame, ℑ_p. The sensor measurement, x_i, is transformed according to the following equation,

y_i = f(x_i),   (2.4)

which can be linearly approximated by

y_i = f(\bar{x}_i) + J(\bar{x}_i)\,\delta x_i,   (2.5)

where

J = \frac{\partial f}{\partial x_i}   (2.6)

is the Jacobian matrix of f with respect to x_i. The mean, \bar{y}_i, and variance of the transformed sensor measurement are then given as

\bar{y}_i = f(\bar{x}_i),   (2.7)

\mathrm{Var}[y_i] = J\,{}_{p}\sigma_i^2\,J^T.   (2.8)

Therefore, in general the uncertainty ellipsoid is a fully populated matrix and only becomes diagonal when expressed in the principal axes frame. The diagonal entries of the uncertainty ellipsoid matrix (the eigenvalues) expressed in the principal axes frame define the dimensions of the principal axes of the ellipsoid. Figure 2.6 displays a two-dimensional uncertainty ellipsoid with its associated uncertainty ellipsoid matrix.

Figure 2.6: Two-dimensional Uncertainty Ellipsoid, shown with its diagonal ellipsoid matrix {}_{p}\sigma^2 = \mathrm{diag}(\sigma_1^2, \sigma_2^2) in the principal axes frame.

One can note that the assumption in equation (2.5) requires that the transformations be linear or suitably approximated as linear in the region of interest. This is a somewhat restrictive assumption that is discussed in more detail in Chapter 3. Additionally, some very useful transformations are derived in Chapter 3 as well.

Lee and Ro in [22] present a method of automatically reducing uncertainties in a robotic sensory system based upon geometric data fusion. A network of feature transformation, data fusion and constraint nodes manages the propagation of uncertainties throughout the sensing levels.

Lou and Lin in [24] developed a redundant fusion method that utilizes distance measures that specify the relative agreement or disagreement between sensors. If the measurements are close enough together, as defined by a chosen threshold, then they are useful for fusion. Measurements that are not close together are suspected to be incorrect. The largest group of agreeing sensors is utilized for fusion and the others are discarded. This method is based upon a one-dimensional Gaussian distribution for sensor uncertainty and is suitable only for fusing one-dimensional data.

The perimeter of the uncertainty ellipsoid, as derived above, is set at one standard deviation, 1σ.
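The following sketch applies the transformation rule of equations (2.7) and (2.8) to an assumed range-and-bearing measurement, converting it and its uncertainty ellipsoid into Cartesian coordinates. The values are invented, and the example simply echoes the laser range finder case examined in Chapter 3; because the mapping is non-linear, the propagated ellipsoid is only as trustworthy as the linearization about the mean, which is the restriction noted above.

```matlab
% Illustration of equations (2.7)-(2.8): propagating an uncertainty ellipsoid
% through a linearized transformation. Example values are assumed.
r     = 2.0;   sigma_r     = 0.05;    % range measurement and its std. dev.
theta = pi/6;  sigma_theta = 0.02;    % bearing measurement and its std. dev. (rad)

x_bar   = [r; theta];
Sigma_x = diag([sigma_r^2, sigma_theta^2]);   % ellipsoid matrix in the sensor frame

f = @(x) [x(1)*cos(x(2)); x(1)*sin(x(2))];    % f: (r, theta) -> (x, y)

% Jacobian of f evaluated at the measurement mean, equation (2.6)
J = [cos(theta), -r*sin(theta);
     sin(theta),  r*cos(theta)];

y_bar   = f(x_bar);          % transformed mean, equation (2.7)
Sigma_y = J * Sigma_x * J';  % transformed uncertainty ellipsoid, equation (2.8)
disp(y_bar); disp(Sigma_y);
```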
This results in a 0.68 probability that the ellipsoid contains the true value. Alternatively, the uncertainty ellipsoid could be sized to aσ, where a is a constant, if a larger or smaller range of allowable uncertainty is desired. In this work the 1σ uncertainty ellipsoid (a = 1) is used, as is common in the literature [22][31]. However, the same method could be used for other values of a.

2.3 Approach and Limitations

Given the scope and objectives of Section 1.4, the LS has been chosen as the concept on which the ELD Architecture is based. It is the most applicable and sufficiently general framework available. This is evidenced by the number of researchers that have chosen this framework as the base for their developments. The reactive properties of the subsumptive architecture and the real-time considerations of Albus' work can be incorporated into the LS framework if desired. The similarities of all these approaches are apparent.

Further, the geometric ellipsoid representation has been chosen as the most suitable representation of uncertain sensor data for this application. The conceptual simplicity that uncertainty ellipsoids allow, especially in higher dimensional spaces, makes industrial acceptance of such a system more likely. Further, being a simple parameterization of the well established Gaussian distribution provides a firm statistical basis. However, relying on a Gaussian distribution limits the type of sensor uncertainty that can be accurately modeled. This is a trade-off that is commonly accepted in industrial processes. The use of a geometric uncertainty ellipsoid representation does not preclude the use of Bayesian or Dempster-Shafer theories to govern higher symbolic level uncertain reasoning and evidence gathering. This, however, is beyond the scope of this work. Fuzzy Logic, on the other hand, may be a good tool for this application. Fuzzy systems offer simple conceptualization and flexibility in the distributions used to represent uncertainty. However, a statistical base is the preferred choice for a first implementation due to its historical acceptance.

Nakamura's GRF method [31] has been chosen as the most applicable redundant fusion mechanism available. This method is extended in this thesis and is used as a tool in the ELD Architecture. Its dimensional versatility makes it an ideal fit in the chosen framework.

Chapter 3

Uncertain Data Fusion

Considering the range of approaches that result from the variety of fusion problems encountered in the literature discussed in Chapter 2, proposing a single algorithm for the fusion of all possible types of data is not feasible. This chapter details some sensor fusion mechanisms that are applicable to specific data cases and can be used within an industrial automation workcell architecture. Specifically, a redundant fusion mechanism that properly combines sensor data regarding the same feature is detailed. Based on a careful examination of this method, an expanded, more representative approach for redundant data fusion is presented.

In the second section of this chapter, tools for specific cases of complementary fusion are outlined. Specifically examined herein are three complementary fusion mechanisms, namely: dimensional extension, uncertainty modification and range enhancement. The dimensional extension function is useful for combining measurements taken of the same feature in non-overlapping measurement spaces. The uncertainty modification mechanism adjusts the uncertainty of a sensor's measurement based on another sensor's measurement. The range enhancement method combines different measurements taken in different ranges of overlapping measurement spaces.

In the latter portion of this chapter a method for projecting an ellipsoid into a lower dimension is outlined. This method is useful when only some dimensions of an n-dimensional measurement are needed for fusion or processing.

As was stated previously, the fusion functions developed herein assume a Gaussian distribution for the uncertainty of all sensor data. This is a consequence of the uncertainty ellipsoid representation. Further, only transformations that are linear in the coordinates of the measurement space are valid for ellipsoids.
The uncertainty modification mechanism adjusts the uncertainty of a sensor's measurement based on another sensor's measurement. The range enhancement method combines different measurements taken in different ranges of overlapping measurement spaces. In the latter portion of this chapter a method for projecting an ellipsoid into a lower dimension is outlined. This method is useful when only some dimensions of an ^-dimensional measurement are needed for fusion or processing. As was stated previously, the fusion functions developed herein assume a Gaussian distribution for the uncertainty of all sensor data. This is a consequence of the uncertainty ellipsoid representation. Further, only transformations that are linear in the coordinates of the measurement space are valid for ellipsoids. 20 This restricts the application of the proposed ellipsoid based methods to linear measurement spaces. This restriction and its limitations are discussed further in this chapter. 3.1 Redundant Data Fusion Redundant measurements are quantifications of the same feature in the same domain. The mechanisms used to generate redundant measurements can be completely dissimilar. For example, a laser range finder, an ultrasound sensor and a digital camera can redundantly measure the position of an object's surface along an axis. However, i f these sensors measure the position along different axes, then the measurement's spaces differ and the data is not fully redundant. The fusion of sensor data can result in an increase or decrease in uncertainty associated with the fused data. In either case this is an increase in the overall knowledge of the system. For example, i f redundant measurements "agree" then the knowledge is reinforced and the uncertainty of the fused result is less than that of the individual measurements. However, i f the measurements do not "agree", then there are possible sensor faults. Without additional knowledge to indicate which measurements are faulty, the uncertainty of the fused measurement increases. An informal quantification of sensor "agreement" is given in this section. The practical benefits of redundant sensors are maximized when sensors use entirely different sensing mechanisms. If sensors are affected by the same environmental parameters then no reliability gains are made with respect to those parameters. Consider two vision based sensors. Both sensors will yield inaccurate results when lighting conditions change dramatically. However, i f one sensor was replaced with a non-vision based sensor, then a reliable measurement would be available in all lighting conditions. As described in Chapter 2, Nakamura [31] proposed the GRF method for multi-sensor robotic systems. This method is capable of fusing any number of «-dimensional measurements. It is a statistically based method that assumes measurement noise follows a Gaussian distribution and that no bias exists. Typically, automation workcells have a small number of sensors and redundant fusion is performed with a limited amount of data. In such cases a statistically based method does not perform satisfactorily and thus an extension to the GRF method is required that utilizes heuristic based information. The following discussion begins with some useful properties of ellipsoids and the development of the GRF method. The shortcomings of this method are discussed and a Heuristic based Geometric Redundant Fusion (HGRF) method is proposed as an extension to the GRF method. 
The HGRF method allows reliable redundant fusion of a limited amount of data that may contain a bias.

3.1.1 Some Properties of Uncertainty Ellipsoids

The uncertainty ellipsoid uniquely and exactly parameterizes a Gaussian distribution. This is apparent when the uncertainty ellipsoid is viewed as a topographical contour line (or surface in higher dimensions) of a Gaussian distribution. Consider the two-dimensional Gaussian distribution shown in Figure 3.1 and given as follows:

p(x, y) = \frac{1}{2\pi\sigma_x\sigma_y} e^{-\left(\frac{x^2}{2\sigma_x^2} + \frac{y^2}{2\sigma_y^2}\right)}.   (3.1)

Figure 3.1: Two-Dimensional Gaussian Distribution (mean_x = 0, mean_y = 0, σ_x = 1, σ_y = 1).

A contour of the two-dimensional Gaussian curve of height h is given as

h = \frac{1}{2\pi\sigma_x\sigma_y} e^{-\left(\frac{x^2}{2\sigma_x^2} + \frac{y^2}{2\sigma_y^2}\right)}.   (3.2)

After some manipulation the equation becomes

\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} = -2\ln(2\pi h \sigma_x \sigma_y) = c.   (3.3)

Equation (3.3) can be recognized as a two-dimensional ellipsoid. With the contour height, h, specified, the ellipsoid uniquely parameterizes the Gaussian distribution without approximation.

The uncertainty ellipsoid matrix is a dyad [14]. Therefore, a similarity transformation can be utilized to express the uncertainty ellipsoid in any linear reference frame. For example, to express an uncertainty ellipsoid given in the principal axes frame, 3_p, in another frame, 3_m, the following similarity transformation would be utilized:

{}_m\sigma^2 = {}_mR_p \, {}_p\sigma^2 \, {}_mR_p^T,   (3.4)

where {}_jR_k is a rotation matrix defined between frames 3_j and 3_k. Scaling of an uncertainty ellipsoid in a frame of interest, 3_m, requires applying a scaling matrix, {}_mS, of the following form to the uncertainty ellipsoid:

{}_mS = \begin{bmatrix} s_1 & 0 & \cdots & 0 \\ 0 & s_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & s_n \end{bmatrix},   (3.5)

where s_i is the magnitude of the scaling of the ellipsoid in the direction of the i-th axis of 3_m. Scaling of an uncertainty ellipsoid expressed in 3_m would take the form:

{}_m\sigma'^2 = {}_mS \, {}_m\sigma^2 \, {}_mS^T.   (3.6)

The rotation and scaling operations can be combined as in the following example. An ellipsoid, whose principal axes are rotated with respect to the world frame, 3_w, is to be scaled in the world frame and then expressed in another frame of interest, 3_m. The transformation is given below as:

{}_m\sigma_s^2 = {}_mR_w \; {}_wS \left( {}_wR_p \, {}_p\sigma^2 \, {}_wR_p^T \right) {}_wS^T \; {}_mR_w^T.   (3.7)

3.1.2 Geometric Redundant Fusion (GRF)

The GRF method of fusing m uncertain n-dimensional data points, \hat{x}_i = [x_1 \cdots x_n]^T, is based on a weighted linear sum of the measurements as follows:

\hat{x}_f = \sum_{i=1}^{m} W_i \hat{x}_i,   (3.8)

where \hat{x}_f is the fused value and W_i is the weighting matrix for measurement i, \hat{x}_i. Applying expected values to (3.8) and assuming no measurement bias yields the condition

\sum_{i=1}^{m} W_i = I.   (3.9)

For a given data set, the weighting matrix, W, and the covariance matrix, Q, are formed as follows:

W = [W_1 \; W_2 \; \cdots \; W_m],   (3.10)

Q = \begin{bmatrix} {}_w\sigma_1^2 & & 0 \\ & \ddots & \\ 0 & & {}_w\sigma_m^2 \end{bmatrix},   (3.11)

where {}_w\sigma_i^2 is the uncertainty ellipsoid matrix for measurement i. Nakamura proposed that minimizing the volume of the fused uncertainty ellipsoid would result in more accurate and less uncertain information. The volume of the fused uncertainty ellipsoid is given as

V = \frac{\pi^{n/2}}{\Gamma(1 + n/2)} \sqrt{\det(W Q W^T)},   (3.12)

where Γ(·) is the gamma function. The problem simplifies to minimizing det(WQW^T) subject to the constraint of equation (3.9). Using Lagrange multipliers, the following result for the weighting matrices and the fused uncertainty, or variance, is obtained:

W_i = \left( \sum_{j=1}^{m} ({}_w\sigma_j^2)^{-1} \right)^{-1} ({}_w\sigma_i^2)^{-1},   (3.13)

{}_w\sigma_N^2 = \left( \sum_{j=1}^{m} ({}_w\sigma_j^2)^{-1} \right)^{-1}.   (3.14)

In the one-dimensional case with two measurements the above result becomes:

x_f = \frac{\sigma_2^2 x_1 + \sigma_1^2 x_2}{\sigma_1^2 + \sigma_2^2},   (3.15)

\sigma_N^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}.   (3.16)
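As a quick check of equations (3.13) to (3.16), the following sketch (illustrative only; the function and variable names are not part of this work) fuses one-dimensional measurements with the GRF weighting and reproduces the values listed in Table 3.1 below:

    import numpy as np

    def grf_1d(means, variances):
        """GRF fusion of one-dimensional redundant measurements (equations 3.13-3.14).

        The weights are proportional to the inverse variances and the fused
        variance is the inverse of the sum of the inverse variances."""
        inv = 1.0 / np.asarray(variances, dtype=float)
        fused_var = 1.0 / inv.sum()
        fused_mean = fused_var * (inv * np.asarray(means, dtype=float)).sum()
        return fused_mean, fused_var

    print(grf_1d([7.0, 12.0], [0.25, 1.0]))   # (8.0, 0.2), measurements 1 and 2
    print(grf_1d([7.0, 8.0], [0.25, 1.0]))    # (7.2, 0.2), measurements 3 and 4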
Examples of the above case, for the measurement data given in Table 3.1, are shown in Figure 3.2 below.

    Measurement            Mean    Variance
    Measurement 1          7       0.25
    Measurement 2          12      1
    GRF Result (1 and 2)   8       0.2
    Measurement 3          7       0.25
    Measurement 4          8       1
    GRF Result (3 and 4)   7.2     0.2

Table 3.1: Example Redundant Fusion Measurement Data.

Figure 3.2: One-dimensional fusion of two measurements using the GRF method. Top - spatially separated measurement means. Bottom - close proximity of measurement means.

While the GRF method handles the fusion of m measurements in n-dimensional space in an efficient manner, it does not include information about the spacing of the means of the m measurements in the calculation of the fused variance. That is, the magnitude of the spatial separation of the means is not used in the calculation of the uncertainty of the fused result. Therefore, the GRF method provides a fused result with identical uncertainty independent of whether the measurements have identical or highly spatially separated means. This is demonstrated by comparing the top and bottom graphs in the previous one-dimensional example, Figure 3.2.

The GRF method is based on a Gaussian assumption. Therefore, its results are most reliable when a large population of measurements exists. A small population of measurements (e.g. two or three), as would be expected in many automation workcell fusion applications, does not provide sufficient data to establish a Gaussian distribution. Therefore, when fusing a small number of data points, a fusion method based purely upon statistical quantities provides misleading results. This is evident in the case of highly separated measurement means, Figure 3.2. To overcome this problem it is proposed that a heuristic function be introduced to extend the GRF method. This heuristic will include information about the separation of the means of the measurements in the uncertainty of the fused result. As a result, the output is no longer purely statistically based, but can still provide a reasonable measure of the increase or decrease of uncertainty of the fused data.

3.1.3 Heuristic Specification

The desired heuristic would modify the GRF result so that reliable results are produced for all ranges of measurement disparity. The desired output of the heuristic is discussed below using four separate cases. The heuristic output should smoothly transition from one case to another. The heuristic specification cases should be interpreted along a dimension of the measurement space. For example, a particular measurement may fall into Case 2 along one dimension and into Case 3 along another. Illustrations of the fusion of two two-dimensional ellipsoids with parallel principal axes have been included as descriptive aids. In a general multi-measurement multi-dimensional space the following discussion becomes difficult to interpret intuitively, since the measurements' ellipsoids do not necessarily have aligned principal axes.

Consider a set of m measurements with means, {}_m\bar{x}_i, and associated uncertainty regions, S_i, based on the ellipsoids, {}_m\sigma_i^2, centered at \bar{x}_i and described in frame 3_m.

Case 1: No measurement disparity along dimension d: ({}_m\bar{x}_i)_d = ({}_m\bar{x}_j)_d for all i, j = 1, ..., m.

Heuristic Output: For this case the heuristic uncertainty region, (S_f)_d, should be equivalent to the uncertainty region generated by the GRF method, namely, (S_f)_d = (S_N)_d, along dimension d.

Figure 3.3: Case 1 - No Measurement Disparity.

Case 2: Measurements "agree" along dimension d: ({}_m\bar{x}_i)_d \in (S_j)_d for all i, j, i \neq j.
Heuristic Output: The heuristic uncertainty region, (S_f)_d, should be smaller than the minimum sized measurement uncertainty region, (S_min)_d, along dimension d. Alternatively stated, in this case the uncertainty decreases.

Figure 3.4: Case 2 - Measurements "agree".

Case 3: Measurements "disagree" along dimension d: ({}_m\bar{x}_i)_d \notin (S_j)_d for all i, j, i \neq j, and (S_i)_d \cap (S_j)_d \neq \emptyset.

Heuristic Output: The heuristic uncertainty region, (S_f)_d, should be larger than the minimum sized measurement uncertainty region, (S_min)_d, along dimension d. Alternatively stated, in this case the uncertainty increases.

Figure 3.5: Case 3 - Measurements "disagree".

Case 4: Measurement error along dimension d: (S_i)_d \cap (S_j)_d = \emptyset for all i, j, i \neq j.

Heuristic Output: The resulting fused uncertainty ellipsoid should encompass the entire range of the measurement ellipsoids along the dimensions of measurement error. In this case the increase in uncertainty is strongly indicative of measurement error.

Figure 3.6: Case 4 - Measurement error.

In summary, it is proposed that the GRF method be revised to compensate for measurements with varying levels of disparity, due to bias or error. Ultimately, the output of the heuristic should be an adjustment of the GRF method's fused uncertainty ellipsoid. This adjustment should correctly correspond to the level and direction of measurement uncertainty, as reflected in the level and direction of disparity between the original measurements. The development of a heuristic method that possesses these properties is given in the following sections.

3.1.4 Heuristic Based Geometric Redundant Fusion (HGRF)

The derivation of the fusion heuristic begins with the simple case of fusing two one-dimensional measurements. The method is then extended to the fusion of m measurements that are n-dimensional. The overall approach employed in the following heuristic methods is as follows; each step is then explained in detail in the following sections.

1) Calculate the Fused Mean, \bar{x}_f, and Fused Variance, \sigma_N^2, using the GRF method given in equations (3.8), (3.13) and (3.14). This yields the fused result with no adjustments for measurement bias or error.

2) Calculate the Measurement Spacing Vectors, v_i, that quantify the vector distance between the measurements and the Fused Mean, \bar{x}_f. The Measurement Spacing Vectors provide an indication of the relative disparity of each measurement.

3) From the Measurement Spacing Vectors, v_i, the Separation Vector, k, is calculated. The Separation Vector quantifies the overall separation of the measurements with respect to the Fused Mean, \bar{x}_f. The objective of this step is to quantify the level of disparity between all the measurements.

4) Apply an Uncertainty Scaling Function, f_c(k), to the Separation Vector, k, to determine the Scaling Factor, C(k). The Uncertainty Scaling Function determines how the GRF fused variance, \sigma_N^2, should be adjusted. This adjustment depends on the size of the Separation Vector, k.

5) Apply the Scaling Factor, C(k), to the GRF fused variance, \sigma_N^2, in an appropriate frame of reference to get the HGRF fused variance, \sigma_f^2.

3.1.4.1 Measurement Spacing Vectors

Figure 3.7 illustrates the Measurement Spacing Vectors, v_i, for the one-dimensional, two measurement case. In this case v_i is one-dimensional (a scalar). Figure 3.8 illustrates a two-dimensional, two measurement case.
The Measurement Spacing Vectors represent the amount of scaling of the standard deviations of the measurements, \sigma_i, required such that the uncertainty ellipsoid of each measurement approaches the fused mean, \bar{x}_f. In general, the required scaling is not uniform but is instead an n-dimensional scaling that rotates and magnifies the ellipsoid. \bar{x}_f is defined as Nakamura's fused mean given by equations (3.8) and (3.13). Based on the above description, the Measurement Spacing Vectors are defined using the following vector relation:

\bar{x}_f = \sigma_i v_i + \bar{x}_i.   (3.17)

Rearranging equation (3.17) yields the Measurement Spacing Vectors, v_i, for an n-dimensional space:

v_i = \sigma_i^{-1}\left( \bar{x}_f - \bar{x}_i \right).   (3.18)

There is a one to one correspondence between Measurement Spacing Vectors and measurements. These vectors can be viewed as quantifying the distance of each measurement from the fused mean. This distance measure is normalized by the size of the uncertainty ellipsoid in a multi-dimensional fashion. This group of Measurement Spacing Vectors is used to determine the Separation Vector, k. The establishment of this vector is considered in the following section.

Figure 3.7: One-Dimensional Measurement Spacing Vectors.

Figure 3.8: N-Dimensional Measurement Spacing Vectors.

3.1.4.2 Separation Vector and the Scaling Reference Frame

The Separation Vector, k, is a vector that compiles the individual contributions of the Measurement Spacing Vectors, v_i. Thus k dictates the "stretch direction" and is used to determine the scaling applied to the GRF fused variance, \sigma_N^2. The Scaling Reference Frame, 3_k, is the frame in which the GRF result is scaled. This is explained further in this section. Figure 3.9 shows the Measurement Spacing Vectors, v_i, and the Separation Vector, k, for the three-dimensional measurement space case.

Figure 3.9: Measurement Spacing Vectors and the Separation Vector.

The components of the Separation Vector, k, and the Scaling Reference Frame, 3_k, are determined using a variation of the Gram-Schmidt orthogonalization process [18] on the set of Measurement Spacing Vectors {v_i, i = 1 ... m}. The procedure is as follows:

1) The first axis of the Separation Vector, k_1, is set to the largest of the Measurement Spacing Vectors, as depicted in Figure 3.9. The following equation provides this result:

|k_1| = \max_i(|v_i|), \quad i = 1 \ldots m,   (3.19)

with k_1 taken along the corresponding v_i.

2) The second axis of the Separation Vector, k_2, is set to the largest projection of a v_i onto the plane normal to k_1. This is accomplished by the following equation:

k_2 = \max_i \left( v_i - \frac{v_i \cdot k_1}{k_1 \cdot k_1}\, k_1 \right), \quad i = 1 \ldots m, \; i \neq l,   (3.20)

where l is the index of the measurement used in (3.19) and the maximum is taken over the magnitude of the bracketed term.

3) The remaining axes of the Separation Vector, k, are determined similarly according to the following equation:

k_j = \max_i \left( v_i - \sum_{l=1}^{j-1} \frac{v_i \cdot k_l}{k_l \cdot k_l}\, k_l \right),   (3.21)

where v_i has not been used previously to determine an axis of k.

The Scaling Reference Frame, 3_k, is defined by the set of orthogonal unit vectors along the k_j; the components of the Separation Vector, expressed in 3_k, are the magnitudes of the k_j. If the set of Measurement Spacing Vectors, {v_i, i = 1 ... m}, does not span the space of the measurements, then the remaining components of k are set to 0, with their corresponding axes of 3_k chosen to be orthogonal to the set of unit vectors already determined.

3.1.4.3 The Uncertainty Scaling Function

The following discussion details the development of the Uncertainty Scaling Function, f_c(k). This scalar function uses the information contained in the Separation Vector, k, to determine an appropriate Uncertainty Scaling Matrix, C = f_c(k), to apply to the GRF result. This function is developed to satisfy the heuristic specification of Section 3.1.3.
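As a simple one-dimensional illustration of equations (3.17) to (3.21), consider two equally uncertain measurements one standard deviation apart (a hypothetical example; the resulting separation value k = 0.5 reappears as case B below):

    import numpy as np

    # Two one-dimensional measurements with equal standard deviation sigma = 1,
    # separated by one standard deviation.
    x = np.array([0.0, 1.0])
    sigma = np.array([1.0, 1.0])
    x_f = np.average(x, weights=1/sigma**2)   # GRF fused mean, equation (3.15)
    v = (x_f - x) / sigma                     # Measurement Spacing Vectors, (3.18)
    k = np.max(np.abs(v))                     # Separation Vector (scalar case), (3.19)
    print(v, k)                               # [ 0.5 -0.5]  0.5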
The discussion begins with a review of some important details of the GRF method and then develops the Uncertainty Scaling Function for a simple case of two measurements of equal uncertainty. The function is then extended to the general case of multiple measurements.

In the GRF method, the maximum decrease in data uncertainty occurs when the uncertainties of the input measurements are equal. The maximum decrease in uncertainty, for the one-dimensional, m-measurement case, is by a factor of 1/\sqrt{m}. This can be seen by setting \sigma_1 = \sigma_2 = \cdots = \sigma_m in equation (3.14). On the other hand, when all but one measurement has infinite uncertainty (\sigma_j = \infty for all j \neq i), equation (3.14) returns a result with an uncertainty equal to the minimum uncertainty of the input measurements (\sigma_N = \sigma_{min}). Therefore, the GRF method results in a fused standard deviation ranging between \sigma_{min} and \sigma_{min}/\sqrt{m}, depending on the relative uncertainties of the measurements. Here, \sigma_{min} denotes the minimum standard deviation of the input measurements. This result correctly applies to the specific case of equal measurement means.

Consider Figures 3.10 to 3.13, cases A through D. Two one-dimensional measurements, x_1 and x_2, of equal uncertainty are being fused. The GRF method produces results with equal levels of uncertainty (\sigma_N = \sigma_{min}/\sqrt{2}) for all four cases, shown centered about \bar{x}_N.

Figure 3.10: Case A - Fusion of two equal measurements.

Figure 3.11: Case B - Fusion of two measurements separated by one standard deviation.

Figure 3.12: Case C - Fusion of two measurements separated by two standard deviations.

Figure 3.13: Case D - Fusion of two highly separated measurements.

In case A, measurements x_1 and x_2 are equivalent. The Separation Vector, k, is 0. Both the Separation Vector, k, and the Uncertainty Scaling Factor, C(k), are scalar in the one-dimensional case and are thus written as scalars in this section. According to Case 1 of the Heuristic Specification of Section 3.1.3, the GRF method without adjustment is applied at k = 0. Therefore the Uncertainty Scaling Matrix, C(k = 0), should be unity in this case.

In case B, each measurement, x_i, is on the border of the other's uncertainty region, S_j. From equation (3.18) and the algorithm in Section 3.1.4.2, k = 0.5 for this case. This case represents the borderline between Heuristic Specification cases 2 and 3. That is, the measurements are between "agreement" and "disagreement". According to the Heuristic Specification, fusion in this case should result in neither an increase nor a decrease in the uncertainty of the measurements. Therefore, the GRF method provides a result that is too optimistic for this case. Applying a scaling of C(k = 1/2) = \sqrt{2} to the GRF fused uncertainty, \sigma_N, guarantees that the fused result will not be more certain than \sigma_{min}.

In case C the measurement uncertainty regions, S_i, only intersect at their borders. From equation (3.18) and the algorithm of Section 3.1.4.2, k = 1 for this case. This case represents the borderline between Heuristic Specification cases 3 and 4. That is, the measurements are at the point where, if further separated, at least one measurement will be in error. According to the Heuristic Specification the result should include the entire region encompassed by the measurements' uncertainty ellipsoids. A scaling factor of C(k = 1) = 2\sqrt{2} yields the desired result.
Case D, where k > 1, is representative of all cases where the uncertainty ellipsoids of the measurements do not intersect. This is equivalent to Heuristic Specification case 4. This case indicates that there is an error with one or more of the measurements. Without additional information it is not possible to determine which measurement is in error. Therefore, the resulting fused uncertainty region should include the entire area that is spanned by the measurements and their uncertainties. The scaling factor, C(k), yielding such a result is derived for the one-dimensional case as follows. In case D the measurements and their uncertainty regions span a distance of k(\sigma_1 + \sigma_2) + \sigma_1 + \sigma_2 along the separation direction. If \sigma_1 = \sigma_2 then \sigma_N = \sigma_1/\sqrt{2}, and the required fused uncertainty is

\sigma_f = (k + 1)\,\sigma_1 = (k + 1)\sqrt{2}\,\sigma_N, \qquad \therefore \; C(k) = \sqrt{2}\,k + \sqrt{2} \quad \text{for } k \geq 1.   (3.22)

The above result is applicable for case D, where k > 1.

Combining cases A through C provides three points that the Uncertainty Scaling Function, C(k) = f_c(k), should pass through for 0 \leq k \leq 1. Requiring that the slope of f_c(k) be 0 at k = 0 and \sqrt{2} at k = 1 (so that f_c(k) is continuous) provides two more constraints for f_c(k) on 0 \leq k \leq 1. Fitting a fourth order polynomial through these points subject to the constraints yields:

f_c(k) = C = 1 - 1.1005k^2 + 8.1005k^3 - 5.1716k^4, \quad 0 \leq k \leq 1.   (3.23)

Unfortunately this curve has a slight dip below 1 around k = 0. This is not acceptable since a result more certain than the GRF result is not desired. A slight modification of case B such that C = 1.5 rather than C = \sqrt{2} = 1.41 yields a curve that is monotonic. Therefore, the Two Measurement Uncertainty Scaling Function shown in Figure 3.14 is given by:

f_c(k) = \begin{cases} 1 + 0.2721k^2 + 5.3553k^3 - 3.7990k^4, & 0 \leq k \leq 1 \\ \sqrt{2}\,k + \sqrt{2}, & k > 1 \end{cases}.   (3.24)

Figure 3.14: Uncertainty Scaling Function for Two Measurements.

3.1.4.3.1 The General Uncertainty Scaling Function

The previous section developed the Uncertainty Scaling Function, f_c(k), for the case of fusing two measurements. Fusing a higher number of measurements, m, in a one-dimensional space requires developing a curve similar to that of equation (3.24), according to the following specifications. In the region 0 \leq k \leq 1 the curve must pass through the points:

1) f_c(k = 0) = 1,   (3.25)

2) f_c(k = 0.5) = \sqrt{m},   (3.26)

3) f_c(k = 1) = 2\sqrt{m},   (3.27)

subject to the following constraints:

1) \left. \frac{d f_c}{d k} \right|_{k=0} = 0,   (3.28)

2) \left. \frac{d f_c}{d k} \right|_{k=1} = \sqrt{m}.   (3.29)

These check points ensure, given the separation of the measurements, that the resulting fused uncertainty does not provide an unrealistically optimistic result. Fitting a fourth order polynomial to the above constraints yields the following curve:

f_c(k) = 1 + (7\sqrt{m} - 11)k^2 - (7\sqrt{m} - 18)k^3 + (2\sqrt{m} - 8)k^4, \quad 0 \leq k \leq 1.   (3.30)

In the region k > 1 the curve must be

f_c(k) = \sqrt{m}\,k + \sqrt{m},   (3.31)

where m is the number of measurements. Combining equations (3.30) and (3.31) yields the General Uncertainty Scaling Function:

f_c(k) = \begin{cases} 1 + (7\sqrt{m} - 11)k^2 - (7\sqrt{m} - 18)k^3 + (2\sqrt{m} - 8)k^4, & 0 \leq k \leq 1 \\ \sqrt{m}\,k + \sqrt{m}, & k > 1 \end{cases}.   (3.32)

This function ensures a smooth degradation of the fused measurement's level of uncertainty. A plot of this curve for different values of m is given in Figure 3.15.

Figure 3.15: The General Uncertainty Scaling Function.

This function has been developed using one-dimensional measurements, but the Uncertainty Scaling Function, f_c(k), is a useful heuristic for fusion in any number of dimensions. Application of this heuristic to fusion of higher dimensional measurements is discussed in the following section.
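Equation (3.32) is simple to evaluate numerically; the following sketch (illustrative only; the function name is not from this work) implements the General Uncertainty Scaling Function:

    import numpy as np

    def uncertainty_scaling(k, m):
        """General Uncertainty Scaling Function of equation (3.32).

        k : component of the Separation Vector along one axis of the scaling frame
        m : number of fused measurements"""
        rm = np.sqrt(m)
        if k <= 1.0:
            # fourth-order polynomial: f(0) = 1, f(0.5) = sqrt(m), f(1) = 2*sqrt(m),
            # with zero slope at k = 0 and slope sqrt(m) at k = 1
            return 1.0 + (7*rm - 11)*k**2 - (7*rm - 18)*k**3 + (2*rm - 8)*k**4
        # linear growth once the measurement uncertainty regions no longer intersect
        return rm*k + rm

    # Example: two measurements separated by two standard deviations (k = 1)
    print(uncertainty_scaling(1.0, 2))   # ~2.83 = 2*sqrt(2)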
3.1.4.4 The Uncertainty Scaling Matrix

For the one-dimensional case the Uncertainty Scaling Matrix, C(k), is a scalar and the fused standard deviation is calculated via the following equation:

\sigma_f = C(k)\,\sigma_N.   (3.33)

Extending the above one-dimensional case to handle n-dimensional measurements involves considering the scalar quantities as n-dimensional vector quantities. In the case of n-dimensional fusion the GRF result requires scaling in all n dimensions. The General Uncertainty Scaling Function, f_c(k), derived for the one-dimensional case is applied to the n-dimensional case by composing a diagonal scaling matrix as follows:

C(k) = \mathrm{diag}\big(f_c(k_1), \ldots, f_c(k_n)\big), \qquad f_c(k_i) = \begin{cases} 1 + (7\sqrt{m} - 11)k_i^2 - (7\sqrt{m} - 18)k_i^3 + (2\sqrt{m} - 8)k_i^4, & 0 \leq k_i \leq 1 \\ \sqrt{m}\,k_i + \sqrt{m}, & k_i > 1 \end{cases},   (3.34)

where each dimension of the GRF result is scaled according to the corresponding dimension, i, of k. Due to the non-linearity of the Uncertainty Scaling Function, f_c(k), the Uncertainty Scaling Matrix, C(k), is not rotationally invariant. Therefore, the reference frame in which k is expressed determines the quantification of the Uncertainty Scaling Matrix, C(k), and, thus, the size of the fused ellipsoid. It is therefore necessary to determine the Uncertainty Scaling Matrix, C(k), in a consistent manner such that the result is independent of any particular reference frame. This is accomplished by applying the Uncertainty Scaling Function, f_c(k), when k is expressed in the Scaling Reference Frame, 3_k, defined by the procedure in Section 3.1.4.2. Therefore (3.32) becomes:

{}_kC({}_k k) = \mathrm{diag}\big(f_c(({}_k k)_1), \ldots, f_c(({}_k k)_n)\big).   (3.35)

The Uncertainty Scaling Matrix can then be applied to the GRF uncertainty ellipsoid result through the following similarity transform:

{}_{w1}\sigma_f^2 = {}_{w1}R_k \; {}_kC({}_k k) \left( {}_kR_{w2} \; {}_{w2}\sigma_N^2 \; {}_kR_{w2}^T \right) {}_kC({}_k k)^T \; {}_{w1}R_k^T,   (3.36)

where {}_{w1}\sigma_f^2 is the HGRF ellipsoid matrix expressed in 3_w1, {}_jR_i is the rotation matrix between frames 3_i and 3_j, {}_kC({}_k k) is the Uncertainty Scaling Matrix expressed in 3_k, and {}_{w2}\sigma_N^2 is the GRF ellipsoid matrix expressed in 3_w2.

The HGRF method, as developed above, is based upon the 1σ uncertainty ellipsoid. A similar derivation could be made based upon the aσ uncertainty ellipsoid, which results in a horizontal scaling by a of the Uncertainty Scaling Function, f_c(ak).

3.1.5 Numerical Example

Consider the fusion of three two-dimensional redundant measurements. The measurement data is included in Table 3.2. X and Y are the sensed position of an object in a plane with respect to the world frame, 3_w. The scalars a_ex and a_ey are the sizes of the principal axes of the uncertainty ellipsoids with respect to the ellipsoid axes frames, 3_e1, 3_e2 and 3_e3. The rotation angle, θ, is required to align 3_ei with 3_w. A graphical representation of the measurement ellipsoids is given in Figure 3.16.

    Measurement Number   X (mm)   Y (mm)   a_ex (mm)   a_ey (mm)   θ (degrees)
    1                    7        8        1           1.5         30
    2                    6        7        2           1           80
    3                    8        8        1           1           0

Table 3.2: Example Redundant Measurement Data.

Figure 3.16: Redundant Fusion Numerical Example.

Using equations (3.13) and (3.14), the fused mean and GRF variance are calculated to be:

{}_w\bar{x}_N = [6.9540 \;\; 7.7809],   (3.37)

{}_w\sigma_N^2 = \begin{bmatrix} 0.3531 & 0.0213 \\ 0.0213 & 0.5401 \end{bmatrix}.   (3.38)

The GRF result is shown in Figure 3.16 as the dotted line. Note that the uncertainty ellipsoids of the measurements must first be expressed in 3_w before applying equation (3.14). Next the Measurement Spacing Vectors, v_i, are calculated using equation (3.18) for each measurement.
This results in the following:

{}_w v_1 = [-0.0105 \;\; -0.1577],   (3.39)

{}_w v_2 = [1.0066 \;\; 0.4842],   (3.40)

{}_w v_3 = [-1.0460 \;\; -0.2191].   (3.41)

Using the algorithm in Section 3.1.4.2 and the Measurement Spacing Vectors above, the Separation Vector is determined to be

{}_k k = [1.1170 \;\; 0.2560].   (3.42)

This vector, {}_k k, is used to determine the Uncertainty Scaling Matrix according to equation (3.35), where m, the number of measurements, is 3. The Uncertainty Scaling Matrix, expressed in frame 3_k, is then

{}_kC({}_k k) = \begin{bmatrix} 3.6667 & 0 \\ 0 & 1.1527 \end{bmatrix}.   (3.43)

This scaling is applied to the GRF variance, equation (3.38), using equation (3.36). This results in the HGRF result, expressed in 3_w, as follows:

{}_w\sigma_f^2 = \begin{bmatrix} 4.2576 & 2.1007 \\ 2.1007 & 1.8348 \end{bmatrix}.   (3.44)

This uncertainty ellipsoid is shown in Figure 3.16 as the solid line. The HGRF result is more representative of the level of uncertainty of these measurements, given their disparity of position, than the result produced by the GRF method. Further results of this method are presented in Chapter 5.

The HGRF method, as developed in this section, suitably adjusts the GRF method's result based on the level of disparity between the measurements. The measurement disparity is quantified using the Separation Vector, k, as calculated from the Measurement Spacing Vectors, v_i. The heuristic based Uncertainty Scaling Function, f_c(k), smoothly degrades the uncertainty of the GRF result as the level of measurement disparity increases, providing the HGRF result. This method is limited to redundantly fusing measurements whose uncertainty can be adequately represented as ellipsoids and is specifically designed for fusing a small number of measurements.
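To make the complete HGRF calculation concrete, the following sketch reproduces the numerical example above (illustrative only: the helper names, the use of the symmetric matrix square root for σ_i in equation (3.18), and the rotation convention used to build the Table 3.2 covariances are assumptions, not part of the thesis implementation):

    import numpy as np

    def rot(deg):
        t = np.radians(deg)
        return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

    def mat_sqrt(cov):
        w, r = np.linalg.eigh(cov)
        return r @ np.diag(np.sqrt(w)) @ r.T

    def f_c(k, m):
        # General Uncertainty Scaling Function, equation (3.32)
        rm = np.sqrt(m)
        if k <= 1.0:
            return 1 + (7*rm - 11)*k**2 - (7*rm - 18)*k**3 + (2*rm - 8)*k**4
        return rm*k + rm

    def hgrf(means, covs):
        """HGRF fusion of m n-dimensional redundant measurements (Section 3.1.4)."""
        m, n = len(means), means[0].size
        # Step 1: GRF fused mean and variance, equations (3.13) and (3.14)
        inv = [np.linalg.inv(c) for c in covs]
        cov_n = np.linalg.inv(sum(inv))
        mean_n = cov_n @ sum(ic @ x for ic, x in zip(inv, means))
        # Step 2: Measurement Spacing Vectors, equation (3.18)
        v = [np.linalg.solve(mat_sqrt(c), mean_n - x) for x, c in zip(means, covs)]
        # Step 3: Separation Vector and Scaling Reference Frame, Section 3.1.4.2
        k, axes, used = np.zeros(n), [], set()
        for j in range(min(n, m)):
            res = {i: v[i] - sum(np.dot(v[i], a)*a for a in axes)
                   for i in range(m) if i not in used}
            i_best = max(res, key=lambda i: np.linalg.norm(res[i]))
            k[j] = np.linalg.norm(res[i_best])
            if k[j] == 0.0:
                break                      # spacing vectors do not span the space
            axes.append(res[i_best] / k[j])
            used.add(i_best)
        r_k = np.column_stack(axes)        # axes of the scaling frame (assumed full)
        # Steps 4 and 5: scale the GRF ellipsoid in the scaling frame, eq. (3.36)
        c_k = np.diag([f_c(ki, m) for ki in k[:r_k.shape[1]]])
        return mean_n, r_k @ c_k @ (r_k.T @ cov_n @ r_k) @ c_k @ r_k.T

    # Measurement data of Table 3.2; theta aligns each ellipsoid frame with 3_w
    means = [np.array([7.0, 8.0]), np.array([6.0, 7.0]), np.array([8.0, 8.0])]
    axes_angles = [(1.0, 1.5, 30.0), (2.0, 1.0, 80.0), (1.0, 1.0, 0.0)]
    covs = [rot(t).T @ np.diag([ax**2, ay**2]) @ rot(t) for ax, ay, t in axes_angles]
    mean_f, cov_f = hgrf(means, covs)
    print(mean_f)    # ~[6.954, 7.781], equation (3.37)
    print(cov_f)     # ~[[4.26, 2.10], [2.10, 1.83]], equation (3.44)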
3.2 Complementary Data Fusion

Complementary sensor measurements, in contrast to redundant measurements, acquire knowledge in spaces that do not overlap. Complementary measurements are useful for perceiving features in a workspace that would not be perceivable by using only one sensor [25]. A variety of application motivated methods for fusing complementary sensor data have been proposed in the literature. Reference [25] provides a comprehensive survey in this area. However, no framework has been proposed that allows a general architecture based application of complementary sensor fusion to automation workcell implementations. The development of such a framework would be beneficial to industrial implementations and future research. The following section presents some geometric based mechanisms for fusing complementary sensor data, directed towards initiating a general architecture based treatment of the subject.

Three specific cases of fusing complementary measurements are discussed in this work: (i) combining sensor measurements of the same feature taken in different dimensions, (ii) modifying the uncertainty of a measurement based on another sensor's measurement, and (iii) combining two sensor measurements taken in the same dimensions but over different ranges of sensing. There are many other types of complementary data, and the implementation of complementary fusion is unique for each sensory application. The following sections outline a framework in which the specific complementary fusion functions of dimensional extension, uncertainty modification and range enhancement can be implemented. These mechanisms are applied within the ELD Architecture, as described in Chapter 4.

3.2.1 Dimensional Extension

The Dimensional Extension function combines multiple measurements, taken of the same feature in different dimensions, into a higher dimensional representation. An example of the application of this function occurs when the position of an object on a plane is measured by two laser range finders, each acting on a different axis, as shown in Figure 3.17. These are both one-dimensional measurements and their sensing axes are not necessarily orthogonal. The two-dimensional position of the object is determined by combining the measurements and their associated uncertainties through dimensional extension.

Figure 3.17: Dimensional Extension of two one-dimensional measurements.

The following approach is based on [31] and is reformulated within the context of the Dimensional Extension function. Again, this derivation is based upon the assumption that the uncertainty of the measurements, and of the combination of the measurements, can be adequately described using Gaussian distributions. Consider the following function that maps measurements in one space to another space:

x = f(y),   (3.45)

where x is an n-dimensional column vector representing an n-dimensional measurement expressed in an orthogonal reference frame, and y is an n-dimensional column vector composed of a number of measurements that, combined, span the space of x. The basis vectors of the space of y are not necessarily orthogonal. A Gaussian disturbance is added to y such that

y = \bar{y} + \delta y,   (3.46)

where \bar{y} is the true value and \delta y is the disturbance. If \delta y is small enough then (3.45) is approximated by

x = f(\bar{y}) + J(\bar{y})\,\delta y,   (3.47)

where J(\bar{y}) is the Jacobian matrix of f(·) with respect to y, given as

J(y) = \frac{\partial f}{\partial y}.   (3.48)

The mean of measurement x is determined using equation (3.45) as follows:

\bar{x} = E[x] = f(\bar{y}),   (3.49)

and the variance is

V[x] = E\big[(x - \bar{x})(x - \bar{x})^T\big] = J\,\sigma_y^2\,J^T,   (3.50)

where \sigma_y^2 is the measurement uncertainty matrix, or uncertainty ellipsoid matrix, as defined in the previous section. These results allow the determination of the mean and uncertainty of the measurement in the space of x through the definition of f(·) and its Jacobian, J(·).

Consider again the example illustrated in Figure 3.17. For this problem f(·) can be determined from the geometry. With a baseline of d between the two range finders and measured ranges m_1 and m_2, the position of the object is

x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = f(m_1, m_2) = \begin{bmatrix} \dfrac{m_1^2 - m_2^2 + d^2}{2d} \\[2mm] \sqrt{m_1^2 - \left( \dfrac{m_1^2 - m_2^2 + d^2}{2d} \right)^2} \end{bmatrix},   (3.51)

and the Jacobian, J, of equation (3.52) follows from (3.48) by differentiating (3.51) with respect to m_1 and m_2. The mean of the laser range finder measurement can be expressed in the world frame, 3_w, by using equation (3.51), while the uncertainty ellipsoid can be determined using the following equation:

{}_w\sigma^2 = J\,\sigma_y^2\,J^T,   (3.53)

where

\sigma_y^2 = \begin{bmatrix} \sigma_{m_1}^2 & 0 \\ 0 & \sigma_{m_2}^2 \end{bmatrix},   (3.54)

and \sigma_{m_1} and \sigma_{m_2} are the standard deviations of the two laser range finder measurements.
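Numerically, this first-order propagation is straightforward; the following sketch (illustrative only: the triangulation geometry of equation (3.51), the variable names and the example values are assumptions consistent with Figure 3.17) extends two range measurements into a two-dimensional position with its uncertainty ellipsoid:

    import numpy as np

    def dimensional_extension(m1, m2, d, var_m1, var_m2):
        """Combine two range measurements from sensors a baseline d apart into a
        two-dimensional position and covariance (equations 3.45-3.54)."""
        x1 = (m1**2 - m2**2 + d**2) / (2*d)      # coordinate along the baseline
        x2 = np.sqrt(m1**2 - x1**2)              # coordinate normal to the baseline
        # Jacobian of (x1, x2) with respect to (m1, m2), cf. equation (3.52)
        J = np.array([[m1/d,                 -m2/d],
                      [m1*(d - x1)/(d*x2),    m2*x1/(d*x2)]])
        sigma_y = np.diag([var_m1, var_m2])      # uncorrelated range uncertainties
        return np.array([x1, x2]), J @ sigma_y @ J.T   # equations (3.49) and (3.50)

    # Example: ranges of 5 and 4 units measured from sensors 3 units apart
    position, cov = dimensional_extension(5.0, 4.0, 3.0, 0.01, 0.01)
    print(position)   # [3.0, 4.0]
    print(cov)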
3.2.2 Uncertainty Modification

Consider the situation where a two-dimensional position sensor is based on processing camera image data. The performance of such a sensor is very dependent on light levels. If the workspace is too dark or too light then predefined thresholds cease to be valid and the measurement's uncertainty increases. If an illuminometer, measuring the light level of the workspace, is available, the uncertainty of the position measurement can be adjusted so that it is more appropriate for the present operating conditions.

A relationship between a sensor measurement of an operating parameter and the uncertainty of another measurement can be composed based on expert knowledge. An example of such a relationship for the previous example is given below in Figure 3.18. This relationship can be generated using analytical, simulated and/or experimental results. The horizontal axis is the value of the sensed operating parameter. The vertical axis is the Uncertainty Adjustment Factor, a, which is the factor by which the uncertainty of the sensor measurement should be scaled due to the present operating conditions. Similarly, Mahajan et al. [26] have proposed a fuzzy logic based method that modifies the strength of a sensor's contribution using expert knowledge and complementary sensor measurements. This method was discussed in Chapter 2.

Figure 3.18: Certainty modification based on expert knowledge.

The Uncertainty Adjustment Factor, a, is applied to the uncertainty ellipsoid matrix using the following equation:

\sigma_A^2 = A\,\sigma^2\,A^T = A\,\sigma^2\,A,   (3.55)

where \sigma_A^2 is the adjusted uncertainty ellipsoid for the measurement, \sigma^2 is the uncertainty ellipsoid of the measurement, and A is a diagonal matrix formed from the Uncertainty Adjustment Factor, a, as follows:

A = \begin{bmatrix} \sqrt{a} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sqrt{a} \end{bmatrix}, \quad A \in \mathbb{R}^{n \times n},   (3.56)

where n is the dimensionality of the measurement. Equation (3.55) must be applied in the principal axes reference frame of the uncertainty ellipsoid of the measurement, \sigma^2, so that the scaling is uniform and no rotation is applied to the uncertainty ellipsoid. \sigma^2 is diagonal in this frame and therefore (3.55) can be simplified to

\sigma_A^2 = a\,\sigma^2.   (3.57)

3.2.3 Range Enhancement

As an example, one can consider an application where an overhead image is required to track an object's motion throughout the workspace. The required image resolution and the size of the workspace make it impossible for a single camera to perform adequately. Multiple cameras can be used where the fused images cover the entire workspace, as shown in Figure 3.19. This is an example of complementary fusion for the purpose of sensory range enhancement.

In order for range enhancing complementary data to be fused, the measurements must be taken in common dimensions and the individual sensing ranges must not overlap. If there is a common region, then the data in this region must be fused as redundant data, using the HGRF method presented previously. Furthermore, the complementary data must be expressed in the same format. Again considering the multiple camera example, if the cameras view the world at different scales then a direct combination of the images is not possible. First the scales of the images must be matched, and then complementary fusion can be performed.

Figure 3.19: Camera image range enhancement set-up.

As a further example, consider a laser range finder and a linear potentiometer that sense the position of a trolley that moves along a short 200 mm linear track. Both of these measurements are one-dimensional and are valid over different ranges. The laser range finder has a range of 50 mm and senses the distance from 150 to 200 mm. The linear potentiometer has a range of 175 mm and senses the position from 10 to 150 mm. This is shown in Figure 3.20.

Figure 3.20: Range Enhancing Complementary Fusion Example.

If the laser range finder does not report a valid measurement (i.e. there is no object surface within the 50 mm sensing range) then the potentiometer measurement is used. If the potentiometer measurement is out of range then the laser range finder measurement is used. If both measurements are in range then the data is redundant and the proposed redundant fusion method is used.
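The selection logic of this example is simple to express; the following sketch (illustrative only; the names and the fusion callback are placeholders) chooses between, or redundantly fuses, the two position measurements:

    def enhanced_position(laser, potentiometer, fuse_redundant):
        """Range enhancement for the trolley example of Section 3.2.3.

        laser, potentiometer : (position_mm, variance) tuples, or None when the
                               sensor reports no valid measurement
        fuse_redundant       : redundant fusion function (e.g. the HGRF method)"""
        if laser is None:            # no surface within the laser's sensing range
            return potentiometer
        if potentiometer is None:    # trolley beyond the potentiometer's range
            return laser
        # both sensors report a value: the data is redundant and is fused as such
        return fuse_redundant(laser, potentiometer)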
3.3 Projection

This transformation is the projection of an n-dimensional ellipsoid into an (n-1)-dimensional ellipsoid. Additional projections can be used to further reduce the dimensionality of the data. For example, one can consider a three-dimensional measurement of the end-effector position of a three degree-of-freedom Cartesian manipulator. This measurement is used as the feedback signal for the one-dimensional axis controllers. However, each controller only requires the dimension that pertains to it, and therefore the data must be projected onto the proper axis.

Defining the projection as the total range of the ellipsoid along the dimension of interest is a conservative definition. There is a loss of information when performing this operation. This is most apparent when there is a strong correlation between two dimensions, i.e., the ellipsoid's principal axes are 45 degrees from the axes of interest. For example, the two-dimensional ellipsoid is projected onto the axes shown in Figure 3.21 and then a subsequent extension operation is performed to reconstruct the two-dimensional ellipsoid. It can be seen that the reconstructed ellipsoid is larger than the original and therefore some knowledge has been lost in the projection operation.

Figure 3.21: Projection and reconstruction of a two-dimensional ellipsoid.

A search for a suitable mathematical method to accomplish the projection function was unsuccessful and therefore the following method is included for completeness. Projecting the ellipsoid into a lower dimensional space requires finding the maximum and minimum values of the ellipsoid along the dimensions of that space. Determining the maximum and minimum values of an ellipsoid in a dimension is accomplished by taking the partial derivatives of the ellipsoid equation and solving for the extrema as follows. The equation of an n-dimensional ellipsoid is given as:

[x_1 \; \cdots \; x_n] \begin{bmatrix} \varphi_{1,1} & \cdots & \varphi_{1,n} \\ \vdots & & \vdots \\ \varphi_{n,1} & \cdots & \varphi_{n,n} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = 1,   (3.58)

where the square matrix is the inverse of the covariance matrix, \sigma^2. Note that the origin of the ellipsoid is at the origin of the reference frame in which this equation is expressed. Multiplying the matrices in (3.58) yields the following equation for a rotated n-dimensional ellipsoid:

x_1(x_1\varphi_{1,1} + \cdots + x_n\varphi_{n,1}) + x_2(x_1\varphi_{1,2} + \cdots + x_n\varphi_{n,2}) + \cdots + x_n(x_1\varphi_{1,n} + \cdots + x_n\varphi_{n,n}) = 1.   (3.59)

Taking the partial derivative of (3.59) with respect to x_k gives the following equation:

\sum_{j=1}^{n} x_j\varphi_{k,j} + \sum_{j=1}^{n} x_j\varphi_{j,k} = 0.   (3.60)

Noting that \varphi_{i,j} = \varphi_{j,i}, dividing by a factor of 2 and putting (3.60) into matrix form yields:

[x_1 \; \cdots \; x_n] \begin{bmatrix} \varphi_{k,1} \\ \vdots \\ \varphi_{k,n} \end{bmatrix} = 0.   (3.61)

This is the equation for the partial derivative, with respect to x_k, of an n-dimensional ellipsoid. To determine the range of the ellipsoid in dimension l, one can take the partial derivative with respect to every dimension except l. This results in a system of n-1 equations of the form of (3.61), as given below:

[x_1 \; \cdots \; x_n] \begin{bmatrix} \varphi_{1,1} & \cdots & \varphi_{l-1,1} & \varphi_{l+1,1} & \cdots & \varphi_{n,1} \\ \vdots & & \vdots & \vdots & & \vdots \\ \varphi_{1,n} & \cdots & \varphi_{l-1,n} & \varphi_{l+1,n} & \cdots & \varphi_{n,n} \end{bmatrix} = [0 \; \cdots \; 0].   (3.62)

Solving this system of equations for the x_j results in n-1 equations of the form

x_j = a_j x_l + b_j,   (3.63)

where a_j and b_j are constants. Substituting these equations into (3.59) yields the solution for x_l and produces the minimum and maximum values of the ellipsoid on dimension l. Substituting the minimum and maximum values into (3.63) results in the coordinates of the minimum and maximum points of the ellipsoid along dimension l. The above derivation needs to be performed for each dimension of the ellipsoid for which a solution is desired.
For a two-dimensional ellipsoid the equations for the minimum and maximum values of both dimensions are:

\max(x_1), \min(x_1) = \bar{x}_1 \pm \sqrt{\frac{\varphi_{2,2}}{\varphi_{1,1}\varphi_{2,2} - \varphi_{1,2}^2}},   (3.64)

\max(x_2), \min(x_2) = \bar{x}_2 \pm \sqrt{\frac{\varphi_{1,1}}{\varphi_{1,1}\varphi_{2,2} - \varphi_{1,2}^2}},   (3.65)

where (\bar{x}_1, \bar{x}_2) is the center of the ellipsoid. The following are the similar equations for a three-dimensional ellipsoid:

\max(x_1), \min(x_1) = \bar{x}_1 \pm \beta\sqrt{\varphi_{2,2}\varphi_{3,3} - \varphi_{2,3}^2},   (3.66)

\max(x_2), \min(x_2) = \bar{x}_2 \pm \beta\sqrt{\varphi_{1,1}\varphi_{3,3} - \varphi_{1,3}^2},   (3.67)

\max(x_3), \min(x_3) = \bar{x}_3 \pm \beta\sqrt{\varphi_{1,1}\varphi_{2,2} - \varphi_{1,2}^2},   (3.68)

where

\beta = \frac{1}{\sqrt{\varphi_{1,1}\varphi_{2,2}\varphi_{3,3} + 2\varphi_{1,2}\varphi_{2,3}\varphi_{1,3} - \varphi_{1,1}\varphi_{2,3}^2 - \varphi_{2,2}\varphi_{1,3}^2 - \varphi_{3,3}\varphi_{1,2}^2}}.   (3.69)

In order to project an n-dimensional ellipsoid into (n-1)-dimensional space, the coordinates of the maximum (or minimum) points in each dimension except the one being collapsed are required. These coordinates, equations (3.70) to (3.75) for the three-dimensional case, are obtained by substituting the maximum and minimum values back into the corresponding equations of the form (3.63). Once the coordinates of the maximum (or minimum) points are determined, an (n-1)-dimensional ellipsoid is fit to these points by using equation (3.58) together with an expression for the maximum or minimum value along one of the projected dimensions, such as one of equations (3.64) to (3.68). A three-dimensional ellipsoid projected into a two-dimensional ellipsoid is shown below in Figure 3.22, and a numerical example follows.

Figure 3.22: The projection of a three-dimensional ellipsoid into a two-dimensional ellipsoid.

Consider a three-dimensional ellipsoid, centered at the origin, with the following uncertainty ellipsoid matrix:

{}_w\sigma^2 = \begin{bmatrix} 2 & 0 & -0.7071 \\ 0 & 2 & 0.7071 \\ -0.7071 & 0.7071 & 2 \end{bmatrix}.   (3.76)

The inverse of this matrix is taken, resulting in the ellipsoid equation in the form of (3.58) as follows:

[x_1 \; x_2 \; x_3] \begin{bmatrix} 0.5833 & -0.0833 & 0.2357 \\ -0.0833 & 0.5833 & -0.2357 \\ 0.2357 & -0.2357 & 0.6667 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = 1.   (3.77)

Projecting the ellipsoid onto the x_1-x_2 plane requires the maximum points in the x_1 and x_2 directions. From equations (3.66), (3.67), (3.70) and (3.72) the points are:

\max(x_1) = 1.4143,   (3.78)

\max(x_2) = 1.4143,   (3.79)

x_2\big|_{x_1 = 1.4143} = 0,   (3.80)

x_1\big|_{x_2 = 1.4143} = 0.   (3.81)

Substituting these points into the general equation of a two-dimensional ellipsoid, as given in equation (3.58), provides two equations. Using equation (3.64) provides a third equation that allows the parameters of the two-dimensional ellipsoid equation to be solved as follows:

[x_1 \; x_2] \begin{bmatrix} 0.50 & 0 \\ 0 & 0.50 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 1.   (3.82)

The corresponding uncertainty ellipsoid matrix is given as:

{}_w\sigma_{2D}^2 = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}.   (3.83)

This particular example results in a circular ellipsoid; however, this is not generally the case.
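The projection can also be carried out directly from the covariance matrix; the following sketch (illustrative only; it uses the fact that the extent of the ellipsoid along a dimension is the square root of the corresponding diagonal element of the covariance matrix, and takes the matching sub-block of the covariance as the projected ellipsoid, which reproduces (3.83) for this example):

    import numpy as np

    def project(cov, keep):
        """Project an n-dimensional uncertainty ellipsoid onto the dimensions
        listed in keep (Section 3.3)."""
        cov = np.asarray(cov, dtype=float)
        extents = np.sqrt(np.diag(cov))       # max/min value along each dimension
        tangent_points = cov / extents        # column i: point where x_i is extremal
        projected = cov[np.ix_(keep, keep)]   # lower-dimensional uncertainty ellipsoid
        return extents, tangent_points, projected

    # Three-dimensional example of equation (3.76), projected onto the x1-x2 plane
    cov3 = np.array([[2.0, 0.0, -0.7071],
                     [0.0, 2.0,  0.7071],
                     [-0.7071, 0.7071, 2.0]])
    extents, tangents, cov2 = project(cov3, [0, 1])
    print(extents[:2])   # ~[1.414, 1.414], equations (3.78) and (3.79)
    print(cov2)          # [[2, 0], [0, 2]], equation (3.83)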
3.4 The Ellipsoid Representation and Non-linear Transformations

Consider the situation where a laser range finder is used to measure the position of an object, as depicted in Figure 3.23. The measurement consists of the distance, r, and the angle of the laser from a known reference, θ.

Figure 3.23: Laser Range Finder Measurement.

A Gaussian distribution is assumed to describe the uncertainty of both of these variables. Therefore, in the (r, θ) space an ellipsoid accurately describes the measurement uncertainty. However, the information is needed in (x, y) Cartesian space and therefore the following transformation is applied:

r = \sqrt{x^2 + y^2},   (3.84)

\theta = \cos^{-1}\!\left( \frac{x}{\sqrt{x^2 + y^2}} \right).   (3.85)

This transformation is non-linear and therefore the shape of the two-dimensional Gaussian surface is not maintained in the (x, y) space. The equation of the two-dimensional Gaussian expressed in (x, y) space is given below, and the distribution is shown in Figure 3.24:

p(x, y) = \frac{1}{2\pi\sigma_r\sigma_\theta} \exp\!\left( -\left[ \frac{\left(\sqrt{x^2 + y^2} - \bar{r}\right)^2}{2\sigma_r^2} + \frac{\left(\cos^{-1}\!\big(x/\sqrt{x^2 + y^2}\big) - \bar{\theta}\right)^2}{2\sigma_\theta^2} \right] \right).   (3.86)

Figure 3.24: Measurement Uncertainty in (x, y) Space with Large Uncertainties (σ_r = 1, σ_θ = π/4, r_mean = 10, θ_mean = 0).

Clearly, an ellipsoid representation of the uncertainty of this measurement in (x, y) Cartesian space is not valid for large uncertainties. An ellipsoid representation can be used as an approximation, however, when the measurement uncertainty is small, as is the case in Figure 3.25.

Figure 3.25: Measurement Uncertainty in (x, y) Space with Small Uncertainties (σ_r = 1, σ_θ = π/12, r_mean = 10).

Therefore, caution must be exercised when applying non-linear transformations to data represented using uncertainty ellipsoids.
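This behaviour is easy to verify numerically; the following sketch (illustrative only; the parameter values follow Figure 3.24, with the smaller angular uncertainty of Figure 3.25 for comparison) maps Gaussian (r, θ) samples through the polar-to-Cartesian transformation and compares the resulting spread in (x, y):

    import numpy as np

    rng = np.random.default_rng(0)

    def cartesian_spread(r_mean, theta_mean, sigma_r, sigma_theta, n=100_000):
        """Sample the (r, theta) Gaussian, map the samples to (x, y) and return
        the sample mean and covariance in Cartesian space."""
        r = rng.normal(r_mean, sigma_r, n)
        th = rng.normal(theta_mean, sigma_theta, n)
        xy = np.column_stack([r * np.cos(th), r * np.sin(th)])
        return xy.mean(axis=0), np.cov(xy.T)

    # Large angular uncertainty (Figure 3.24): the (x, y) distribution is curved
    # and the sample mean is pulled well away from (10, 0).
    print(cartesian_spread(10.0, 0.0, 1.0, np.pi / 4))
    # Small angular uncertainty (Figure 3.25): an ellipsoid is a fair approximation.
    print(cartesian_spread(10.0, 0.0, 1.0, np.pi / 12))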
3.5 Summary

In this chapter a redundant fusion method, referred to as the HGRF method, has been presented that provides reliable results for all measurement cases. The method is based upon the GRF method [31] and assumes that the ellipsoid representation adequately describes the uncertainty of the measurements. Further, three specific cases of complementary data fusion have been presented in this chapter. A dimensional extension function has been presented that is useful for combining measurements taken of the same feature in non-overlapping measurement spaces. An approach has also been presented where a sensor's measurement can be used to modify the uncertainty of another's measurement. Additionally, a method for combining different measurements taken in non-overlapping ranges in an overlapping measurement space has been presented. Finally, a method for projecting an ellipsoid into a lower dimension has been developed.

All the methods presented in this chapter are applicable to the fusion of measurement data taken in spaces where the ellipsoid representation suitably specifies the uncertainty of the data. Caution should be exercised when applying these methods in non-Cartesian measurement spaces and when using non-linear transformations between spaces. In such cases the ellipsoid representation is an approximation that may or may not be acceptable for the application.

Chapter 4

4 The Encapsulated Logical Device Architecture

The Encapsulated Logical Device (ELD) Architecture is a framework developed to improve the design and enhance the operational reliability of industrial automation workcells. The ELD is based upon the LS concept and includes additional functionality making it a versatile device suitable for industrial automation workcell applications. The ELD Architecture provides a suitable framework in which the fusion mechanisms developed in Chapter 3 can be implemented. While this thesis focuses on industrial automation workcells, this work could be extended and applied to other systems such as autonomous robotics and distributed systems. This chapter describes the functionality of the ELD object and then details the ELD Architecture. A short discussion explaining the role of developers and users of this technology is included at the end of this chapter. Examples of implemented ELDs and an ELD Architecture are included in Chapter 6.

4.1 The Encapsulated Logical Device

This section begins with a specification for the design of the general ELD object. This encompasses design principles, functions and data that are common to all ELDs. Following this, the details of the components of the ELD object design are discussed.

4.1.1 Specifications

Based on the research discussed in Chapter 2, the following specifications are proposed for the design of the ELD object:

1) Each ELD has a common framework. This is a fundamental property of the LS resulting from OOD [37][11]. A common framework allows designers to easily implement and modify all ELDs. All common features of the ELD object should be derived from a common class. This specification should also be maintained in cross platform design, where possible. This would require developing an ELD base class on every platform supported.

2) Each ELD possesses all required knowledge. There should be no centrally stored information used by multiple ELDs. Again this is founded in the LS design [11]. Each ELD should possess all information that it requires to perform its tasks. Some information, such as sensor input, can be received through communication channels.

3) Each ELD only has knowledge relevant to the scope of that ELD. All knowledge that is contained within an ELD should be relevant to the functions of that ELD. An ELD should have no knowledge of another ELD's functions or data. ELDs are unaware of the internal workings of all other ELDs. This is again included in the LS design [11].

4) Each ELD is capable of being a stand-alone Sensor-Actuator Feedback Control Loop. Each ELD should have sensor input and the ability to process this input. Further, each ELD should be able to supply a controlling signal to an actuator. Budenske and Gini in [8] proposed this concept. Actuators are not required in every ELD; however, a sensor is required. Further, the ELD should possess all high-level intelligence required for independently reliable operation.

5) All data is represented as uncertain. All sensory and processed data within the ELD should be represented as uncertain. Also, all data processing within the ELD should consider the level of certainty of the data used. This is done to increase the operational reliability of the automation workcell. The importance of this concept was stated in [13]. Many approaches to uncertainty representations have been proposed, as was discussed previously in Chapter 2.

6) Each ELD can have many inputs and many outputs. Again this is based on the original LS specification [11].

7) ELDs are only able to communicate with ELDs that are directly connected. ELDs, as with LSs [11], should not be able to send messages or data to another ELD that is not directly connected to it. ELDs should not even be aware of the existence of unconnected ELDs. However, communication between indirectly connected ELDs can occur through a chain of connected ELDs. For example, a high level ELD could request data from its input ELDs, which in turn request data from their input ELDs. The data is thus passed through the chain.

The following specifications have not been explicitly stated in other architecture specifications in the literature; however, the concepts are not novel. These specifications are specific to the implementation of the ELD object.

8) Communication between ELDs can be commands and responses. ELDs are able to send commands to ELDs that provide them input.
In reply, the ELD that provides the input sends a response at the request of the commanding ELD. This is the normal direction of communication between ELDs.

9) Communication between ELDs can also be requests and replies. Requests and replies, as they are called, are in the opposite direction to commands and responses. An ELD can issue a request to an ELD to which it supplies output. A reply is in turn sent to the requestor. This direction of communication is required for specific implementation circumstances, as is discussed further in the Communication section.

10) ELDs can be implemented on different platforms. The ELD design methodology should not be platform specific. However, support of different hardware and software platforms requires developing ELD object programming code for each platform. Reliable communication channels should exist on and between each platform for connecting ELDs.

11) Each ELD is an independent process. Each ELD should be implemented as a separately executing entity. This would allow each ELD to be executed on a separate processor, if available. Thus, the possibility of parallel processing would exist, enhancing the speed of operation. However, ELDs should be implemented in a fashion allowing a single processor system to be used as well.

12) A "drag and drop" Graphical User Interface (GUI) design can be used to implement core ELD functionality. To allow for rapid development of an ELD Architecture application using a GUI tool, a uniform and sufficiently general design should be adopted for the ELD. This is facilitated by uniform data and functions and systematic naming conventions.

4.1.2 ELD Components

The ELD has been implemented according to the specifications listed in the above section. The OOD approach was utilized, which ensures that specifications 1 through 3 are met. Therefore, each ELD implemented has a modular and self-contained common framework. Figure 4.1 is a diagram of the general ELD object. The three main components of an ELD are discussed below: the sensor, manager and actuator.

Figure 4.1: The Encapsulated Logical Device (ELD). (* Detailed design provided. ** Framework provided.)

4.1.3 The Sensor

The sensor component of the ELD contains the ELD's sensing abilities and data. This component fulfils half of specification 4. Included in the sensor component are the data input driver, fusion and processing functions, which are discussed below. The sensor functions and data are defined in the ELD base classes, as can be viewed in Appendix A.

4.1.3.1 Sensor Input Driver

Sensor input can come directly from physical sensors, such as an encoder. If this is the case then the Sensor Input Driver directly interfaces with the sensor hardware and must be implemented individually for each ELD. A library of Sensor Drivers could be developed to allow efficient implementation when hardware common to other applications is used. ELDs with physical sensor inputs are at the lowest level in the architecture and can be thought of as highly functional drivers. Alternatively, sensor input can also be received from other ELDs. In this case the Sensor Input Driver interacts with a "virtual" or Logical Sensor and receives data through the manager's communication channels, which are discussed later.
The Sensor Input Driver decodes received sensor input and maps it to an internal data structure. In this case, the Input Driver can be configured automatically for each ELD, according to specification 12. Further, in agreement with specification 6, the Sensor Input Driver is able to connect to multiple physical and Logical Sensors.

4.1.3.2 Fusion Module

According to specification 5, all data within the ELD Architecture is represented as uncertain. Therefore, a mechanism must be included in the ELD that enables the systematic combination of uncertain data. The uncertainty ellipsoid representation, as discussed in Chapter 3, is adopted as the uncertainty representation used in the ELD. Further, the fusion mechanisms presented in Chapter 3 are utilized within the ELD's Fusion Module. These fusion mechanisms are only applicable to specific cases of data fusion. Fusion cases encountered in the implementation of a specific ELD that cannot be handled using these mechanisms must be handled individually. Each ELD's Fusion Module must be configured individually according to the sensor data that is received by that ELD.

The Sensor Fusion Module of each ELD is internally a hierarchy of fusion mechanisms. An example sensor fusion module is given in Figure 4.2. The fusion mechanisms available include multi-dimensional redundant fusion using the HGRF method, dimensional extension, range enhancement, uncertainty modification and projection, as detailed in Chapter 3. The Fusion Module, using these fusion mechanisms, can be configured using a GUI tool, based on specification 12. Once the sensor input is received, the data is fused in the fusion module and is then passed to the processing module.

Figure 4.2: Example ELD Sensor Fusion Module.

4.1.3.3 Processing Module

The Processing Module processes the fused data into sensor output, according to specification 4. This is the module in which the customized ELD sensor functions are defined, making each ELD's sensor component unique. Since these processing algorithms are defined specifically for each ELD, the base ELD class provides an empty function that must be developed by the ELD designer. The uncertainty of the data used must be considered in the algorithms developed. Additionally, the sensor output produced must have an associated uncertainty quantification, according to specification 5. The specific function used to determine this uncertainty depends on the processing algorithm used. In general, the output uncertainty is a function of the input and processing uncertainty:

\sigma_{out} = f(\sigma_p, \sigma_{in}),   (4.1)

where \sigma_{out} is the uncertainty of the output, \sigma_p is the uncertainty of the process, and \sigma_{in} is the uncertainty of the input. This function may be determined through analytical and experimental methods on a case-by-case basis. The output of the Processing Module is available to the ELD's actuator component and to other ELDs that are connected as outputs.
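Both the Fusion Module and the Processing Module operate on uncertain data. As an illustration of how a fusion hierarchy like that of Figure 4.2 might be composed from the Chapter 3 mechanisms, the following sketch is given (illustrative only; the class, its fields and the example configuration are assumptions, not the thesis's ELD base classes):

    class FusionStage:
        """One node in an ELD Fusion Module hierarchy.

        mechanism : callable implementing one fusion mechanism (e.g. HGRF
                    redundant fusion, projection, dimensional extension)
        children  : child FusionStage objects, or keys naming raw sensor inputs"""
        def __init__(self, mechanism, children):
            self.mechanism = mechanism
            self.children = children

        def evaluate(self, raw_inputs):
            # Resolve each child: recurse into sub-stages, look up raw inputs.
            resolved = [child.evaluate(raw_inputs) if isinstance(child, FusionStage)
                        else raw_inputs[child] for child in self.children]
            return self.mechanism(resolved)

    # Example configuration: two redundant position measurements are fused
    # (e.g. with the HGRF method) and the result is combined with a range
    # measurement by dimensional extension.  (hgrf and extend are placeholders.)
    # module = FusionStage(extend, [FusionStage(hgrf, ["pos_a", "pos_b"]), "range"])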
4.1.4 The Manager

The Manager is responsible for all high-level functions of the ELD, including communicating with other ELDs, interpreting messages received from other ELDs, planning and maintaining a knowledge base. Detailed designs for the Interpreter and the Communications modules are included in this work. Only a framework has been provided for the Planner and Knowledge Base modules of the manager. All manager functions and data are included in the ELD base classes, as shown in Appendix A.

4.1.4.1 Communication

The Communication module has the ability to send commands to input ELDs and requests to output ELDs according to specifications 8 and 9. Additionally, according to specification 7, this module does not allow the ELD to communicate with ELDs that it is not connected to. The Communication module is common to all ELDs and is fully implemented in the base ELD class. The Communication module packages messages and accesses inter-ELD communication channels. The mechanism used for inter-ELD communication depends on the platforms the system is implemented on. In this implementation, based on a Microsoft NT system, "Pipes" are used as a reliable communication channel. They are accessible from any NT application, both locally and across a network. However, if real-time performance is required, a different platform such as the Open Real-Time operating System (ORTS) [4] executing on a Digital Signal Processor (DSP) would be required. NT pipes have also been implemented to provide communication between ORTS and NT based applications.

The format of inter-ELD messages is as follows:

    Parameter         Description
    Sender            Sending ELD Name
    Recipient         Receiving ELD Name
    Message           Command, Response, Request or Reply
    Handshaking       Handshaking Type
    Parameter 1       Data Value
    Parameter 2       Data Value
    ...
    Parameter (n-1)   Data Value
    Parameter n       Data Value
    Data              Memory Address

The first two parameters indicate where the message originated and for whom it is intended. The unique ELDName tag of each ELD is used for these parameters. The third parameter, Message, indicates the subject of the communication. A message can be a command, response, request or a reply. Examples of some commands are 'Send Output', 'Change Set Point' and 'Calibrate'. A list of possible commands is included in the ELD base class data. This list can be appended as applications require. In keeping with the specifications, commands sent can have no knowledge of the internal workings of a connected ELD. For example, a command such as 'Increase Image Resolution' would not be valid, while 'Increase Sensor Certainty' would be. Commands are the most common message sent by the communication module; however, requests are also possible.

Requests are messages sent to output ELDs making a special request for information or data. A list of requests is maintained in the ELD base class data. This list is specific to the application implemented, as there are no requests common to all ELDs. The ability to send requests is required for a low-level ELD to request action or information from a higher-level ELD. One can consider the situation where a camera ELD has been commanded to calibrate but a manipulator must move out of view first. The commanding ELD has no knowledge of the calibration procedure and cannot know if the manipulator must move. Therefore, the camera ELD must first request the manipulator to move and then perform the calibration.

The communication module also sends responses and replies to commanding or requesting ELDs. These messages include requested information. The reply that is sent depends on the handshaking mode indicated in the command, which is the fourth parameter. The handshaking modes possible are no reply, reply when the message is received and reply when the task is complete. The application developer can add additional modes, and the mode used for each message is the designer's choice.
If handshaking is desired, the executing thread of the commanding or requesting ELD is placed in a waiting mode. The wait ends when the appropriate response is received, or when a predefined amount of time has elapsed, indicating failure. The next n parameters are available for data passing. These are defined specifically for each message. The final parameter, Data Address, is useful for passing data structures of undefined type. This is the parameter that is used to pass sensor output data structures in reply to a 'Send Output' command. 4.1.4.2 Interpreter, Planner and Knowledge Base The Interpreter monitors the communication channels that are connected to the ELD for commands and requests received. The messages received are decoded, checked for validity and routed to the Planner. The base ELD design provides for interpretation of messages that are common to all ELDs., Messages specific to an ELD must be manually added to the Interpreter on a case-by-case basis. The Planner determines what action the ELD should take upon the receipt of commands and requests. The Planner, together with the Knowledge Base, provides the independent operation required of specification 4. The detail of this component is out of the scope of this thesis and only a framework has been provided in this ELD implementation. Presently, the Planner tasks are hardcoded in the Interpreter component. Bayesian and Dempster-Shafter approaches to uncertain reasoning as seen in the SFX Architecture [28], and approaches such as Budenske and Gini's in [8] could be used for this component, where a high level goal, received as a message, is decomposed into a detailed plan. The plan could include sensing and actuation tasks as well as the allocation of the ELD's physical resources. This module interacts with the Knowledge Base to provide high-level intelligent behaviour in the ELD. 60 The Knowledge Base is a depository for data and knowledge, describing the environment relevant to the scope of the E L D . Again, only a framework for this component is provided and the detailed design of this component is out of the scope of this thesis. The Knowledge Base could be hard coded or gathered through sensing and reasoning. This component can be described as a world model. This includes issues such as how to acquire pertinent information and manage acquired uncertain knowledge as it is gathered. Work in the area of decision support and diagnostic expert systems is of interest for the design of this component [38]. 4.1.5 The Actuator The Actuator component of the E L D is responsible for controlling the physically connected actuators, as required by specification 4. A framework for this component is included in this work but the details of its design is not addressed. ELDs that have a direct connection to an actuator form a feedback control loop. The actuator component contains an Output Driver and a Controller. Not every E L D has an implemented Actuator component but only those directly connected to a physical actuator. The actuator framework is included in the E L D base classes given in Appendix A. The Output Driver interfaces the E L D with the physical actuator. This involves communicating with output hardware such as an I/O card, DSP or parallel port. Implementation of this function depends on the hardware used and thus must be individually designed for each E L D . A library of Output Drivers could be developed to allow efficient implementation if hardware common to other applications is used. 
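Such a library becomes practical if every Output Driver implements a common interface that the Controller can call without knowing which hardware sits behind it. The sketch below is a hypothetical C++ interface, not the one defined in Appendix A; the class and method names are assumptions used only to illustrate the idea.

// Abstract interface that hardware-specific Output Drivers would implement.
class COutputDriver {
public:
    virtual ~COutputDriver() {}

    // Open and close the physical output channel (I/O card, DSP, parallel port).
    virtual bool Open() = 0;
    virtual void Close() = 0;

    // Write one actuation command (e.g. a motor voltage or step rate).
    virtual bool WriteOutput(double value) = 0;
};

// Example library entry: a driver for a generic analog output card.
class CAnalogCardDriver : public COutputDriver {
public:
    virtual bool Open()  { /* configure the card channel (hardware-specific) */ return true; }
    virtual void Close() { /* release the channel */ }
    virtual bool WriteOutput(double value) { /* write value to the card's DAC */ (void)value; return true; }
};

A Controller written against COutputDriver could then be reused with any driver in the library, which is the reuse benefit described above.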
The implementation of the Controller is again accomplished on an individual E L D basis and has not been detailed as a part of this work. The Controller includes algorithms for low-level control, high level tuning and controller selection. The uncertainty quantification of sensor data and information from the Planner and Knowledge Base can be used in the control process. A library of control algorithms, for use with a GUI E L D design tool, could be developed to increase the efficiency of the design of ELDs. 4.1.6 The Executing Thread Each E L D is implemented as a separate thread that runs in parallel to all other E L D threads, in accordance to specification 11. The primary E L D execution loop continually monitors all communication channels associated with that E L D . When a message is received, it is interpreted and the planner takes the appropriate action. This could involve a series of sensor and actuator tasks. Once the plan is completed the executing thread returns to monitoring the communication channels until the next message is received. 61 4.2 The ELD Architecture This section begins with the specification for the design of the E L D Architecture. This governs E L D connections and the run-time GUT. A discussion about the components of the E L D Architecture follows. 4.2.1 Specifications The following specifications govern the design of the E L D Architecture: 13) ELDs are connected hierarchically. The E L D Architecture consists of a hierarchical connection of ELDs, each designed for a specific purpose. Outputs of an E L D must be connected to a higher level E L D . Similarly, inputs of an E L D must be connected to a lower level ELD. There are no restrictions on how many levels may be spanned by connections. This is similar to the LS specification [11]. 14) The abstraction of the ELD's data increases with the level of the hierarchy. The lowest level of the hierarchy contains ELDs directly connected to sensors and actuators. These low-level ELDs are directly associated with physical sensor data. As the level of the hierarchy increases, the abstraction of the ELD's output data increases. This is a concept proposed for LSs in [11]. For example, a low-level E L D may output pixels, a medium-level E L D may output edges and a higher-level E L D may output an object description. The highest-level in the architecture contains an E L D that is responsible for system planning and the overall supervision of the system. 15) There can be no circularly connected ELDs. ELDs cannot be connected in a circular fashion where the output of an E L D is eventually fed into its own input. This creates a loop and must be avoided. The following specifications have not been explicitly stated in other architecture specifications in the literature, however, the concepts are not novel. These specifications are specific to the implementation of the E L D Architecture. 16) A run-time GUI is required. The E L D Architecture must supply a run-time GUI that is modifiable by E L D Architecture application developers. This is required so that developers can easily customize user interfaces for their applications. The run-time GUI enables users to command the automation system and receive information about the performance and state of the system. The E L D Architecture must also supply an interface between the ELDs and the run-time GUI. An easy to use mechanism for relaying data and commands between the architecture and the GUI is required. 
62 17) Implementation of ELD Architecture functionality should allow for "drag and drop " Graphical User Interface (GUI) design. To allow for rapid development of an E L D Architecture application, using a GUI tool, a uniform and sufficiently general design is required for the connections and GUI. 4.2.2 ELD Architecture Components The Architecture class contains the E L D objects developed for an application, and specifies the connections between those ELDs, according to specifications 15. Adding ELDs and connections between them is accomplished by the application developer. It is (heir responsibility in designing their application to adhere to specifications 13 and 14. Further, in keeping with specification 17, these additions can be made using a GUI tool described below. The architecture has been implemented using OOD. The E L D Architecture also includes a run-time GUI and establishes a communication channel between the ELDs and the run-time GUI, as is required by specification 16. Base functionality of the GUI is provided but the application specific details need to be added to the GUI on a case-by-case basis. The developed GUI would display any desired system information and allow issuing predefined commands to the ELDs. This makes system information available to the user and allows a user to control the operation of the system. 4.3 Cross-Plat form Implementation The E L D Architecture classes must be developed by the E L D Architecture provider for each platform supported. This is a one-time development requirement. Additionally, reliable communication channels must be established within and between each of these platforms. Ideally, a sophisticated GUI Architecture Builder Tool would enable seamless cross-platform implementation. The tool would automatically generate the code appropriate for the ELD's platform. Additionally, the tool would automatically setup the communication channels between ELDs, no matter what platform they reside on. Of course, the tool would have to be designed to support all desired platforms. Presently, the E L D Architecture is implemented on the Microsoft NT 4.0 platform and communication to the ORTS platform is also supported. 4.4 ELD Architecture Builder A GUI based E L D Architecture Builder Tool has been developed to support this work [19]. This is a tool that rapidly generates Visual C++ code in the Microsoft Visual Studio programming environment. This is a first implementation of the tool and many additional features could be added. This present tool is able to generate an E L D Architecture application, complete with ELDs with core functionality, connections between the ELDs and a basic run-time GUI. Specific functionality of the ELDs and GUI must be added manually using Microsoft Visual Studio. The automatic generation of C++ 63 code begins with E L D and E L D Architecture shell files. Shell files are C++ files containing the outlines of classes that must be coded to create an E L D Architecture application. The Architecture Builder Tool completes the detail of the shell files according to the choices that the user makes with the GUI. Additionally, the Architecture Builder Tool automatically configures the Microsoft Visual Studio project and workspace to enable manual additions to the classes. A library system has been implemented to enable the reuse of old applications. The library contains shell files of ELDs that were designed for specific tasks. As new ELDs are created for specific applications they can be added to the library as shell files. 
Those shell files are available in the Architecture Builder Tool as a base for development of new ELDs. This increases the efficiency of implementation by allowing the reuse of code. Further development of such a tool would be beneficial. The most important areas of development are the creation of additional E L D shell files and adding support for additional platforms. 4 . 5 The Structure of the ELD Architecture Product Development There are three groups involved in the development and application of this technology. These included the E L D Architecture Provider, Application Developers and Users. The ELD Architecture Provider develops the E L D specification and base E L D classes. Currently the Industrial Automation Laboratory is the E L D Architecture Provider, however, with appropriate licensing an industrial partner would fulfill this function. Further advances in the capabilities and platforms supported can be made in the specification at this level. Additionally, tools used by application developers such as GUI based rapid implementation tools, diagnostic tools and libraries of pre-designed ELDs are developed at this level. ELD Architecture Application Developers are industrial engineering teams that develop automation, systems. These would be companies that use the E L D specification as a development base for their automation systems. These parties would have access to all E L D code specific to their application but would not be able to modify the E L D base classes provided by the E L D Architecture provider. A l l the development tools provided would be available for use at this level. The ELD Architecture User is the end user of the product. This group includes parties that purchase automation systems developed by the application developers. An E L D that has been defined for a specific task appears as a black box within the architecture at this level. The application developer can package the E L D objects so that the internal workings of the E L D are not modifiable by the end user. The User is able to monitor the performance of the ELDs through diagnostic tools provided. This allows access to all automation process performance data required. The E L D Application Developer and User could separate or independent organizations. 64 4.6 Summary The E L D Architecture allows systematic and efficient implementation of automation workcells across multiple hardware and software platforms. The E L D Architecture is based upon the LS and includes sensor and actuator functionality. The quantification of the uncertainty of all sensor data and the utilization of the sensor fusion mechanisms detailed in Chapter 3, will enhance the operational reliability of industrial automation workcells. In this chapter, specifications for the E L D and E L D Architecture, based on the literature discussed in Chapter 2, have been presented. Further, the framework of the E L D and E L D Architecture, according to these specifications, has been presented. The details of some implemented components have been included as well. Further, a GUI based Architecture Builder tool has been briefly described that enables rapid development of E L D Architecture applications. 65 Chapter 5 5 Redundant Fusion Simulation Results Simulation results of the HGRF algorithm developed in Chapter 3 are presented herein. This discussion of results further demonstrates the effectiveness of the HGRF method in handling varying levels of sensor measurement disparity. 
First, the results and discussion of the fusion of two one-dimensional measurements are presented. Following this, examples of the fusion of two and three two-dimensional redundant measurements are discussed. A summary of the simulation results concludes this chapter.

5.1 Fusion of Two One-Dimensional Measurements

This section presents two series of results of the redundant fusion of two one-dimensional measurements. The first series consists of two measurements of equal variance with different means. The second series involves two measurements with equally separated means and differing variances. The redundant fusion algorithm used to generate these results is included in Appendix B. All measurements in this section are displayed as a Gaussian distribution rather than the ellipsoid depiction, which is a line with error bars for the one-dimensional case.

The first series is shown in Figure 5.1 and the second series is shown in Figure 5.2. The two measurements are shown with dashed lines and the HGRF result is shown with a solid line. The GRF result is included as a dotted line for comparison. The corresponding data is included in Table 5.1.

Figure 5.1: Fusion of Two One-Dimensional Measurements - Series One. Data for Figures 1(a) through (f) is given in Table 5.1. Measurements are shown as dashed lines, the HGRF result as a solid line and the GRF result as a dotted line.

Figure 5.2: Fusion of Two One-Dimensional Measurements - Series Two. Data for Figures 2(a) through (d) is given in Table 5.1. Measurements are shown as dashed lines, the HGRF result as a solid line and the GRF result as a dotted line.

Fusion Case   Measurement 1       Measurement 2       HGRF Result          GRF Result
              Mean    Variance    Mean    Variance    Mean     Variance    Variance
Figure 1a     5       1           5       1           5        0.5         0.5
Figure 1b     5       1           5.5     1           5.25     0.538       0.5
Figure 1c     5       1           6       1           5.5      1           0.5
Figure 1d     5       1           6.5     1           5.75     2.337       0.5
Figure 1e     5       1           7       1           6        4           0.5
Figure 1f     5       1           10      1           7.5      12.25       0.5
Figure 2a     5       1           8       1           6.5      6.25        0.5
Figure 2b     5       1           8       2           6        7.771       0.667
Figure 2c     5       1           8       4           5.6      7.744       0.800
Figure 2d     5       1           8       10          5.272    5.758       0.909

Table 5.1: One-Dimensional Redundant Measurements and Fused Results - Data.
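The GRF column of Table 5.1 can be reproduced with a few lines of code. The C++ sketch below assumes that the GRF combination of two independent one-dimensional measurements reduces to the usual inverse-variance weighting; this assumption is consistent with every GRF entry in the table (for example the variance of 0.667 and mean of 6 for Figure 2b). The HGRF variance is not computed here because it depends on the disparity heuristic defined in Chapter 3, and the function name FuseGRF is illustrative.

#include <cstdio>

// One-dimensional uncertain measurement.
struct Measurement1D {
    double mean;
    double variance;
};

// GRF-style fusion of two independent 1-D measurements:
// inverse-variance weighted mean and combined variance.
Measurement1D FuseGRF(const Measurement1D& a, const Measurement1D& b)
{
    Measurement1D fused;
    fused.variance = (a.variance * b.variance) / (a.variance + b.variance);
    fused.mean     = fused.variance * (a.mean / a.variance + b.mean / b.variance);
    return fused;
}

int main()
{
    // Figure 2b of Table 5.1: (mean 5, variance 1) fused with (mean 8, variance 2).
    Measurement1D m1 = {5.0, 1.0};
    Measurement1D m2 = {8.0, 2.0};
    Measurement1D f  = FuseGRF(m1, m2);
    std::printf("mean = %.3f, variance = %.3f\n", f.mean, f.variance);
    // Expected from Table 5.1: mean = 6.000, variance = 0.667.
    return 0;
}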
5.1.1 Discussion of Results

In the first series of fusion examples the HGRF result's uncertainty increases as the measurements move apart, while the GRF result has equal variance for each of these cases. It can be seen from Figure 5.1(a) that when the measurements are coincident the HGRF and GRF results are identical. This satisfies case 1 of the Heuristic Specification of Section 3.1.3. Figure 5.1(b) demonstrates case 2 of the Heuristic Specification as the measurements are in "agreement" and the uncertainty of the HGRF result is less than the measurements' uncertainties. Figure 5.1(c) represents the borderline between cases 2 and 3 of the Heuristic Specification, and the HGRF result's uncertainty is equal to the measurements' uncertainties. Figure 5.1(d) demonstrates case 3 of the Heuristic Specification as the measurements "disagree" and the uncertainty of the HGRF result is larger than the measurements' uncertainties. Figure 5.1(e) represents the borderline between Heuristic Specification cases 3 and 4 since the measurements are spaced by the sum of their standard deviations. The HGRF result satisfies the Heuristic Specification as it spans the range of the measurements' uncertainty ellipsoids. The final result, displayed in Figure 5.1(f), corresponds to case 4 of the Heuristic Specification. Again the HGRF result satisfies the Heuristic Specification as it spans the range of the measurements' uncertainty ellipsoids.

Figure 5.2 displays a sequence of fused measurements where the means of the measurements are unchanged and the difference in the measurements' uncertainties increases throughout the sequence. As the second measurement becomes more uncertain the fused mean approaches the first measurement's mean. The uncertain measurement's contribution is increasingly discounted as it becomes relatively more uncertain. This is a desired outcome resulting from the GRF method of calculating the fused mean. The measurements in Figure 5.2(a) and (b) represent case 4 of the Heuristic Specification. Figure 5.2(c) represents the borderline between cases 3 and 4 while Figure 5.2(d) demonstrates case 3 of the Heuristic Specification. The HGRF result of each of these cases is more uncertain than the minimum measurement uncertainty. As the variance of the second measurement increases throughout the series, the HGRF result approaches the GRF result, which correctly approaches the uncertainty of the first measurement. At the extreme case where one measurement's uncertainty is infinite, the HGRF result and the GRF result equal the less uncertain measurement. This is an expected result, since if two measurements of vastly differing uncertainties are fused the highly uncertain measurement should be ignored.

A discrepancy between the HGRF results and the Heuristic Specification occurs when measurements have differing uncertainties, as is demonstrated in Figure 5.2(c) and (d). In these examples, the mean is pulled closer to the more certain measurement, while the fused uncertainty is determined by using the maximum Measurement Spacing Vector. As a result the HGRF method's fused ellipsoid overlaps the more certain measurement and leaves the more uncertain measurement partially uncovered. To address this problem, a non-Gaussian distribution is required so that the GRF mean can be maintained while the fused uncertainty region still spans the range of the measurements' ellipsoids. However, this deviation from the Heuristic Specification is accepted since Gaussian distributions have been chosen as the general representation for uncertainty in this approach. The above simulation results indicate that the HGRF method satisfies the Heuristic Specification for one-dimensional redundant fusion except in the case of differing measurement uncertainties.

5.2 Fusion of Two Two-Dimensional Measurements

This section examines examples of the fusion of two two-dimensional measurements using the HGRF method. Three sets of results are given in Figures 5.3, 5.4 and 5.5. The corresponding data is given in Table 5.2. The Matlab program used to generate these results is included in Appendix B.

Figure 5.3: Fusion of Two Two-Dimensional Measurements - Results Set One. Data for Figure 3(a) through (d) is given in Table 5.2. Measurements are shown as dashed lines, the GRF result is shown as a dotted line and the HGRF result is shown as a solid line.

Figure 5.4: Fusion of Two Two-Dimensional Measurements - Results Set Two. Data for Figure 4(a) through (d) is given in Table 5.2. Measurements are shown as dashed lines, the GRF result is shown as a dotted line and the HGRF result is shown as a solid line.
Figure 5.5: Fusion of Two Two-Dimensional Measurements - Results Set Three. Data for Figure 5(a) through (d) is given in Table 5.2. Measurements are shown as dashed lines, the GRF result is shown as a dotted line and the HGRF result is shown as a solid line.

Fusion Case   Measurement 1                Measurement 2                        HGRF Result                                GRF Result
              Mean     Variance            Mean                Variance         Mean             Variance                  Variance
Figure 3a     [5 5]    [1 0; 0 1]          [5 5]               [1 0; 0 1]       [5 5]            [0.5 0; 0 0.5]            [0.5 0; 0 0.5]
Figure 3b     [5 5]    [1 0; 0 1]          [6 5]               [1 0; 0 1]       [5.5 5]          [1 0; 0 0.5]              [0.5 0; 0 0.5]
Figure 3c     [5 5]    [1 0; 0 1]          [7 5]               [1 0; 0 1]       [6 5]            [4 0; 0 0.5]              [0.5 0; 0 0.5]
Figure 3d     [5 5]    [1 0; 0 1]          [10 5]              [1 0; 0 1]       [7.5 5]          [12.25 0; 0 0.5]          [0.5 0; 0 0.5]
Figure 4a     [5 5]    [1 0; 0 1]          [5 5]               [1 0; 0 1]       [5 5]            [0.5 0; 0 0.5]            [0.5 0; 0 0.5]
Figure 4b     [5 5]    [1 0; 0 1]          [5.707 5.707]       [1 0; 0 1]       [5.354 5.354]    [0.75 0.25; 0.25 0.75]    [0.5 0; 0 0.5]
Figure 4c     [5 5]    [1 0; 0 1]          [6.414 6.414]       [1 0; 0 1]       [5.707 5.707]    [2.25 1.75; 1.75 2.25]    [0.5 0; 0 0.5]
Figure 4d     [5 5]    [1 0; 0 1]          [8.5355 8.5355]     [1 0; 0 1]       [6.768 6.768]    [6.38 5.88; 5.88 6.38]    [0.5 0; 0 0.5]
Figure 5a     [5 5]    [1 0.1; 0.1 4]      [7 6]               [1 1; 1 9]       [7.143 5.914]    [2.10 3.16; 3.16 9.43]    [0.77 0.26; 0.26 0.69]
Figure 5b     [5 5]    [4 1; 1 1]          [6.5 7]             [1 1; 1 8]       [5.923 4.568]    [4.13 1.66; 1.66 3.48]    [0.48 0.19; 0.19 2.74]
Figure 5c     [5 5]    [9 1.5; 1.5 1]      [8 9]               [1 1; 1 4]       [6.098 5.402]    [1.36 0.97; 0.97 2.12]    [0.76 0.24; 0.24 0.76]
Figure 5d     [5 5]    [9 1.5; 1.5 1]      [5 7]               [1 1; 1 4]       [4.914 5.071]    [0.77 0.26; 0.26 0.71]    [0.77 0.26; 0.26 0.69]

Table 5.2: Fusion of Two Two-Dimensional Redundant Measurements - Data.

The first two sets of fusion results, given in Figure 5.3 and Figure 5.4, demonstrate the reference frame independence of the HGRF method. These sets of measurement data are identical except that they are expressed in reference frames that are rotated with respect to one another. The results generated by the HGRF method for both sets of measurements are also identical relative to an aligned frame of reference. Therefore, the HGRF method produces consistent results regardless of the frame of reference used to express the data. This result requires that the frames of reference used are linear orthogonal frames and that all measurements to be fused are expressed in the same frame of reference.

The third set of data, shown in Figure 5.5, demonstrates the multi-dimensional characteristics of the fusion method. The measurements in Figure 5.5(a) are non-overlapping, and the resulting HGRF fused ellipsoid has a large uncertainty in the vertical dimension and a smaller uncertainty in the horizontal dimension. This results from a large separation in the vertical dimension, while the measurements overlap along the horizontal dimension. The fusion of these two measurements also demonstrates the previously mentioned discrepancy between the HGRF method and the Heuristic Specification. The HGRF result protrudes below the measurements more than is required by the specification. In Figure 5.5(b) the measurements are again non-overlapping. The HGRF result is more uncertain in the horizontal dimension while being less uncertain in the vertical dimension. This results from the overlap in the measurements' vertical dimension and lack of overlap in the measurements' horizontal dimension. Figure 5.5(c) displays two measurements that overlap. The HGRF result displays increased uncertainty over the GRF result. This results from the amount of separation of the measurements' means. Figure 5.5(d) displays the fusion of two measurements that have a small separation of means relative to their uncertainty. In this case the HGRF result is very close to the GRF result and exhibits a large decrease in uncertainty relative to the measurements' uncertainties.

The examples in this section demonstrate the HGRF method's consistent output regardless of the frame of reference used to express the measurements. Further, these results demonstrate the method's ability to operate in higher dimensional measurement spaces.
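The reference-frame independence illustrated by result sets one and two can be checked numerically. The short C++ sketch below assumes the GRF combination of two 2-D measurements takes the usual covariance-weighted form, with the fused covariance equal to the inverse of the sum of the inverse covariances; this reproduces the GRF entries of Table 5.2 for the identity-covariance cases. It is not the Matlab code of Appendix B, and the function names are illustrative.

#include <cstdio>

struct Vec2 { double x, y; };
struct Mat2 { double a, b, c, d; };   // covariance [a b; c d]

static Mat2 Inverse(const Mat2& m) {
    const double det = m.a * m.d - m.b * m.c;
    return { m.d / det, -m.b / det, -m.c / det, m.a / det };
}
static Mat2 Add(const Mat2& p, const Mat2& q)  { return { p.a + q.a, p.b + q.b, p.c + q.c, p.d + q.d }; }
static Vec2 Mul(const Mat2& m, const Vec2& v)  { return { m.a * v.x + m.b * v.y, m.c * v.x + m.d * v.y }; }
static Vec2 AddV(const Vec2& p, const Vec2& q) { return { p.x + q.x, p.y + q.y }; }

// Covariance-weighted (GRF-style) fusion of two 2-D measurements.
void FuseGRF2D(const Vec2& x1, const Mat2& P1, const Vec2& x2, const Mat2& P2,
               Vec2& xf, Mat2& Pf)
{
    const Mat2 W1 = Inverse(P1);
    const Mat2 W2 = Inverse(P2);
    Pf = Inverse(Add(W1, W2));
    xf = Mul(Pf, AddV(Mul(W1, x1), Mul(W2, x2)));
}

int main()
{
    // Figure 3b of Table 5.2: means [5 5] and [6 5], both with identity covariance.
    Vec2 x1 = {5, 5}, x2 = {6, 5}, xf;
    Mat2 I  = {1, 0, 0, 1}, Pf;
    FuseGRF2D(x1, I, x2, I, xf, Pf);
    std::printf("mean = [%.3f %.3f], cov = [%.3f %.3f; %.3f %.3f]\n",
                xf.x, xf.y, Pf.a, Pf.b, Pf.c, Pf.d);
    // Expected GRF entries from Table 5.2: mean [5.5 5], covariance [0.5 0; 0 0.5].
    return 0;
}

Because this combination is built only from covariance-weighted sums, expressing the same two measurements in a rotated orthogonal frame and rotating the fused result back gives the same ellipsoid, which is the frame-independence property shown in Figures 5.3 and 5.4.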
5.3 Fusion of Three Two-Dimensional Measurements

This section provides examples of the fusion of three two-dimensional redundant measurements by using the HGRF method. The Matlab program used to generate these results is included in Appendix B. Four fusion examples are displayed in Figure 5.6 through Figure 5.9 and the corresponding data can be viewed in Table 5.3.

Figure 5.6: Fusion of Three Two-Dimensional Measurements - Example One. Measurements are shown with dashed lines, the HGRF result with a solid line and the GRF result with a dotted line.

Figure 5.7: Fusion of Three Two-Dimensional Measurements - Example Two. Measurements are shown with dashed lines, the HGRF result with a solid line and the GRF result with a dotted line.

Figure 5.8: Fusion of Three Two-Dimensional Measurements - Example Three. Measurements are shown with dashed lines, the HGRF result with a solid line and the GRF result with a dotted line.

Figure 5.9: Fusion of Three Two-Dimensional Measurements - Example Four. Measurements are shown with dashed lines, the HGRF result with a solid line and the GRF result with a dotted line.

Fusion Case   Measurement 1            Measurement 2                      Measurement 3             HGRF Result                               GRF Result
              Mean    Variance         Mean         Variance              Mean        Variance      Mean           Variance                   Variance
Figure 5.6    [5 5]   [1 0; 0 1]       [4.5 5.1]    [1 0; 0 1]            [5.5 5.5]   [1 0; 0 1]    [5 5.2]        [1.10 0.44; 0.44 0.64]     [0.33 0; 0 0.33]
Figure 5.7    [5 5]   [1 1; 1 6]       [5.4 5.2]    [3 0.7; 0.7 1.5]      [4.9 5.7]   [4 0; 0 2]    [5.12 5.32]    [0.62 0.10; 0.10 0.98]     [0.58 0.15; 0.15 0.73]
Figure 5.8    [3 3]   [1 0; 0 1]       [6.0 5.5]    [2 1.3; 1.3 3]        [6.3 5.0]   [1 0; 0 9]    [4.79 3.65]    [9.50 3.94; 3.94 5.50]     [0.38 0.07; 0.07 0.65]
Figure 5.9    [5 5]   [1 0; 0 1]       [7 5]        [1 0; 0 1]            [5 7]       [1 0; 0 1]    [5.67 5.67]    [5.65 -1.12; -1.12 3.97]   [0.33 0; 0 0.33]

Table 5.3: Fusion of Three Two-Dimensional Redundant Measurements - Data.

Example One, shown in Figure 5.6, consists of three closely spaced measurements of equal variance. The means of Measurements Two and Three lie outside of each other's uncertainty ellipsoids, indicating that the measurements "disagree". Measurement One "agrees" with both Measurements Two and Three. This spacing of the measurements results in an increase in uncertainty along one dimension of the HGRF result and a decrease along the other dimension. The principal axes of the HGRF result measure 1.17 and 0.69 compared with the measurements' principal axes that all measure 1.0.
The third example, shown in Figure 5.8, includes two measurements that "agree" and a third measurement that does not overlap with either of the other measurement's uncertainty ellipsoids. This example indicates that the disagreeing sensor has likely failed, however, it may be that the two sensors that agree are in fact in error. Therefore, the HGRF method produces a highly uncertain result in comparison to the measurement's uncertainties. This increase in uncertainty can trigger a higher-level process to determine and correct the sensor failure. The fourth example consists of three measurements of equal variance that do not overlap each other. This can be viewed in Figure 5.9. This is a case of sensor failure that is not easily diagnosed from the data given. The HGRF method outputs a result that is highly uncertain which essentially encompasses the measurement's uncertainty ellipsoids. The principal axes of the HGRF result measure 3.45 and 1.76 in comparison to the measurement's uncertainty ellipsoid principal axes measuring 1.0 each. 5.4 Summary of Simulation Results The simulation results presented in this chapter reveal that the HGRF method satisfies the Heuristic," Specification of Section 3.1.3 with one exception. The failure to meet the Specification occurs when the measurements to be fused have differing uncertainties. In this case the fused ellipsoid overlaps the more certain measurement and does not cover all of the more uncertain result. This limitation is a result of using a Gaussian distribution to represent uncertainty. The proposed fusion method produces consistent results regardless of which frame of reference is used to express the measurements. The measurements to be fused must be expressed in the same frame of reference. This can be accomplished by using similarity transformations. 76 C h a p t e r 6 6 Application Example: A Two-Degree-of-Freedom Robot 6.1 Experimental Setup An experimental setup was developed with the purpose of testing the ELD Architecture and the proposed redundant fusion method. The setup consists of a two-degree-of-freedom robot whose end-effector moves in a plane. The setup is referred to as the XY Table. A picture of the XY Table is included in Figure 6.1 and a top view diagram is given in Figure 6.2. The setup was designed and built in the Industrial Automation Laboratory. 6.1.1 Physical System A DC motor, shown in Figure 6.3, drives the X-Axis of the robot. The Y-Axis translates along the X-Axis on two linear bearings and is connected to the motor by a ball screw. A stepper motor drives the Y-Axis. The end-effector translates along the Y-Axis on two small linear bearings and is connected to the stepper motor via a belt, as is shown in Figure 6.4. Due to the quality of the ball screw and bearings used there is a significant amount of play in the system. This limits the positional accuracy of the robot but does not hinder the experiments related to this work. 77 Figure 6.1: XY Table Experimental Setup. Top View Figure 6.2: Top View Diagram of the XY Table. 78 Figure 6.3: X-Axis Drive and Encoder. Figure 6.4: Y-Axis Drive and Ultrasound Sensor. 79 The two-dimensional position of the end-effector is sensed redundantly. An encoder is mounted to the shaft of the X-Axis motor and is used to measure the position of the X-Axis of the robot. The encoder is shown in Figure 6.3 and is a 1000 line encoder with quadrature decoding. This yields a resolution of 0.09 degrees allowing a positional measurement resolution of 0.003175 mm along the X-Axis . 
The measurement of the actual X position of the end-effector using the encoder is considerably less accurate than the resolution available due to the play in the system. The quantification of the accuracy of this measurement is discussed in detail in a following section. An ultrasound sensor is mounted as shown in Figure 6.4 and is used to determine the position of the Y-Axis . The sensor emits an ultrasound pulse, which reflects off of a metal surface and is received again by the sensor. The time of flight is used to determine the distance to the metal surface. The ultrasound sensor is an analog sensor and is prone to noise. The quantification of the accuracy of this sensor is also discussed in detail in a following section. Together the encoder and the ultrasound sensor indirectly sense the position of the end-effector. The measurements are indirect since the encoder is actually measuring the number of turns the ball screw makes and the ultrasound sensor is measuring the position of a metal surface rather than measuring the actual position of the end-effector. Both of these sensors are fast and are suitable for real-time control. A JVC 1070U colour digital camera is mounted above the robot and provides a top view of the entire workspace. The digital camera is connected to an Imaging Technologies IC2 PCI frame grabber mounted in a Pentium II 750 M H z PC. This hardware is able to grab a digital image with 640x480 pixels in approximately 30 ms. The digital image is processed in order to detect the position of the end-effector. The sequence of grabbing the image, transferring it to the PC's R A M and processing the data requires approximately 200 ms, making this sensor too slow for real-time control of the X Y Table. This sensor directly senses the position of the end-effector but is dependent on the lighting conditions. A lighting system is installed, as can be seen in Figure 6.1, which minimizes the variability in the lighting conditions. The details of the design of this sensor and the determination of its accuracy is discussed in a following section. 6.1.2 Implementation Platforms The X Y Table E L D Architecture is implemented across two hardware platforms. The high-level components that do not have real-time constraints are implemented using Visual C++ code running under the Microsoft NT 4.0 operating system and executing on a Personal Computer (PC). This PC will be referred to as the high-level PC. The low-level real-time components are implemented in the Open Real-Time Operating System (ORTS) [4], which is executed on a TI C32 Digital Signal Processor (DSP). The components implemented on this platform are programmed using ORTS scripts and C functions that are written on a PC and downloaded to the DSP. 80 The hardware setup of the X Y Table is shown below in Figure 6.5. A PC, referred to as the low-level PC, houses the DSP and the MFIO card, which is both an analog and digital Input/Output card. The sensors and actuators connect to the MFIO card. ORTS, running on the DSP, can directly access sensor and actuator data through a direct link to the MFIO card. Through an ORTS feature the DSP is able to communicate with the NT operating system on the low-level PC. The high-level PC houses and communicates with the frame grabber, which receives the camera signal. The NT platform, on the high-level PC, and the DSP platform, on the low-level PC, reliably communicate across an NT network by using NT pipes. 
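The inter-platform channel just described can be sketched with the Win32 named-pipe calls. The fragment below is a minimal, hedged illustration of the server side of one such channel and is not the thesis implementation; the pipe name and the message text are illustrative only, and error handling is omitted.

#include <windows.h>

int main()
{
    // Create one duplex, message-mode pipe instance (name is illustrative).
    HANDLE hPipe = CreateNamedPipeA(
        "\\\\.\\pipe\\eld_channel",
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1,            // one instance
        1024, 1024,   // output / input buffer sizes
        0,            // default time-out
        NULL);        // default security attributes
    if (hPipe == INVALID_HANDLE_VALUE)
        return 1;

    if (ConnectNamedPipe(hPipe, NULL))     // wait for the peer to connect
    {
        const char msg[] = "Send Output";  // an example command string
        DWORD written = 0;
        WriteFile(hPipe, msg, sizeof(msg), &written, NULL);
    }
    CloseHandle(hPipe);
    return 0;
}

The client side would open the same pipe name with CreateFile and read the message with ReadFile, which is how an NT application and the low-level PC's pipe endpoint can exchange ELD messages across the network.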
Figure 6.5: XY Table Hardware Setup.

6.1.3 ELD Architecture Design

The sensors and actuators of this system have been integrated using the ELD Architecture. The design of the architecture can be seen in Figure 6.6. At the lowest level of the architecture there are three ELDs: the Encoder, Ultrasound and Image ELDs. These are simple ELDs that interface the physical sensors with the software ELD Architecture.

Figure 6.6: XY Table ELD Architecture.

6.1.3.1 Low Level ELDs

The Encoder ELD uses the encoder's signal to determine the X-Axis position. This ELD has its real-time components implemented under ORTS on the DSP and its high-level components implemented in Visual C++ under NT. This internal platform split is joined using an NT pipe communication channel. The Encoder ELD is connected to the X-Axis ELD.

The Ultrasound ELD receives the ultrasound sensor's signal and from it determines the Y-Axis position. The implementation of the Ultrasound ELD is similar to the implementation of the Encoder ELD. The Ultrasound ELD provides input to the Y-Axis ELD.

The Image ELD triggers the frame grabber to acquire an image and places that image in PC RAM. This ELD is entirely implemented in Visual C++ on the high-level PC. The Image ELD provides input in the form of a digital image to the Pointer Locator ELD. An example of the type of images taken by the Image ELD is shown in Figure 6.7.

Figure 6.7: Overhead Image Grabbed by the Image ELD (showing the end-effector dot and the calibration dots; scale bar 50 mm).

6.1.3.2 Pointer Locator ELD

The Pointer Locator ELD resides at the next level of the architecture. This ELD receives the Image ELD input, processes it and outputs the two-dimensional position of the end-effector. The X-Axis and Y-Axis ELDs receive the output. Processing of the image involves a calibration and an end-effector detection routine.

Pixels in the image do not linearly map to real world coordinates. Therefore, a mapping is required to transform pixel coordinates to real world coordinates. The mapping is defined by an automated routine where a grid of red calibration dots of known real world positions is detected. The entire grid of dots is not viewable at the same time since the Y-Axis of the robot occludes part of the grid, as is seen in Figure 6.7. Therefore, the Pointer Locator ELD must request knowledge about the current position of the X-Axis and request motion from the X-Axis ELD if it is required. The calibration dot pixels are selected using colour thresholds and the dot centers are determined by using a center of area calculation. This calibration routine is run during the initialization of the system. A bilinear interpolation algorithm uses this information to determine the real world position of pixels.

The end-effector is marked with a blue dot. This dot is detected in the image using a method similar to the calibration dot detection method. However, this dot is not in the plane of the grid of the calibration dots. Therefore, a geometric adjustment algorithm is required to compensate for the difference in heights and correctly determine the position of the end-effector in real world coordinates.
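The pixel-to-world mapping can be illustrated with a small routine. The C++ sketch below is a generic bilinear interpolation over one cell of the calibration grid and only illustrates the approach described above; it is not the Pointer Locator ELD code. The function name and the assumptions that the four surrounding dots' pixel and world coordinates are already known, and that the pixel cell is nearly rectangular, are all hypothetical.

struct Point2 { double x, y; };

// World position of a pixel (u, v) lying inside one cell of the calibration grid.
// Cell corners are given in pixel coordinates (p00..p11) with their known world
// coordinates (w00..w11): 00 = lower-left, 10 = lower-right, 01 = upper-left, 11 = upper-right.
Point2 BilinearPixelToWorld(double u, double v,
                            Point2 p00, Point2 p10, Point2 p01, Point2 p11,
                            Point2 w00, Point2 w10, Point2 w01, Point2 w11)
{
    // Normalized coordinates of (u, v) within the (assumed nearly rectangular) pixel cell.
    const double s = (u - p00.x) / (p10.x - p00.x);
    const double t = (v - p00.y) / (p01.y - p00.y);

    // Interpolate the world coordinates with the same bilinear weights.
    Point2 w;
    w.x = (1 - s) * (1 - t) * w00.x + s * (1 - t) * w10.x + (1 - s) * t * w01.x + s * t * w11.x;
    w.y = (1 - s) * (1 - t) * w00.y + s * (1 - t) * w10.y + (1 - s) * t * w01.y + s * t * w11.y;
    return w;
}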
6.1.3.3 X-Axis and Y-Axis ELDs

The next level in the XY Table ELD Architecture consists of the X-Axis and Y-Axis ELDs, which are similar in design. The X-Axis ELD receives input from the Encoder ELD and the Pointer Locator ELD and outputs a controlling signal to the X-Axis motor. Similarly, the Y-Axis ELD receives input from the Ultrasound ELD and the Pointer Locator ELD and outputs a controlling signal to the Y-Axis motor. Both of these ELDs have all the components required to be a stand-alone feedback control loop. The X and Y-Axis ELDs both provide input to the Planner ELD. As for the Encoder and Ultrasound ELDs, the X and Y-Axis ELDs have their real-time components implemented on the DSP platform and their high-level components on the NT platform.

The X and Y-Axis ELDs both receive and fuse redundant sensor input. The Pointer Locator ELD provides a two-dimensional position measurement, which is first projected onto the appropriate axis using the projection method described in Section 3.3. Following the projection, the two one-dimensional measurements are redundantly fused using the method described in Chapter 3. This fused value is used as a position feedback signal in the control of the respective robot axis. The X and Y-Axis control loops run on the DSP at 200 Hz while the Pointer Locator ELD is only able to provide output at a rate of 5 Hz. The use of fusion with multi-rate feedback signals in real-time motion control has been examined in the Industrial Automation Laboratory by Langlois [20].

6.1.3.4 Planner ELD

The Planner ELD is the object that coordinates the overall action of the XY Table. It is connected to the X and Y-Axis ELDs and provides a controlling signal in the form of set points to those ELDs. This ELD contains the programs for the high-level tasks for the XY Table. Since the focus of this work is on the fusion of redundant data within the ELD Architecture, no high-level tasks have been implemented beyond simple set point changes. An example of a suitable high-level task is recognizing an object in the workspace and aligning the end-effector with the object's position. This would require additional ELDs to be added to the system.

6.1.4 Graphical User Interface

The GUI developed for the XY Table is implemented in Visual C++ and executes on the high-level PC. A screen shot of the GUI is given in Figure 6.8. NT pipes, established in the ELD Architecture base class, are used to communicate between the GUI and the ELD Architecture. As part of the base GUI design, an edit box is included at the bottom of the window where messages from the ELDs can be easily displayed. Various functions specific to the XY Table application have been added to the GUI. The GUI can be used to display the image as seen by the Image ELD. The GUI also provides a tool to quickly modify and test the threshold values used in the Pointer Locator ELD. This allows quick testing of the Pointer Locator ELD functions. Among other features, an edit box has also been added that allows the user to input a desired set point of the X-Axis.
JVC mage displayed! ~3 J Figure 6.8: XY Table Graphical User Interface. 85 6.2 Calibration of Sensors The uncertainty of the output of the Encoder ELD, Ultrasound ELD and Pointer Locator ELD was determined through experimentation. A world reference frame was established using right angled machinist's blocks as is depicted in Figure 6.9. This reference frame was used to compare all measurements taken by the various XY Table sensors. The world and sensor reference frames all have parallel axes but different origins. Therefore, the offset between the world and sensor reference frames had to be determined. This was accomplished by repeated measurements using verniers, a dial guage and a height guage. The measurement data is given in Appendix D. Once the relationships between the reference frames were established the uncertainty of the end-effector position sensors could be established. Figure 6.9: Sensor Calibration Setup. The uncertainty was determined through experimentation. The position of the end-effector was measured at various positions throughout the workspace using a dial guage mounted to the right angled blocks, as is shown in Figure 6.9. This measurement was used as the true value. Corresponding measurements were taken with the Encoder ELD, Ultrasound ELD and Pointer Locator ELD. Again this data may be viewed in Appendix D. The uncertainties of the sensors used in the XY Table were determined from the variance of these repeated measurements. The results are reported in Table 6.1. 86 Sensor X-Axis X-Axis Sensor Y-Axis Y-Axis Sensor Variance (mm) Noise (mm) Variance (mm) Noise (mm) Encoder 3.4587 1.0X10"ZU N/A N/A Ultrasound N/A N/A 3.7039 0.014 Pointer Locator 4.1990 0.014 3.0350 0.0072 Table 6.1: Uncertainties of XY Table Sensor Measurements. The Encoder, Ultrasound and Pointer Locator E L D sensors exhibit much lower levels of static measurement noise than the uncertainty levels reported above. The levels of static measurement noise, given as variance measured in mm, for the Encoder, Ultrasound and Pointer Locator ELDs are included in Table 6.1. The high level of measurement uncertainty results from the indirect method that the encoder and ultrasound use to measure the position of the end-effector. This increased uncertainty can be attributed to the play in the system. The increased level of measurement uncertainty for the Pointer Locator E L D results from inaccuracies in detecting the end-effector and calibration dots. 6.3 Redundant Fusion Results Data was collected during the execution of two different trajectories with the X Y Table. The results of the fusion of this data is presented and discussed in this section. The X and Y-Axis ELDs used the multi-rate feedback control strategy proposed by Langlois in [20] to control the X Y Table's motion. The Encoder and Ultrasound ELD's sensor measurements were first extended into a two-dimensional ellipsoid representation. This was done using the method presented in Section 3.3. The encoder axis is perpendicular to the ultrasound axis making the extension operation simple in this case. The Pointer Locator E L D provides a measurement of the position of the end-effector that is redundant to the extended Encoder-Ultrasound measurement. These redundant measurements are then fused using the HGRF method. The fusion results presented and discussed below were calculated offline. 6.3.1 Figure Eight Trajectory The first trajectory, referred to as the figure eight, is shown in Figure 6.10. 
6.3.1 Figure Eight Trajectory

The first trajectory, referred to as the figure eight, is shown in Figure 6.10. The figure displays the Encoder-Ultrasound, Pointer Locator and HGRF results for one cycle of the trajectory. Two enlarged views, shown in Figure 6.11 and Figure 6.12, are discussed in detail.

Figure 6.10: Figure Eight Trajectory Fusion Results. Solid line and stars - Pointer Locator measurements, dashed line and circles - Encoder-Ultrasound measurements, and dotted line and diamonds - HGRF fused measurements.

Figure 6.11: Enlarged View #1. Solid line and stars - Pointer Locator measurements, dashed line and circles - Encoder-Ultrasound measurements, and dotted line and diamonds - HGRF fused measurements.

Figure 6.12: Enlarged View #2. Solid line and stars - Pointer Locator measurements, dashed line and circles - Encoder-Ultrasound measurements, and dotted line and diamonds - HGRF fused measurements.

In general throughout the figure eight trajectory the Encoder-Ultrasound and Pointer Locator measurements "agree". Therefore, the fusion of the redundant measurements generally results in a decrease in uncertainty. An example of such a case is shown in Figure 6.12. All three sets of measurements in this figure clearly "agree" and the HGRF result lies inside both of the redundant measurements. However, there are instances where the sensor measurements "disagree", as shown in Figure 6.11. The measurements in this figure differ by 2.5 to 3.0 mm along the X-axis. This results in an increase in uncertainty along this direction. The HGRF result is seen to span the range of the measurements' uncertainty ellipsoids in this case.

6.3.2 Square Trajectory

The second set of results discussed is the square trajectory, shown in Figure 6.13. Two enlarged views are included in Figure 6.14 and Figure 6.15, and are discussed in detail.

Figure 6.13: Square Trajectory Fusion Results. Solid line and stars - Pointer Locator measurements, dashed line and circles - Encoder-Ultrasound measurements, and dotted line and diamonds - HGRF fused measurements.

Figure 6.14: Enlarged View #3. Solid line and crosses - Pointer Locator measurements, dashed line and stars - Encoder-Ultrasound measurements, and dotted line and stars - HGRF fused measurements.

Figure 6.15: Enlarged View #4. Solid line and crosses - Pointer Locator measurements, dashed line and stars - Encoder-Ultrasound measurements, and dotted line and stars - HGRF fused measurements.

As was the case with the figure eight trajectory, in general throughout the square trajectory the Encoder-Ultrasound and Pointer Locator measurements "agree". Therefore, the fusion of the redundant measurements generally results in a decrease in uncertainty. An example of such a case is shown in Figure 6.15. All three sets of measurements in this figure clearly "agree" and the HGRF result lies inside both of the redundant measurements. However, there are instances where the sensor measurements "disagree", as shown in Figure 6.14. The measurements in this figure differ by 2.0 to 5.0 mm. This results in an increase in uncertainty. The HGRF result is seen to span the range of the measurements' uncertainty ellipsoids in this case.
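The "agree"/"disagree" judgement used throughout these results can be automated. The C++ sketch below assumes agreement is declared when each mean lies inside the other measurement's uncertainty ellipsoid, which is how the term is used in Chapter 5; the one-sigma threshold of 1 is an assumption, and the functions are illustrative rather than part of the XY Table code.

struct Vec2 { double x, y; };
struct Mat2 { double a, b, c, d; };   // covariance [a b; c d]

// Squared Mahalanobis distance of point p from a measurement (mean, cov).
static double MahalanobisSq(const Vec2& p, const Vec2& mean, const Mat2& cov)
{
    const double dx  = p.x - mean.x;
    const double dy  = p.y - mean.y;
    const double det = cov.a * cov.d - cov.b * cov.c;
    // (dx dy) * cov^-1 * (dx dy)^T, with the 2x2 inverse written out explicitly.
    return (dx * (cov.d * dx - cov.b * dy) + dy * (-cov.c * dx + cov.a * dy)) / det;
}

// Two measurements "agree" if each mean lies inside the other's uncertainty ellipsoid.
bool MeasurementsAgree(const Vec2& m1, const Mat2& P1, const Vec2& m2, const Mat2& P2)
{
    return MahalanobisSq(m1, m2, P2) <= 1.0 && MahalanobisSq(m2, m1, P1) <= 1.0;
}

Under this check, two one-dimensional measurements of unit variance spaced exactly one unit apart sit on the boundary, matching the borderline case shown in Figure 5.1(c); such a test could trigger the higher-level failure diagnosis suggested in Section 5.3.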
6.4 Summary The ELD Architecture has been shown to be effective for integrating multiple sensors and actuators in an experimental automation workcell. The XY Table implementation has also exhibited the Architecture's ability to operate on multiple platforms. Further the ELD Architecture has been successfully used to bridge high-level and low-level real-time tasks. 91 The HGRF method has been demonstrated as effective at combining redundant measurements in a real system. The uncertainty of HGRF result is decreased when the redundant measurements agree and is increased when there is a discrepancy. 92 Chapter 7 7 Concluding Remarks 7.1 Summary and Conclusion In this work the Heuristic based Geometric Redundant Fusion method has been presented as a method that reliably fuses redundant sensor measurements over all ranges of measurement disparity. This method is developed with an emphasis on the fusion of sensor data provided by a small number of sensors, as is encountered in industrial automation workcells. Specific cases of complementary fusion including dimensional extension, uncertainty modification and range enhancement were discussed in this work as well. The projection function was introduced as a tool that manipulates the dimensionality of uncertainty ellipsoids by projecting the ellipsoid into a lower dimension to allow high dimensional data to be used in a lower dimensional context. Further, the ELD Architecture has been presented as a framework that facilitates the efficient design, implementation and maintenance, as well as the reliable operation of automation workcells. The architecture is useful for the systematic integration of multiple sensors and actuators and provides a framework in which the developed fusion mechanisms can be applied. The research objectives considered, and obtained to some degree, in this work include: 1. To make a contribution towards the development of automation systems that are efficiently implemented, maintainable and reliably operating. 2. To specify and implement a modular architecture that allows the intelligent integration of sensors and actuators in an industrial automation workcell environment. 3. To develop tools for fusing redundant uncertain sensor data. 93 4. To develop tools for fusing complementary uncertain sensor data. The HGRF method is an extension of the GRF method [31] and is capable of fusing m redundant measurements in ^-dimensional space. The uncertainty ellipsoid representation, based on the Guassiah distribution, is utilized by this method. This limits the application of this method to linear measurement spaces and to situations where a linear approximation to a non-linear measurement space is acceptable. The uncertainty of the HGRF result increases as the level of disparity of the measurements increases. This ensures a realistic estimate of the uncertainty of the data even when unexpected sensor errors occur and when a small amount of data is available, as is common with automation workcells. The dimensional extension function is useful for combining measurements taken of the same feature in non-overlapping measurement spaces. The uncertainty modification mechanism is applicable when a sensor's measurement can be used to modify the uncertainty of another's measurement. The range enhancement mechanism is useful for combining different measurements taken in overlapping measurement spaces. Finally, the projection function is a method that projects an ellipsoid into a lower dimension. 
The E L D Architecture allows systematic and efficient implementation of automation workcells across multiple hardware and software platforms. The E L D Architecture is based upon the LS and includes sensor and actuator functionality. Within the E L D the uncertainty of all sensor data is quantified and manipulated using the sensor fusion mechanisms detailed in this work. Consideration of the uncertainty of sensor data will enhance the operational reliability of industrial automation workcells. In conclusion, the contributions that this work has made in the area of sensor fusion includes: 1. The development of the HGRF method as a method that reliably fuses redundant sensor data over all ranges of measurement disparity. 2. The specification of an architecture based complementary fusion approach that utilizes dimensional extension, uncertainty modification and range enhancement mechanisms. Additionally, the contributions that this work has made towards LS based architectures through the E L D specification includes: 1. The inclusion of actuation within the E L D in a real-time control context. 2. Specifying that all data within the E L D architecture be represented as uncertain. 3. The integration of redundant and complementary fusion mechanisms within the E L D . 7.2 Recommendations This work has provided a reliable redundant fusion mechanism in the HGRF method; however, the larger and more diverse area of complementary fusion requires further attention. Much of the work in this area has been application specific and a sufficiently general framework for complementary fusion would 94 benefit both further research and industrial implementation. The discussions in this work regarding dimensional extension, uncertainty modification and range enhancement is a beginning of this framework, however, a much more thorough and rigorous approach is required. Investigation into implementing the fusion mechanisms presented in this work using fuzzy logic is recommended. A fuzzy logic implementation may be more industrially acceptable than the geometric uncertainty ellipsoid based methods presented herein. Ideally, tools based on both representations could be used interchangeably within the ELD Architecture. This would require developing methods to interface the uncertainty representations. This interface may be useful as well for future complementary fusion mechanisms that may be best suited to a fuzzy implementation. The E L D Architecture, as implemented in this work, requires many functional extensions to make it an industrially useable product. Firstly, a detailed design is required for the Planner and Knowledge Base to allow systematic implementation of high-level intelligence within the E L D . Secondly, a detailed design of the Actuator component is required to make this a useful industrial tool for low-level control. This would involve implementing controllers, tuning tools and automated controller selection mechanisms. Additionally, the E L D Architecture would have to be implemented on platforms suitable for real-time control. Further refinement of the Architecture Builder Tool will increase the industrial applicability of the E L D Architecture as well. Specifically, the implementation of a library of ELDs would increase the efficiency of implementing an E L D application. Additionally, further development of the E L D library interface in the Builder Tool to include on the fly E L D configuration would be useful. 
For example, an Image ELD could be developed for the library that is configurable to interface with a number of frame grabbers and cameras. This ELD would also have predefined and configurable run-time GUI functionality, such as displaying the camera image on the run-time GUI. Finally, the functionality of the Builder Tool should be extended for automated multiple platform implementation. The eventual fully functional Builder Tool would not require the developer to program any components of the ELD Architecture outside of the Builder Tool environment. Every component of the Architecture would be selectable from a list of predefined functions or able to be custom programmed within the Builder Tool environment.

References

[1] M. Abidi and R. Gonzalez, eds., Data Fusion in Robotics and Machine Intelligence, Boston, Academic Press, 1992.
[2] J. Albus, 'RCS: A Reference Model Architecture for Intelligent Control,' in Computer, Vol. 25, Issue 5, pp. 56-59, IEEE, 1992.
[3] J. Albus, H. McCain, and R. Lumia, 'NASA/NBS Standard Reference Model for Telerobot Control System Architecture (Nasrem),' Tech. Report 1,235, Nat'l Inst. Standards and Technology, Gaithersburg, Md., 1989.
[4] Y. Altintas and N.A. Erol, 'Open Architecture Modular Tool Kit for Motion and Machining Process Control,' in Annals of CIRP, Vol. 47, No. 1, 1998.
[5] R. Brooks, 'A Robust Layered Control System for a Mobile Robot,' in IEEE Journal of Robotics and Automation, Vol. RA-2, No. 1, 1986.
[6] R. Brooks, 'Intelligence Without Representation,' in Artificial Intelligence, vol. 47, pp. 139-159, 1991.
[7] H. Bruyninckx and J. DeSchutter, 'The Geometry of Active Sensing,' submitted to Automatica, July 20th, 1999.
[8] J. Budenske and M. Gini, 'Sensor Explication: Knowledge-Based Robotic Plan Execution through Logical Objects,' in IEEE Transactions on Systems, Man and Cybernetics, Vol. 27, Part B, No. 4, pp. 611-625, 1997.
[9] D. Dubois and H. Prade, 'Combination of Fuzzy Information in the Framework of Possibility Theory,' in Data Fusion in Robotics and Machine Intelligence (M. Abidi and R. Gonzalez, eds.), pp. 481-505, Boston, Academic Press, 1992.
[10] J. Elliott, Personal notes, 1995.
[11] T. Henderson and E. Shilcrat, 'Logical Sensor Systems,' in Journal of Robotic Systems, Vol. 1, No. 2, pp. 169-193, 1984.
[12] T. Henderson, C. Hansen and B. Bhanu, 'The Specification of Distributed Sensing and Control,' in Journal of Robotic Systems, Vol. 2, No. 4, pp. 387-396, 1985.
[13] T. Henderson, P. Allen, I. Cox, A. Mitiche, H. Durrant-Whyte and W. Snyder, Eds., 'Workshop on Multisensor Integration in Manufacturing Automation,' Snowbird, Utah, February 4-7, 1987.
[14] G. Heppler and G. D'Eleuterio, Newton's Second Law and All That: A Classical Treatment of Mechanics, 1997.
[15] IEEE, P1451.1 Draft Standard for a Smart Transducer Interface for Sensors and Actuators - Network Capable Application Processor (NCAP) Information Model, D1.83 ed., Dec. 1996.
[16] IEEE, P1451.2 Draft Standard for a Smart Transducer Interface for Sensors and Actuators - Transducer to Microprocessor Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats, D3.05 ed., Aug. 1997.
Appendix A

ELD Architecture Classes

Described herein is the ELD Architecture class structure, along with a description of the ELD classes and the functions and data they contain. The ELD Architecture has been implemented in the Microsoft Visual C++ language using Microsoft Developer Studio.

A.1 ELD Architecture Class Design

Illustrated in Figure A.1 is the ELD Architecture class design. At the top is the application class, CELDApp, which is generated by Microsoft Developer Studio. This class enables the application to run under the Microsoft Windows environment. Contained in the CELDApp class is a dialog class, CELDDlg, object. This class governs the application window functions and properties. It is within this class that the run-time GUI is programmed. ELDs can be accessed through the run-time GUI. Again, Microsoft Developer Studio automatically implements the base functionality of the CELDDlg class. Contained within the CELDDlg class is a CArchitecture object. The CArchitecture class contains all ELD Architecture functionality and data. This includes ELDs and the connections between them. The CELD class is the base ELD class, which contains all common ELD functionality and data. Specifically implemented ELDs are derived from this CELD class and are contained in the CArchitecture class. The specific ELD classes, CELDn, contain functionality and data specific to that ELD.

[Figure A.1: ELD Architecture Class Diagram. CELDApp has a CELDDlg; CELDDlg has a CArchitecture; CArchitecture has the specific ELDs (CELD1, CELD2, ..., CELDn); each specific ELDn is a CELD.]
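To make the containment and inheritance relationships of Figure A.1 concrete, the following minimal C++ sketch outlines the hierarchy. It is illustrative only: the member names m_ELD2, m_architecture and m_dlg are placeholders that do not appear in the listings below, and the MFC scaffolding generated by Microsoft Developer Studio is omitted.

    // Minimal structural sketch of the containment and inheritance in Figure A.1.
    // Only the relationships are shown; members and generated framework code are omitted.

    class CELD {                      // base class: common ELD functionality and data
    public:
        virtual ~CELD() {}
        virtual bool InitializeSensor()   { return true; }   // overridden by specific ELDs
        virtual bool InitializeActuator() { return true; }
    };

    class CELD1 : public CELD { /* functionality and data specific to this ELD */ };
    class CELD2 : public CELD { /* ... */ };

    class CArchitecture {             // owns all ELDs and the connections between them
        CELD1 m_ELD1;
        CELD2 m_ELD2;                 // placeholder member name
    };

    class CELDDlg {                   // application window; hosts the run-time GUI
        CArchitecture m_architecture; // placeholder member name
    };

    class CELDApp {                   // top-level application class
        CELDDlg m_dlg;                // placeholder member name
    };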
A.2 Class CArchitecture Description

Below, descriptions of the CArchitecture class functions and data are given.

Public Members:

Construction and Destruction
  CArchitecture()
      Constructs a CArchitecture object.
  virtual ~CArchitecture()
      Destructs a CArchitecture object.

Specific ELD Objects
  CELD1 m_ELD1
  ...
  CELDn m_ELDn

Pipe Handles for Communication to the Run-Time GUI
  HANDLE m_pipeHandleUIServer;
  HANDLE m_pipeHandleUIClient;

Pipe Handles for Communication to the DSP
  HANDLE m_pipeHandlePCToDSP;
  HANDLE m_pipeHandleDSPToPC;

Private Members:

DSP Pipe Functions
  bool InitializeDSPPipes()
      Initializes connections to DSP pipes.
  bool InitializeUIPipe()
      Initializes connections to run-time GUI pipes.

Architecture Building Functions
  bool CreateConnection(ELDs outputFrom, ELDs outputTo)
      outputFrom  Outputting ELD name.
      outputTo    Receiving ELD name.
      This function creates a connection between two ELDs.

A.3 Class CELD Description

Below, descriptions of the CELD class functions and data are given.

Public Members:

Construction and Destruction
  CELD()
      Constructs a CELD object.
  virtual ~CELD()
      Destructs a CELD object.

Functions for Data Access
  ELDs GetELDName()                             Returns the ELD's name.
  HANDLE GetPipeHandlePCToDSP()                 Returns the PC to DSP pipe handle.
  HANDLE GetPipeHandleDSPToPC()                 Returns the DSP to PC pipe handle.
  HANDLE GetPipeHandleUI()                      Returns the run-time GUI pipe handle.
  int GetNumOutputs()                           Returns the number of outputs the ELD has.
  int GetNumInputs()                            Returns the number of inputs the ELD has.
  SConnection GetOutputConnections(int index)   Returns the index-th output connection.
  SConnection GetInputConnections(int index)    Returns the index-th input connection.

Input/Output Constructing Functions
  bool SetInputConnection(ELDs InputELDName, CString &pipeName)
      InputELDName  Name of the inputting ELD.
      pipeName      Name of the pipe connecting the two ELDs.
      This function configures an input connection.
  bool SetOutputConnection(ELDs OutputELDName, CString pipeName)
      OutputELDName  Name of the outputting ELD.
      pipeName       Name of the pipe connecting the two ELDs.
      This function configures an output connection.

Initialization Functions
  bool Initialize(ELDs ELDName, CString description, HANDLE PCToDSP, HANDLE DSPToPC, HANDLE UI)
      ELDName      The name of the ELD.
      description  A description of the ELD's function.
      PCToDSP      Handle of the PC to DSP pipe.
      DSPToPC      Handle of the DSP to PC pipe.
      UI           Handle of the run-time GUI pipe.
      This function initializes an ELD's data. It is called by the CArchitecture constructor.
  bool InitializeHWnd(HWND hWnd)
      Initializes the run-time GUI window handle. This is called by CELDDlg.

Functions to be Defined in the Derived ELD Classes
  virtual bool InitializeSensor()               Specific ELD sensor initialization.
  virtual bool InitializeActuator()             Specific ELD actuator initialization.
  virtual bool BeginThread()                    Begins the ELD thread.
  virtual bool InterpretMessage(SComm data)     Interprets a message received through a pipe.

Run-Time GUI Display Functions
  void DisplayText(CString text)                Displays information on the run-time GUI.
  void DisplayText(CString text, double value)
  void DisplayText(CString text, double value, CString text2, double value2)

Functions for Sending Messages Via a Pipe
  bool PipeMessage(ELDs recipient, ELDMessages message, HandShakingModes handshaking,
                   float p1, float p2, ... float pn, long dataAddress)
      recipient    ELD to send the message to.
      message      Message to send.
      handshaking  Handshaking mode to use.
      pn           The n-th parameter.
      dataAddress  A memory address containing a data structure.
      This function sends a message from one ELD to another.

Handshaking Functions
  bool WaitForReply(SComm &comm, ELDs fromELD, ELDMessages desiredMessage, int timeOut)
      comm            The communication data.
      fromELD         The ELD the data is from.
      desiredMessage  The message that is expected.
      timeOut         How long to wait, in ms.
      This function waits for a reply from an ELD if one has been requested.
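To illustrate how these public messaging functions are intended to be used together, the following hypothetical fragment shows one ELD requesting output from another, written as if it appears inside a specific ELD derived from CELD. The ELD name IMAGE and the handshaking constant WAIT_FOR_REPLY are invented for illustration, and the interpretation of p1 and p2 as a desired data age and a timeout is an assumption; SEND_OUTPUT, OUTPUT, SComm and SImageSensorOutput are taken from Section A.5.

    // Hypothetical fragment: request the most recent output from another ELD and
    // block until the reply arrives. IMAGE and WAIT_FOR_REPLY are invented names.
    SComm reply;

    PipeMessage(IMAGE,            // recipient ELD (invented name)
                SEND_OUTPUT,      // ask the ELD to send its output (Section A.5.1)
                WAIT_FOR_REPLY,   // hypothetical HandShakingModes value
                50.0f,            // p1: desired age of the data, in ms (assumed meaning)
                200.0f,           // p2: time allowed to produce the data (assumed meaning)
                0.0f, 0.0f, 0.0f, 0.0f,   // p3..p6 unused
                0);               // no attached data structure

    if (WaitForReply(reply, IMAGE, OUTPUT, 200))
    {
        // The reply's dataAddress points at the sender's output structure (Section A.5.2).
        SImageSensorOutput* out =
            reinterpret_cast<SImageSensorOutput*>(reply.dataAddress);
        DisplayText(CString("Red channel uncertainty"), out->certaintyRed);
    }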
Protected Members:

ELD Data
  ELDs m_ELDName                       ELD name.
  CString m_ELDDescription             ELD function description.
  HANDLE m_pipeHandlePCToDSP           PC to DSP pipe handle.
  HANDLE m_pipeHandleDSPToPC           DSP to PC pipe handle.
  HANDLE m_pipeHandleUI                GUI pipe handle.
  int m_numberInputs                   Number of ELD inputs.
  int m_numberOutputs                  Number of ELD outputs.
  SConnection* m_inputConnections[]    Dynamically allocated array of connection data.
  SConnection* m_outputConnections[]   Dynamically allocated array of connection data.
  HWND m_hWnd                          GUI window handle.

Real-Time NT/DSP Synchronization Data
  int m_syncFreq                       Synchronization frequency.
  _timeb m_syncTime                    Synchronization start time.
  int m_continuousOutputFlag           On/off flag for continuous output.

Sensor Functions
  bool Sense()
      The overall sensor function.
  bool SendOutput(ELDs requestor, float desiredAge, float timeout)
      requestor   The requesting ELD's name.
      desiredAge  The desired age of the data.
      timeout     The amount of time given to get the data.
      This function activates the ELD's sensing capabilities and sends the data to the requestor.

NT/DSP Synchronization and Continuous Output Functions
  bool Synchronize()
      Synchronizes the NT and DSP platforms.
  bool StartContinuousInput(ELDs ELDInputToStart)
      ELDInputToStart  The ELD to start sending continuous output.
      This function starts an input ELD sending continuous output.
  bool EndContinuousInput()
      Ends the continuous output.
  bool SendContinuousOutput(ELDs sender, float syncFreq, long syncTimeDataAddress)
      sender               The name of the sending ELD.
      syncFreq             The frequency to send at.
      syncTimeDataAddress  The address of the starting sync time.
      This function sends continuous output when requested.
  bool ContinuousOutputReceived(UINT outputPtr, ELDs sender)
      outputPtr  A pointer to the output data.
      sender     ELD name of the sender.
      Upon receipt of continuous output this function directs the action of the ELD.

Data Fusion Functions
  bool Project2D(CEllipse ellipse, SNormal &normal, float projectionAngle)
      ellipse          The two-dimensional ellipsoid.
      normal           The one-dimensional ellipsoid.
      projectionAngle  The angle at which to project.
      This function projects a 2D ellipsoid into 1D. (Not fully implemented in C++.)
  bool Redundant1DFusion(SNormal measurement1, SNormal measurement2, SNormal &fused)
      measurement1  The first one-dimensional ellipsoid.
      measurement2  The second one-dimensional ellipsoid.
      fused         The resulting one-dimensional ellipsoid.
      This function redundantly fuses two one-dimensional ellipsoids using the HGRF method. (Not implemented in C++.)
  bool Redundant2DFusion(CEllipse measurement1, CEllipse measurement2, CEllipse &fused)
      measurement1  The first two-dimensional ellipsoid.
      measurement2  The second two-dimensional ellipsoid.
      fused         The resulting two-dimensional ellipsoid.
      This function redundantly fuses two two-dimensional ellipsoids using the HGRF method. (Not implemented in C++.)

Functions That Need to be Defined in the Specific ELD Class Derived From This Class
  virtual bool WriteData(UINT outputPtr, ELDs sender)
      outputPtr  The address of the output data structure.
      sender     The name of the sending ELD.
      This function maps the data received to the ELD's input data structure.
  virtual void* GetOutput(_timeb &bestTime)
      bestTime  Time that the data was taken.
      This function returns the output data and the time that it was taken.
  virtual bool Actuate(HandShakingModes handshaking, float setPoint)
      handshaking  The handshaking mode requested.
      setPoint     The new set point.
      This function sends a setpoint change to an actuator.
  virtual bool GetInput()                    Commands to receive input.
  virtual bool FuseData()                    Fuses the input data.
  virtual bool ProcessData()                 Processes the fused data.
  virtual bool Calibrate()                   Calibrates the sensor.
  virtual bool SendDSPOutput()               Sends output to the DSP.
  virtual bool SendDSPContinuousOutput()     Sends continuous output to the DSP.
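Redundant1DFusion and Redundant2DFusion are noted above as not implemented in C++; their Matlab realizations appear in Appendix B. As a reference point, the following C++ sketch shows what the two-measurement, one-dimensional case could look like when ported from Section B.1.1. It assumes that SNormal is a simple structure holding a mean and a variance (the actual field names are not listed in this appendix) and is written as a free function rather than a CELD member for clarity.

    #include <cmath>

    struct SNormal {        // assumed layout: one-dimensional mean and variance
        double mean;
        double var;
    };

    // Sketch of the two-measurement, one-dimensional HGRF fusion of Section B.1.1.
    bool Redundant1DFusion(const SNormal& m1, const SNormal& m2, SNormal& fused)
    {
        if (m1.var <= 0.0 || m2.var <= 0.0)
            return false;                          // variances must be positive

        const double nMeas = 2.0;                  // number of measurements fused

        // Nakamura (GRF) fusion assuming no separation of the means.
        const double varNaka = 1.0 / (1.0 / m1.var + 1.0 / m2.var);
        fused.mean = varNaka * (m1.mean / m1.var + m2.mean / m2.var);

        // Measurement spacing: distance of each mean from the fused mean,
        // normalized by that measurement's standard deviation.
        const double v1 = std::fabs(fused.mean - m1.mean) / std::sqrt(m1.var);
        const double v2 = std::fabs(fused.mean - m2.mean) / std::sqrt(m2.var);
        const double K  = (v1 > v2) ? v1 : v2;     // separation parameter

        // Certainty scaling function f_c(K): polynomial blend for K < 1,
        // linear growth sqrt(m)*(K + 1) beyond that.
        const double s = std::sqrt(nMeas);
        double C;
        if (K < 1.0) {
            C = 1.0 + (-11.0 + 7.0 * s) * K * K
                    + ( 18.0 - 7.0 * s) * K * K * K
                    + ( -8.0 + 2.0 * s) * K * K * K * K;
        } else {
            C = s * (K + 1.0);
        }

        // Scale the GRF variance so the fused uncertainty grows with disparity.
        fused.var = C * varNaka * C;
        return true;
    }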
A.4 Class CImage Description

This class is included as an example of a specific ELD implementation. It is derived from the CELD class and therefore inherits all CELD functionality and data. Functions defined as virtual in the CELD class are redefined in the derived class.

Public Members:

Construction and Destruction
  CImage()
      Constructs a CImage object.
  virtual ~CImage()
      Destructs a CImage object.

ELD Input/Output Data Structures
  SImageSensorInput *m_sensorInputPtr        Sensor input data variable.
  SImageSensorOutput *m_sensorOutputPtr      Sensor output data variable.
  SImageRequests *m_requestPtr               Request input variable.
  SQuery *m_queryPtr                         Query input variable.

Functions Defined Virtually in CELD
  bool InitializeSensor(char *configFileName)   Initializes the camera configuration file name.
  bool BeginThread()                            Call this to start the ELD thread.
  bool InterpretMessage(SComm data)             Message interpreter.

Private Members:

Functions Defined Virtually in CELD
  bool WriteData(UINT outputPtr, ELDs sender)
      outputPtr  The address of the output data structure.
      sender     The name of the sending ELD.
      This function maps the data received to the ELD's input data structure.
  bool GetInput()                            Gets the ELD sensor input.
  bool FuseData()                            Fuses the sensor input data.
  bool ProcessData()                         Processes the fused data.
  bool Calibrate()                           Calibrates the sensor.
  void *GetOutput(_timeb &bestTime)          Gets the output data and time.
  bool SendDSPOutput()                       Sends output to the DSP.

Data Specific to this ELD
  char *m_cameraConfigFile                   The name of the camera configuration file.
  ITX_INTRID m_intracq                       The frame grabber interrupt ID.
  pMODCNF m_icpmod                           Frame grabber data.

Functions Specific to this ELD
  void DisplayImage(HWND hWnd, short XWinPos, short YWinPos, short XWinSize, short YWinSize)
      hWnd      The GUI window handle.
      XWinPos   The x coordinate to start the display at.
      YWinPos   The y coordinate to start the display at.
      XWinSize  The x size of the image to display.
      YWinSize  The y size of the image to display.
      This function displays an image on the run-time GUI.
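Since CImage illustrates the pattern that every specific ELD follows, the skeleton below sketches the declaration of a hypothetical new ELD. The class name CMyELD and the commented data members are invented for illustration; only virtual functions named in Sections A.3 and A.4 are listed, and the surrounding architecture types (CELD, SComm, ELDs, and so on) are assumed to be available.

    // Hypothetical skeleton for a new specific ELD derived from CELD.
    // CMyELD and its members are illustrative names, not part of the implementation.
    class CMyELD : public CELD
    {
    public:
        CMyELD();
        virtual ~CMyELD();

        // Virtual functions from CELD that every specific ELD redefines.
        virtual bool InitializeSensor();
        virtual bool BeginThread();
        virtual bool InterpretMessage(SComm data);

    private:
        // Virtual functions that implement this ELD's sensing pipeline.
        virtual bool WriteData(UINT outputPtr, ELDs sender);  // map received data to the input structure
        virtual bool GetInput();                               // acquire raw sensor input
        virtual bool FuseData();                               // fuse the input data
        virtual bool ProcessData();                            // process the fused data
        virtual bool Calibrate();                              // calibrate the sensor
        virtual void* GetOutput(_timeb &bestTime);             // return output data and its time stamp
        virtual bool SendDSPOutput();                          // send output to the DSP

        // Input/output structures for this ELD would be defined per ELD (Section A.5.2),
        // e.g. pointers to hypothetical SMyELDSensorInput / SMyELDSensorOutput structures.
    };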
A.5 Structure and Data Definitions

A.5.1 ELD Messages and Names

The ELD messages that can be sent between ELDs are defined as an enumerated type. A list of example messages follows. The ELD names are defined as an enumerated type as well.

enum ELDMessages
{
    // Commands
    SENSE = 0,                       // Command input to sense.
    CALIBRATE,                       // Command input to calibrate.
    THRESHOLD,                       // Command input to threshold.
    SET_POINT,                       // Send a setpoint change to an actuator.
    DISPLAY,                         // Command input to display sensor input.
    SEND_OUTPUT,                     // Command input to send output.
    SEND_CONTINUOUS_OUTPUT,          // Command input to send continuous output.
    START_CONTINUOUS_INPUT,          // Command to start sending continuous output.
    END_CONTINUOUS_INPUT,            // Command to end sending continuous output.
    SYNCHRONIZE_TIME,                // Command synchronization to occur.
    START_FUSION,                    // Command input to start fusion.
    INITIALIZE,                      // Command input to initialize itself.
    STATUS,                          // Command information about input's status.

    // Requests
    QUERY_MANIPULATOR_X_POSITION_GT, // Query if manipulator's position is greater than x.
    REQUEST_MANIPULATOR_X_MOTION,    // Request the manipulator to move in x.

    // Replies
    DONE_NO_ERROR,                   // Reply that there was no error.
    DONE_ERROR,                      // Reply that there was an error.
    CONTINUOUS_OUTPUT,               // Reply that this is continuous output.
    NOT_DONE,                        // Reply that the command was not completed.
    OUTPUT,                          // Reply that this is output data.
    REPLY_QUERY,                     // Reply that this is the reply to a query.
    IDLE,                            // Reply that the ELD is idle.

    // Errors
    UNKNOWN_ELD_NAME,                // Error reply.
    UNKNOWN_MESSAGE,                 // Error reply.
    MESSAGE_IGNORED,                 // Error reply.
    TIMEOUT                          // Error reply.
};

A.5.2 Sensor Input/Output Structures

Each ELD has an Input and Output Structure that is defined individually. An example of the CImage ELD output structure is included below.

struct SImageSensorOutput
{
    BYTE *imageRgbPtr;         // RGB format image data.
    BYTE *imageYcrcbPtr;       // YCrCb format image data.
    WORD xSize;                // Horizontal size of image.
    WORD ySize;                // Vertical size of image.
    ITX_DEPTH pixelSize;       // Pixel data size.
    float certaintyRed;        // Uncertainty of the red channel.
    float certaintyGreen;      // Uncertainty of the green channel.
    float certaintyBlue;       // Uncertainty of the blue channel.
    _timeb *imageTimePtr;      // Time that image was taken.
};

A.5.3 Communication Structure

Messages in the ELD Architecture are passed through NT pipes and therefore must be packaged into a structure. The structure that is used is given below.

struct SComm
{
    ELDs sender;                     // Sending ELD name.
    ELDs recipient;                  // Receiving ELD name.
    ELDMessages messages;            // Message sent.
    HandShakingModes handshaking;    // Handshaking mode requested.
    float p1;                        // Data.
    float p2;                        // Data.
    float p3;                        // Data.
    float p4;                        // Data.
    float p5;                        // Data.
    float p6;                        // Data.
    long dataAddress;                // Input/Output data structure address.
};

Appendix B

Matlab Redundant Fusion Algorithms

B.1 Redundant Fusion Functions

The following redundant fusion functions have been written in Matlab. They accept input and provide output that is expressed in the same reference frame. Three functions are included below for three different measurement cases. Implementation of additional measurement cases requires modifying one of the following functions. The redundant fusion examples displayed in Chapter 5 can be generated by using the test suites included below.

B.1.1 Two Measurement, One-Dimensional Redundant Fusion Function

The following Matlab program was used to generate the simulation results shown in Chapter 5.

% This function fuses two one-dimensional measurements based on Jason Elliott's method
% (copyright Sept 2001)
%
% Inputs:  Mean1, Var1, Mean2, Var2.
%
% Outputs: FusedMean, FusedVar
%
% The Inputs and Output are expressed in the same frame of reference as determined by
% the inputs.
function [FusedMean, FusedVar] = Fusion_2M_1D(Mean1, Var1, Mean2, Var2)

%*************************************************************************************
% Nakamura Fusion, VarNaka, and MeanNaka (the fused mean)
%*************************************************************************************
% From Nakamura's method, determine the fused certainty ellipse assuming no separation
QFus = [Varl 0; 0 Var2]; SumJQJt = inv(Varl) + inv(Var2); Wl = inv(SumJQJt)*inv(Varl); W2 = inv(SumJQJt)*inv(Var2) ; W = [Wl W2]; VarNaka_w = W*QFus*W; %Find the fused mean according to Nakamura FusedMean = Wl*Meanl + W2*Mean2; % Measurement Spacing Vectors, v i vl_w = inv(Varl^O.5)*(FusedMean - Meanl); v2_w = inv(Var2~0.5)*(FusedMean - Mean2); ^ * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * % Determine the Separation Vector, K, from the v i ' s %************************************************************************************* % Search for the v i with the largest magnitude. K_w = abs(max(abs(vl_w),abs(v2_w))); o , * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * % Certainty Scaling Matrix, C %************************************************************************************* % Apply the c e r t a i n t y s c a l i n g function, fc(K), to the Separation Parameter, K. % The number of measurements being fused. m=2; % Apply the c e r t a i n t y s c a l i n g function, i f K_w < 1 C_w = 1.+(-11.+7.*sqrt(m))*K_w^2+(18.-7.*sqrt(m))*K_w*3+(-8.+2.*sqrt(ra))*K_wM; else C_w = sqrt(m)*(K_w +1); end .•' ,' %************************************************************************************* % Fused E l l i p s e Variance, SigmaFus_w (Scale SigmaNaka_w) % Apply the c e r t a i n t y s c a l i n g , C_s, to get the fused r e s u l t . FusedVar = C_w*VarNaka_w*(C_w)'; B.1.2 Two Measurement, Two-Dimensional Redundant Fusion Function The following Matlab program was used to generate the simulation results shown in Chapter 5. % This function fuses two two-dimensional measurements based on Jason E l l i o t t ' s method % (copyright Sept 2001) % % Inputs: Meanl = [Meanl_x, Meanl_y], % V a r l = [Sigmal_ll, Sigmal_12; Sigmal_21, Sigmal_22], % Mean2 = [Mean2_x, Mean2_y], % Var2 = [Sigma2_ll, Sigma2_12; Sigma2_21, Sigma2_22] % % Outputs: FusedMean = [FusedMean_x, FusedMean_y], % FusedVar = [FusedSigmall, FusedSigmal2; FusedSigma21, FusedSigma22] % NakaVar = [NakaSigmall, NakaSigmal2; NakaSigma21, NakaSigma22] % % The Inputs and Output are expressed i n the same frame of reference as determined by % the inputs. 113 function [FusedMean, FusedVar, NakaVar] = Fusion_2M_2D(Meanl, V a r l , Mean2, Var2) ^**************************************^ % Rotation Matrices, i_Rot_j (the r o t a t i o n from Frame j to Frame i ) [w_Rot_el, V a r l _ e l , notNeeded] = svd(Varl); [w_Rot_e2, Var2_e2, notNeeded] = svd(Var2); %********+****************+*************^ % Nakamura Fusion, VarNaka, and MeanNaka (the fused mean) % From Nakamura's method, determine the fused c e r t a i n t y e l l i p s e assuming no separation % of means. To determine the Q matrix'the E l l i p s e s must be i n variance form (squared) % and expressed i n Frame w. QFus = [Varl zeros(2,2); zeros(2,2) Var2]; SumJQJt = inv(Varl) + inv(Var2); WI = inv(SumJQJt)*inv(Varl) ; W2 = inv(SumJQJt)*inv(Var2) ; ' ' W = [WI W2] ; VarNaka_w = W*QFus*W'; [w_Rot_n, VarNaka_n, NakaVarAxesV] = svd(VarNaka_w); NakaVar = VarNaka_w; % This i s Nakamura's uncertainty matrix defined i n the axes of the p r i n c i p l e % components of the e l l i p s e , Frame n. 
%Find the fused mean according to Nakamura FusedMean = Wl*Meanl + W2*Mean2; %***+*************************+********^ % Measurement Spacing Vectors, v i vl_w = inv(Varl A0.5)*(FusedMean - Meanl); v2_w = inv(Var2 A0.5)*(FusedMean - Mean2); ^****************+***+********** % Rotation Matrices, w_R_vi % Determine the Rotation Matrix % The f i r s t axis of the Scaling % For v l i f vl_w(l) ~= 0 & vl_w(2) ~= 0 v l S c a l i n g A x i s l = vl_w/sqrt(vl_w(1) A2 + vl_w(2) A2); % The second axis of the Scaling Frame, v l , i s perpendicular to vector v l . vlScalingAxis2 = [-vlScalingAxisl(2); v l S c a l i n g A x i s l ( 1 ) ] ; else v l S c a l i n g A x i s l = [1; 0]; % The second axis of the Scaling Frame, v l , i s perpendicular to vector v l . vlScalingAxis2 = [0; 1]; end w_Rot_vl = [ v l S c a l i n g A x i s l v l S c a l i n g A x i s 2 ] ; % For v2 i f v2_w(l) ~= 0 & v2_w(2) ~= 0 v2Scal i n g A x i s l = v2_w/sqrt(v2_w(1) A2 + v2_w(2) A2); % The second axis of the Scaling Frame, v2, i s perpendicular to vector v2. v2ScalingAxis2 = [-v2ScalingAxisl(2); v 2 S c a l i n g A x i s l ( 1 ) ] ; else v 2 S c a l i n g A x i s l = [1; 0]; % The second axis of the Scaling Frame, v l , i s perpendicular to vector v l . v2ScalingAxis2 = [0; 1]; end from the world frame to the Scaling Frame (vi) Frame, v i , i s along the vector v i . 114 w_Rot_v2 = [v2ScalingAxisl v2ScalingAxis2]; . - , % Transform the Separation Parameters to the Scaling Frames (vi) v l _ v l = w_Rot_vl'*vl_w; v2_v2 = w_Rot_v2'*v2_w; I* ************************** * * * ************** * * * ************** * * * * ******************** % Determine the Separation Vector, K, from the v i ' s %************************************************************************************* % Search for the v i with the largest magnitude, i f v l _ v l ( l ) >= v2_v2(l) vLarge = 1; else vLarge = 2; end i f vLarge == 1 % Determine the largest second component by transforming v2 and v3 i n t o frame v l . v2_vl = w_Rot_vl'*w_Rot_v2*v2_v2; w_Rot_s = w_Rot_vl; K = [ v l _ v l ( l ) ; v 2 _ v l ( 2 ) ] ; else % Determine the largest second component by transforming v l and v3 int o frame v2. vl_v2 = w_Rot_v2 ' *w_Rot_vl*vl_vl.; w_Rot_s = w_Rot_v2; K = [v2_v2(l); v l _ v 2 ( 2 ) ] ; end ^************************************************************************************* % Certainty Scaling Matrix, C %************************************************************************************* % Apply the c e r t a i n t y s c a l i n g function, fc(K), to the Separation Parameter, K, i n the % Scaling Frame, (s), which i s the frame of vLarge. % Define the Certainty Scaling Matrix, C, i n Frame s. C_s = zeros (2,2),• % The sign of K i s not important as i t scales the matrix so take the absolute value. A % negative s c a l i n g i s a f l i p but the e l l i p s e i s symetrical so a f l i p changes nothing." Kpos = abs(K); % The number of measurements being fused. m=2 ; % Apply the c e r t a i n t y s c a l i n g function, for i = 1:2 i f Kpos (i) < 1 C _ s ( i , i ) = 1.+(-11.+7.*sqrt(m))*Kpos(i) A2+(18.-7.*sqrt(m))*Kpos(i) A 3+ (-8.+2.*sqrt(m))*Kpos(i) A4; else C _ s ( i , i ) = sqrt(m)*(Kpos(i) + 1); end end ^************************************************************************************* % Fused E l l i p s e Variance, FusedVar_w (Scale NakaVar_w) %************************************************************************************* % Determine Nakamura's c e r t a i n t y e l l i p s e expressed i n Frame k. 
VarNaka_s = w_Rot_s'*VarNaka_w*w_Rot_s; % Apply the c e r t a i n t y s c a l i n g , C_s, to get the fused r e s u l t . FusedVar = w Rot s*C s*VarNaka s*(C s)'*w Rot s'; 115 B.1.3 Three Measurement, Two-Dimensional Redundant Fusion Function The following Matlab program was used to generate the simulation results shown in Chapter 5. % This function fuses three two-dimensional measurements based on Jason % E l l i o t t ' s method (copyright Sept 2001) Meanl = [Meanl x, Meanl y' i V a r l = [Sigmal _H Sigmal 12; Sigmal 21, Sigmal 22 Mean2 = [Mean2" x, Mean2 y; t Var2 = [Sigma2 _11 Sigma2 Sigma2 21, Sigma2 22 Mean3 = [Mean3 x, Mean3 y; r Var3 = [Sigma3 _11 Sigma3 12; Sigma3 21, Sigma3 22 % Outputs: FusedMean = [FusedMean_x, FusedMean_yJ, . % FusedVar = [FusedSigmall, FusedSigmal2; FusedSigma21, FusedSigma22] % % The Inputs and Output are expressed i n the same frame of reference as determined by % the inputs. function [FusedMean, FusedVar, NakaVar] = Fusion_3M_2D(Meanl, V a r l , Mean2, Var2, Mean3, Var3) % Rotation Matrices, i_Rot_j (the r o t a t i o n from Frame j to Frame i) • %***+*****************+******************** [w_Rot_el, V a r l _ e l , notNeeded] = svd(Varl);. [w_Rot_e2, Var2_e2, notNeeded] = svd(Var2); [w_Rot_e3, Var2_e3, notNeeded] = svd(Var3); ^ * * * * * * * * * * * * * * * * * * * * * * + * * * * * * * * * * * * * * * * * * * * * * + * % Nakamura Fusion, VarNaka, and MeanNaka (the fused mean) %***************************************^ % From Nakamura's method, determine the fused c e r t a i n t y e l l i p s e assuming no separation % of means. To determine the Q matrix the E l l i p s e s must be i n variance form (squared) % and expressed i n Frame w. QFus = [Varl zeros(2,2) zeros(2,2); zeros(2,2) V a r 2 zeros (2,2); zeros(2,2) zeros(2,2) Var3]; SumJQJt = inv(Varl) + inv(Var2) + inv(Var3); Wl = inv(SumJQJt)*inv(Varl); W2 = inv(SumJQJt)*inv(Var2) ; W3 = inv(SumJQJt)*inv(Var3) ; W = [Wl W2 W3]; VarNaka_w = W*QFus*W'; [w_Rot_n, VarNaka_n, NakaVarAxesV] = svd(VarNaka_w); NakaVar = VarNaka_w; % This i s Nakamura's uncertainty matrix defined i n the axes of the p r i n c i p l e % components of the e l l i p s e , Frame n. % Find the fused mean according to Nakamura -FusedMean = Wl*Meanl + W2*Mean2 + W3*Mean3; % Measurement Spacing Vectors, v i %+*****+******************+************************+*** ^*.* vl_w = inv(Varl A0.5)*(FusedMean - Meanl); v2_w = inv (Var2 / S0. 5) * (FusedMean - Mean2); v3_w = inv (VarS-'O. 5) * (FusedMean - Mean3) ; ^ * * * * * * * * * * + * * * * * * * * * * * + * * * * * * * * * * * * + * * * * * * * * + * * * * * * * * * % Rotation Matrices, w_R_vi . . ,; 116 % Determine the Rotation Matrix from the world frame to the Scaling Frame (vi) % The f i r s t axis of the Scaling Frame, v i , i s along the vector v i . % For v l i f vl_w(l) ~= 0 & vl_w(2) ~= 0 v l S c a l i n g A x i s l = vl_w/sqrt(vl_w(1) A2 + vl_w(2) A2); % The second axis of the Scaling Frame, v l , i s perpendicular to vector v l . vlScalingAxis2 = [-vlScalingAxisl(2); v l S c a l i n g A x i s l ( 1 ) ] ; else v l S c a l i n g A x i s l = [1; 0] ; % The second axis of the Scaling Frame, v l , i s perpendicular to vector v l . vlScalingAxis2 = [0; 1]; end w_Rot_vl = [ v l S c a l i n g A x i s l v l S c a l i n g A x i s 2 ] ; % For v2 i f v2_w(l) ~= 0 & v2_w(2) ~= 0 v2ScalingAxisl = v2_w/sqrt(v2_w(1) A2 + v2_w(2) A2); % The second axis of the Scaling Frame, v2, i s perpendicular to vector v2. 
v2ScalingAxis2 = [-v2ScalingAxisl(2); v 2 S c a l i n g A x i s l ( 1 ) ] ; else v 2 S c a l i n g A x i s l = [1; 0]; % The second axis of the Scaling Frame, v l , i s perpendicular to vector v l . v2ScalingAxis2 = [0; 1]; end w_Rot_v2 = [v2ScalingAxisl v2ScalingAxis2]; % For v3 i f v3_w(l) ~= 0 & v3_w(2) ~= 0 v3Sc a l i n g A x i s l = v3_w/sqrt(v3_w(1) A2 + v3_w(2) A2); % The second axis of the Scaling Frame, v3, i s perpendicular to vector v3. v3ScalingAxis2 = [-v3ScalingAxisl(2); v 3 S c a l i n g A x i s l ( 1 ) ] ; else v3ScalingAxisl = [1; 01; % The second axis of the Scaling Frame, v l , i s perpendicular to vector v l . v3ScalingAxis2 = [0; 1]; end w_Rot_v3 = [v3ScalingAxisl v3ScalingAxis2]; % Transform the Separation Parameters to the Scaling Frames (vi) v l _ v l = w_Rot_vl'*vl_w; v2_v2 = w_Rot_v2'*v2_w; v3_v3 = w_Rot_v3'*v3_w; %******************************************^ % Determine the Separation Vector, K, from the v i ' s % Search for the v i with the largest magnitude ( c a l l e d vLarge). i f ( v l _ v l ( l ) >=v2_v2(l)) & ( v l _ v l ( l ) >=v3_v3(l)) vLarge = 1; e l s e i f (v2_v2(l) > = v l _ v l ( l ) ) & (v2_v2(l) >=v3_v3(l)) vLarge = 2; else vLarge = 3; end i f vLarge == 1 % Determine the largest second component by transforming v2 and v3 int o frame v2_vl = w_Rot_vl'*w_Rot_v2*v2_v2; v3_vl = w_Rot_vl'*w_Rot_v3*v3_v3; w_Rot_s = w_Rot_vl; i f abs(v2_vl(2)) > abs(v3_vl(2)) K = [vl v l (1) ; v2 v l (2) ] ; 117 else K= [ v l _ v l ( l ) ; v3_vl(2)]; end elseif vLarge == 2 % Determine the largest second component by transforming vl and v3 into frame v2. vl_v2 = w_Rot_v2'*w_Rot_vl*vl_vl; v3_v2 = w_Rot_v2'*w_Rot_v3*v3_v3; w_Rot_s = w_Rot_v2; i f abs(vl_v2(2)) > abs(v3_v2(2)) K= [v2_v2(l); vl_v2(2)]; else K= [v2_v2(l); v3_v2(2)]; end else % Determine the largest second component by transforming vl and v2 into frame v3. vl_v3 = w_Rot_v3'*w_Rot_vl*vl_vl; v2_v3 = w_Rot_v3'*w_Rot_v2*v2jv2; w_Rot_s = w_Rot_v3; i f abs(vl_v3(2)) > abs(v2_v3(2)) ... K = [v3_v3(l); vl_v3(2)]; else K= [v3_v3(l); v2_v3(2)]; end end % Certainty Scaling Matrix, C %*********************************^ % Apply the certainty scaling function, fc(K),'to the Separation Parameter, K, in the % Scaling Frame, (s), which is the frame of vLarge. % Define the Certainty Scaling Matrix, C, in Frame k. C_s = zeros (2,2); % The sign of K is not important as i t scales the matrix so take the absolute value. A % negative scaling is a f l i p but the ellipse is symetrical so a f l i p changes nothing. Kpos = abs(K); % The number of measurements being fused. m=3 ; % Apply the certainty scaling function, for i = 1:2 i f Kpos(i) < 1 C_s(i,i) = 1.+(-11.+7.*sqrt(m))*Kpos(i)A2+(18.-7.*sqrt(m))*Kpos(i)A3+ (-8.+2.*sqrt(m))*Kpos(i)A4; else C_s(i,i) = sqrt(m)*(Kpos(i) + 1); end end **********+*+*+************+*********+*********^ % Fused Ellipse Variance, SigmaE'us_w (Scale SigmaNaka_w) %*********************************+*********+*******^  % Determine Nakamura's certainty ellipse expressed in Frame s. ' . VarNaka_s = w_Rot_s'*VarNaka_w*w_Rot_s; % Apply the certainty scaling, C_s, to get the fused result. FusedVar = w Rot s*C s*VarNaka s*(C s)'*w Rot s'; 118 B.2 Test Suites B.2.1 Two Measurement, One-Dimensional Redundant Fusion Test Suite The following is the Matlab code that was used to generate the two one-dimensional measurement redundant fusion simulation examples in Chapter 5. % This f i l e contains a t e s t s u i t e for fusing two one-dimensional measurements. 
% Copyright Sept 2001 % Written by Jason E l l i o t t c l e a r ; % Specify the means and variances of the measurements ([meanl; mean2], [ v a r l ; v ar2]). Mean = [[5;5], [5;5.5], [5;6], [5;6.5], [5;7], [5;10], [5;8], [5;8], [5;8], [5;8]]; Var = [[1;1], [1,-1], [1;1], [1;1], [1;1], [1;1], [1;1], [1;2], [1;4], [1;10]]; % Fuse the measurements, for i=l:length(Mean(1,:)) [FusedMean(i), FusedVar(i)] = Fusion_2M_lD(Mean(1,i), V a r ( l , i ) , Mean ( 2 , i ) , V a r ( 2 , i ) ) ; NakaVar(i) = 1/((1/Var(1,i))+(1/Var(2,i))); end % Plot the r e s u l t s , x = [0: .1:17] ; for i=l:length(Mean(1,:)) f i g u r e ; hold on; : ' measl = (1/sqrt(2*3.14*Var(1,i)))*exp(-((x-Mean(1,i)). A2/(2*(Var(1,i))))); meas2 = (1/sqrt(2*3.14*Var(2,i)))*exp(-((x-Mean(2,i))."2/(2*(Var(2,i))))); fus = (1/sqrt(2*3.14*FusedVar(i)))*exp(-(x-FusedMean(i)). A2/(2*FusedVar(i))); Naka = (1/sqrt(2*3.14*NakaVar(i)))*exp(-(x-FusedMean(i))."2/(2*NakaVar(i))); p l o t ( x , measl, ' k — ' ) ; p l o t ( x , meas2, ' k — ' ) ; p l o t ( x , fus, 'k-'); p l o t ( x , Naka, 'k:'); hold o f f ; end B.2.2 Two Measurement, Two-Dimensional Redundant Fusion Test Suite The following is the Matlab code that was used to generate the two two-dimensional measurement redundant fusion simulation examples in Chapter 5. % This f i l e contains a t e s t s u i t e f or fusing two two-dimensional measurements. % Copyright Sept 2001 % Written by Jason E l l i o t t c l e a r ; % Specify the means and variances of the measurements ([mean_x; mean_y], % [var_x, var_xy; var_yx, var_y]). Meanl = [ [ 5 ; 5 ] , [5;5], [5;5], [5;5], [5;5], [5;5], [5;5], [5;5], [5;5], [5;5], [5;5]]; Mean2 = [[5;5], [6;5], [7;5], [10;5], [5.707;5.707], [ 6.414;6.414], [5+12.5"0.5; 5+12.5 A0.5], [7;4], [6.5;7], [8;9], [5;5.5]]; V a r l = [[1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0;0 1], [1 1; 1 9 ] , [4 1; 1 1 ] , [9 1. 5; 1. 5 ,1 ] , [9 1. 5; 1. 5 1 ] ] ; Var2 = [[1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0;0 1], [1 0.1; 0.1 4], [1 1;1 8], [1 1;1 4], [1 1;1 4]]; 119 % Fuse the measurements, fo r i=l:length(Meanl(1,:)) [FusedMean(1:2,i), FusedVar(1:2, ( 2 * i ) - 1 : 2 * i ) , NakaVar(1:2, ( 2 * i ) - 1 : 2 * i ) ] = Fusion_2M_2D(Meanl(1:2,i), V a r l ( 1 : 2 , ( 2 * i ) - l : 2 * i ) , Mean2(1:2,i), Var2(1:2, ( 2 * i ) - l : 2 * i ) ) ; end % Plot the r e s u l t s . for i=l:length(Meanl(1, :) ) % Determine the p r i n c i p l e axes for p l o t t i n g purposes [w_Rot_el, V a r l _ e l , notneeded] = svd ( V a r l ( 1 : 2 , ( 2 * i ) - 1 : 2 * i ) ) ; [w_Rot_e2, Var2_e2, notneeded] = svd(Var2(1:2,(2 + i ) - 1 : 2 * i ) ) ; [w_Rot_f, FusedVar_f, notneeded] = svd(FusedVar(1: 2,(2*i)-1:2*i)); [w_Rot_n, NakaVar_n, notneeded] = svd(NakaVar(1:2,(2*i)-1:2*i)); 0 . * * * * * * * * * * * * * * * * * * + * * * * * * * * * * * * * ^ % P l o t t i n g the E l l i p s e ' s means and p r i n c i p l e axes %***************+*+***************^ fig u r e ; hold on; % NOTE: The p l o t t i n g of a l l e l l i p s e s are not shown here. The others are p l o t t e d % s i m i l a r i l y . 
% Plot the center of Nakamura's e l l i p s e plot(FusedMean(1,i), FusedMean(2,i), 'kh'); % Plot the e l i p s e p r i n c i p l e axis r e s u l t i n g from the svd for the fused r e s u l t , plot((FusedMean(1,i)+FusedVar_f(1,1) A0.5*w_Rot_f(1,1)), (FusedMean(2,i)+FusedVar_f(1,1) A0.5*w_Rot_f(2,1)),'ko'); plot((FusedMean(1,i)-FusedVar_f(1,1) A0.5*w_Rot_f(1,1)), (FusedMean(2,i)-FusedVar_f(1,1) A0.5*w_Rot_f(2, 1)) , 'ko ' ) ; plot((FusedMean(1,i)+FusedVar_f(2,2) A0.5*w_Rot_f(1,2)), (FusedMean(2,i)+FusedVar_f(2,2) A0 .5*w_Rot_f(2, 2)),'ko'); plot((FusedMean(1,i)-FusedVar_f(2,2) A0.5*w_Rot_f(1,2 ) ) , (FusedMean(2,i)-FusedVar_f(2,2) A0 . 5*w__Rot_f (2, 2)),'ko'); ^*********************************^ % Plot the e l l i p s e curves % P l o t t i n g the F i n a l Fused e l l i p s e i n the xy plan with a r o t a t i o n % Determine the angle of rot a t i o n x2 = [FusedMean(1,i)-FusedVar_f(1,1) A0.5:.05:FusedMean(1, i)+FusedVar_f(1,1) A0.5] i f (w_Rot_f(2,1)>=0 & w_Rot_f(1,1)>=0) RotAngle = atan (w_Rot_f (2, 1)/w_Rot__f (1, 1) ) ; e l s e i f (w_Rot_f(2,1)>=0 & w_Rot_f(1,1)<=0) RotAngle = pi+atan(w_Rot_f(2,1)/w_Rot_f(1,1)); e l s e i f (w_Rot_f(2,1)<=0 & w_Rot_f(1,1)<=0) RotAngle = pi+atan(w_Rot_f(2,1)/w_Rot_f(1,1)); else RotAngle = atan(w_Rot_f(2,1)/w_Rot_f(1,1)); end RotAngle = -RotAngle; S i = sin(RotAngle); C = cos(RotAngle); a = FusedVar_f(1,1) A0.5; b = FusedVar_f(2,2) A0.5; X= FusedMean(1,i); Y= FusedMean(2,i); % From Maple c a l c u l a t i o n y P o s i t i v e = (-Si*b A2*C*X+C*a A2*Si*X+a A2*Y*C A2+b A2*Y-b A2*Y*C A2-C*a A2*Si*x2+Si*b A2*C*x2-(b A2*a A2*(a A2*C A2+2*X*x2-X A2-x2. A2+b A2-120 b A2*C A2) ) . A (1/2) ) / (b A2-b A2*C A2+a A2*C A2) ; yNegative = (-Si*b A2*C*X+C*a A2*Si*X+a A2*Y*C A2+b A2*Y-b A2*Y*C A2-C*a A2*Si*x2+Si*b A2*C*x2+(b A2*a A2*(a A2*C A2+2*X*x2-X A2-x2. A2+b A2-b A2*C A2)) . A (1/2))/(b A2-b A2*C A2+a A2*C A2); % Plot the e l l i p s e plot(x2, yPositive, 'k-'); plot(x2, yNegative, 'k-'); hold o f f ; end B.2.3 Three Measurement, Two-Dimensional Redundant Fusion Test Suite The following is the Matlab code that was used to generate the three two-dimensional measurement redundant fusion simulation examples in Chapter 5. % Written by Jason E l l i o t t Copyright Oct 2001 c l e a r ; % Specify the means and variances of the measurements ([mean_x; mean_y], % [var_x, var_xy; var_yx, var_y]). Meanl = [[5;5], [3;3], [5;5], [5;5]]; Mean2 = [[4.5;5.1], [6;5.5], [7;5], [5.4;5.2]]; Mean3 = [[5.5;5.5], [6.3;5], [5;7], [4.9;5.7]]; V a r l = [[1 0;0 1], [1 0;0 1], [1 0;0 1], [1 1;1 6]]; Var2 = [[1 0;0 1], [2 1.3/1.3 3], [1 0;0 1], [3 .7;.7 1.5]]; Var3 = [[1 0;0 1], [1 0;0 9], [1 0;0 1], [4 0;0 2]]; % Fuse the measurements, for i=l:length(Meanl(1,:)) [FusedMean(1:2,i), FusedVar(1:2, ( 2 * i ) - 1 : 2 * i ) , NakaVar(1:2, ( 2 * i ) - 1 : 2 * i ) ] = Fusion_3M_2D(Meanl(1:2,i), Varl(1:2, ( 2 * i ) - 1 : 2 * i ) , Mean2(1:2,i), Var2 (1:2, ( 2 * i ) - l : 2 * i ) , Mean3(1:2,i), Var3(1:2, (2*i)-1 : 2 * i ) ) ; end % Plot the r e s u l t s . 
for i=l:length(Meanl(1,:)) % Determine the p r i n c i p l e axes for p l o t t i n g purposes [w_Rot_el, V a r l _ e l , notneeded] = svd(Varl(1:2, ( 2 * i ) - 1 : 2 * i ) ) ; [w_Rot_e2, Var2_e2, notneeded] = svd(Var2(1:2,(2*i)-1:2*i)); [w_Rot_e3, Var3_e3, notneeded] = svd(Var3(1:2,(2*i)-1:2*i)); [w_Rot_f, FusedVar_f, notneeded] = svd(FusedVar(1:2, (2*i)-1:2 * i ) ) ; [w_Rot_n, NakaVar_n, notneeded] = svd(NakaVar(1:2,(2*i)-1:2*i)); * * * * * * * * * * * * * * * * * * % P l o t t i n g the E l l i p s e ' s means and p r i n c i p l e axes %************************************************************************* f i g u r e ; hold on; % Plot the center of e l l i p s e l p l ot(Meanl(1,i), Meanl(2,i), 'k+'); % Plot e l l i p s e l ' s p r i n c i p l e axes plot((Meanl(1,i)+Varl_el(1,1) A0.5*w_Rot_el(1,1)),(Meanl(2,i)+Varl_el(1,1) A 0.5*w_Rot_el(2,l)),'ko'); plot((Meanl(1,i)-Varl_el(1,1) A0.5*w_Rot_el(1,1)),(Meanl(2,i)-Varl_el(1,1) A0.5*w_Rot_el(2,1)),'ko'); plot((Meanl(1,i)+Varl_el(2,2) A0.5*w_Rot_el(1,2)),(Meanl(2,i)+Varl_el(2,2) A 0.5*w_Rot_el(2,2)),'ko'); plot((Meanl(1,i)-Varl_el(2,2) A0.5*w_Rot_el(1,2)),(Meanl(2,i)-V a r l el(2,2) A0.5*w Rot e l (2,2)),'ko'); 121 % Plot the e l l i p s e curves % P l o t t i n g the F i n a l Fused e l l i p s e i n the xy plan with a r o t a t i o n % determine the angle of r o t a t i o n x2 = [FusedMean(1,i) - FusedVar_f(1,1) A0.5 : .05 : FusedMean(1,i) + FusedVar_f(1,1) A0. 5] ; i f (w_Rot_f(2,1)>=0 & w_Rot_f(1,1)>=0) RotAngle = atan(w_Rot_f(2,1)/w_Rot_f(1,1)); e l s e i f (w_Rot_f(2,1)>=0 & w_Rot_f(1,1)<=0) RotAngle = pi+atan(w_Rot_f(2,1)/w_Rot_f(1,1)); e l s e i f (w_Rot_f(2,1)<=0 & w_Rot_f(1,1)<=0) RotAngle = pi+atan(w_Rot_f(2,1)/w_Rot_f(1,1)); else RotAngle = atan(w_Rot_f(2,1)/w_Rot_f(1,1)); end RotAngle = -RotAngle; S i = sin(RotAngle); C = cos(RotAngle); a = FusedVar_f(1,1) A0.5; b = FusedVar_f(2,2) A0.5; X= FusedMean(1,i); Y= FusedMean(2,i); % From Maple c a l c u l a t i o n y P o s i t i v e = (-Si*b A2*C*X+C*a A2*Si*X+a A2*Y*C A2+b A2*Y-b A2*Y*C A2-C*a A2*Si*x2+Si*b A2*C*x2-(b A2*a A2*(a A2*C A2+2*X*x2-X A2-x2. A2+b A2-b A2*C A2)) . A (1/2))/(b A2-b A2*C A2+a A2*C A2); yNegative = (-Si*b A2*C*X+C*a A2*Si*X+a A2*Y*C A2+b A2*Y-b A2*Y*C A2-C*a A2*Si*x2+Si*b A2*C*x2+(b A2*a A2*(a A2*C A2+2*X*x2-X A2-x2. A2+b A2-b A2*C A2)) . A (1/2))/(b A2-b A2*C A2+a A2*C A2); % Plot the e l l i p s e plot(x2, y P o s i t i v e , 'k-'); plot(x2, yNegative, 'k-'); % NOTE: The code used to plot a l l the e l l i p s e s i s not shown. It i s % s i m i l a r to what i s shown above, hold o f f ; end 122 Appendix C C Calibration Data C.1 Reference Frame Calibration The following table includes the measurement data collected when determining the offset between the established world reference frame and the sensor's frame of reference. 
               Pointer Locator
               Encoder                          Ultrasound
Measurement    X Offset (mm)   Y Offset (mm)    X Offset (mm)   Y Offset (mm)
1              94.38           177.57           132.25          207.40
2              94.49           178.50           132.17          208.52
3              94.50           178.56           132.20          208.55
4              94.38           178.49           132.17          207.38
5              94.41           178.41           132.15          207.40
6              94.49           178.21           132.20          207.99
7              94.53           178.26           132.22          207.96
8              94.50           177.83           132.07          207.40
9              94.50           177.97           132.14          208.55
10             94.49           177.98           132.10          208.55
11             94.45           177.80           132.10          207.38
12             94.46           177.92           132.05          208.55
13             94.44           177.80           132.10          207.99
14             94.41           177.95           132.05          207.38
15             94.47           178.05           132.10          207.38
16             94.52           177.88           132.14          208.55
17             94.52           177.83           132.02          207.48
18             94.50           177.83           132.07          207.53
19             94.51           177.72           132.15          207.48
20             94.43           177.67           132.10          206.87
Mean           94.47           178.01           132.13          207.81
Variance       0.0022          0.0863           0.0040          0.3013

Table C.1: Calibration Data.

The accuracy of the measuring devices used is included in the table below. Comparison of these figures with the variances of the XY Table sensors reveals that these measuring devices contribute very little uncertainty to the quantification of the sensor variances.

Measuring Device    Accuracy
Verniers            +/- 0.02 mm
Dial Gauge          +/- 0.001 in.
Height Gauge        +/- 0.001 in.

Table C.2: Measuring Device Accuracies.

C.2 Sensor Measurement Variances

Table C.3 includes the data collected from the repeated measurement trials used to establish the variance of the encoder, ultrasound and pointer locator sensors. The variance was determined by comparing the measurements to the true value established by dial gauge measurements.

           True Value (Dial Gauge)    Pointer Locator / Encoder     Ultrasound
Position   X (mm)       Y (mm)        X (mm)       Y (mm)           X (mm)       Y (mm)
1          223.1961     309.1038      220.7125     307.5083         220.1519     307.8867
2          243.7701     329.8048      244.0380     330.9971         242.8905     331.7732
3          303.2061     309.4848      305.8111     306.5793         304.4156     306.0959
4          304.9333     254.9002      306.7383     256.0492         304.8347     254.4869
5          245.7767     277.7094      246.4088     275.5989         245.5668     275.6808
6          214.0013     300.4424      211.0839     300.6071         210.9690     301.4896
Variance   N/A          N/A           4.1990       3.0350           3.4587       3.7039

Table C.3: Sensor Variance Data.
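The summary rows in Tables C.1 and C.3 are ordinary sample statistics. As a worked illustration of the procedure described in Section C.2, the short routine below computes a mean offset and a variance about known true values. It is a sketch written for this appendix rather than part of the ELD implementation; it uses a divide-by-N variance and only the first three trials of one sensor column, so its output is not expected to match the tabulated values.

    #include <vector>
    #include <cmath>
    #include <cstdio>

    // Mean of repeated offset measurements (as in the Mean row of Table C.1).
    double MeanOf(const std::vector<double>& samples)
    {
        double sum = 0.0;
        for (size_t i = 0; i < samples.size(); ++i) sum += samples[i];
        return sum / static_cast<double>(samples.size());
    }

    // Variance of sensor readings about the corresponding true values
    // (one reading and one true value per trial), as described in Section C.2.
    double VarianceAboutTruth(const std::vector<double>& readings,
                              const std::vector<double>& truth)
    {
        double sumSq = 0.0;
        for (size_t i = 0; i < readings.size(); ++i) {
            const double error = readings[i] - truth[i];
            sumSq += error * error;
        }
        return sumSq / static_cast<double>(readings.size());
    }

    int main()
    {
        // First three X offsets from the first sensor column of Table C.1.
        std::vector<double> xOffsets;
        xOffsets.push_back(94.38); xOffsets.push_back(94.49); xOffsets.push_back(94.50);

        // First three X readings and true X values from Table C.3 (one sensor column only).
        std::vector<double> sensorX;
        sensorX.push_back(220.7125); sensorX.push_back(244.0380); sensorX.push_back(305.8111);
        std::vector<double> trueX;
        trueX.push_back(223.1961); trueX.push_back(243.7701); trueX.push_back(303.2061);

        std::printf("mean offset            = %f mm\n", MeanOf(xOffsets));
        std::printf("variance about truth   = %f mm^2\n", VarianceAboutTruth(sensorX, trueX));
        return 0;
    }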
