UBC Theses and Dissertations

Robust and autonomous multi-robot cooperation using an artificial immune system. Khan, Muhammad Tahir. 2010.
Robust and Autonomous Multi-Robot Cooperation Using an Artificial Immune System

by

Muhammad Tahir Khan

B.Sc. Mechanical Engineering, NWFP University of Engineering and Technology, 1997
M.EngSc. Mechatronics, University of New South Wales, 1999

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2010

© Muhammad Tahir Khan, 2010

ABSTRACT

This thesis investigates autonomous, fault-tolerant cooperative operation and intelligent control of multi-robot systems in a dynamic, unstructured, and unknown environment. It makes significant original contributions to autonomous robot cooperation, dynamic task allocation, system robustness, and real-time performance. The thesis develops a fully autonomous, fault-tolerant, distributed control framework based on an artificial immune system (AIS) for cooperative multi-robot systems. The multi-robot system consists of a team of heterogeneous mobile robots that cooperate with each other to achieve a global goal while resolving conflicts and accommodating full and partial robot failures. In this framework, the system autonomously chooses the appropriate number of robots required to carry out a task in an unknown and unpredictable environment. The AIS approach incorporated into the multi-robot system framework provides robust performance, self-deterministic cooperation, and the ability to cope with an inhospitable environment. Drawing on the structure of the human immune system, the immune response, immune network theory, and the mechanisms of interaction among antibody molecules, the robots in the team make independent decisions, coordinate, and, when required, cooperate with each other to accomplish a common goal.
As needed for application in cooperative object transportation by mobile robots, the thesis develops a new method of object pose estimation. In this method, a CCD camera, optical encoders, and a laser range finder are the sensors used by the robots to estimate the pose of a detected object. The thesis also develops a market-based algorithm for autonomous multi-robot cooperation, which is then used for comparative evaluation of the performance of the developed AIS-based framework. To validate the developed techniques, a Java-based simulation system and a physical multi-robot experimental system are developed. The physical system is intended to transport multiple objects of interest to a goal location in a dynamic and unknown environment with complex static and dynamic obstacle distributions. The approaches developed in this thesis are implemented in the prototype system in our laboratory and rigorously tested and validated through both computer simulation and physical experimentation.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Nomenclature
Acknowledgements
Chapter 1 Introduction
1.1 Goals of the Research
1.2 Problem Definition
1.3 Design Requirements for a Cooperative Control Framework
1.3.1 Flexibility
1.3.2 Robustness and Fault Tolerance
1.3.3 Local Sensing Capabilities of the Robots
1.4 Related Work
1.4.1 Multi-Robot Cooperation
1.4.2 Multi-Robot Coordination in Robot Soccer Teams
1.4.3 Fault Tolerance and Multi-Robot Cooperation
1.4.4 Market-Based Multi-Robot Systems
1.4.5 Artificial Immune System Based Multi-Robot Systems
1.4.6 Object Pose Estimation
1.5 Contributions and Organization of the Thesis
Chapter 2 Modeling the Immune System
2.1 Biological Immune System
2.1.1 Anatomy of the Immune System
2.1.2 How the Immune System Works
2.1.3 Idiotypic Network Theory
2.2 Artificial Immune System
2.2.1 Modeling of the Idiotypic Immune Network
2.2.2 Properties of the Artificial Immune System
Chapter 3 Application of Artificial Immune System in Multi-Robot Cooperation
3.1 System Development
3.1.1 Assumptions
3.1.2 AIS and Multi-Robot Cooperation
3.2 Overall Multi-Robot Cooperative System
3.2.1 Coordination Among Antibodies Before Elimination
3.2.2 Fault Tolerance
3.2.3 Binding Affinity Function
3.2.4 Cooperation Between Antibodies to Eliminate an Antigen
3.3 Simulation Study
3.3.1 Effect of the Number of Antibodies
3.3.2 Effect of Robot Failure on the System
3.4 Physical Experiments
3.5 Multi-Object Transportation
3.6 Test Environment of Multi-Robot Cooperation
3.7 Immune System and Multi-Robot Cooperation
3.7.1 Antibody and Robot
3.7.2 Idiotypic Network Model and Multi-Robot Cooperation
3.8 Simulation Study for Multi-Object Transportation
3.8.1 Effect of the Number of Antibodies
3.8.2 Fault Tolerance
3.9 Summary
Chapter 4 Comparison with Market Based Multi-Robot Cooperation
4.1 Market Based Approaches
4.1.1 Auction
4.2 Auction-Based Multi-Robot Cooperation
4.2.1 Assumptions
4.3 Auction-Based Algorithms for Multi-Robot Cooperation
4.3.1 Auction Algorithm
4.3.2 Single Robot Task Execution Algorithm
4.3.3 Cooperative Task Execution Algorithm
4.3.4 Fault Tolerance
4.3.5 Algorithm When Auctioneer Robot Fails Before Announcing the Winner
4.3.6 Algorithm When Auctioneer Robot Fails After Announcing the Winner
4.3.7 Algorithm for Failure of Winner Robot
4.3.8 Algorithm for Robot Failure During Single Robot Task Execution
4.3.9 Algorithm for Failure of Leader Robot During Cooperative Task Execution
4.4 Bidding Function
4.5 Simulation Study
4.5.1 Discussion
4.6 AIS- Versus Market-Based Multi-Robot Cooperation
4.6.1 Common Elements
4.7 Comparative Analysis
4.7.1 Solution Quality
4.7.2 Communication Requirement
4.7.3 Discussion
4.8 Summary
Chapter 5 Object Pose Estimation
5.1 Overview
5.2 Test Bed
5.3 Global Localization of Robot
5.4 Color Blob Tracking
5.5 Object Pose Estimation
5.5.1 Relative Pose Estimation
5.5.2 Object Global Pose Estimation
5.6 Experimental Validation
5.6.1 Experiments with the Stationary Robot
5.6.2 Experiments with a Moving Robot
5.7 Summary
Chapter 6 Conclusions
6.1 Meeting Design Requirements
6.1.1 Flexibility
6.1.2 Robustness and Fault Tolerance
6.1.3 Local Sensing Capabilities of the Robots
6.2 Primary Contribution
6.3 Limitations and Suggested Future Research
Bibliography
Appendix: Experimental Test Bed
A.1 A Cooperative Multi-Robot Simulation Platform
A.1.1 Design Requirements
A.1.2 Sensors and Obstacle Avoidance
A.2 Physical Test Bed
A.2.1 The Pool of Robots
A.2.2 Sensors
A.3 Summary

LIST OF TABLES

Table 5.1 The actual and estimated object pose results from the first set of experiments with a stationary robot
Table 5.2 The actual and estimated object pose results from the second set of experiments with a stationary robot
Table 5.3 The actual and estimated object pose results from the first set of experiments with a moving robot
Table 5.4 The actual and estimated object pose results from the second set of experiments with a moving robot

LIST OF FIGURES

Figure 1.1 Schematic representation of the developed system
Figure 2.1 The structure of an antibody
Figure 2.2 Cooperation of antibodies to eliminate an antigen
Figure 2.3 Jerne's idiotypic network
Figure 3.1 Antibody (robot) light and heavy chains
Figure 3.2 Control framework of AIS-based multi-robot cooperation
Figure 3.3 Epitope and paratope representation
Figure 3.4 Orientation and obstacle detection within the detection radius
Figure 3.5 Simulation environment
Figure 3.6 Effect of the number of antibodies (robots) on the communication burden during coordination to determine a suitable robot for task cooperation
Figure 3.7 Effect of the number of antibodies (robots) on the time during coordination to determine a suitable robot that can cooperate with the initiating antibody
Figure 3.8 Effect of the number of antibodies (robots) on the communication burden during elimination of an object antigen (transportation to a goal location)
Figure 3.9 Effect of the number of antibodies (robots) on the time (steps) during elimination of an object antigen (transportation to a goal location)
Figure 3.10 Communication burden due to partial or full failure of the helping antibody that approaches the object antigen to cooperate with the initiating antibody
Figure 3.11 Effect on the time (steps) due to partial or full failure of the helping robot that approaches the object antigen in order to cooperate with the initiating antibody
Figure 3.12 Communication burden due to partial or full failure of the initiating antibody
Figure 3.13 Effect on the time (steps) due to partial or full failure of the helping antibody
Figure 3.14 Communication burden due to partial or full failure of a cooperating antibody during elimination of an object antigen
Figure 3.15 Effect on time (steps) due to partial or full failure of a cooperating antibody during elimination of an object antigen
Figure 3.16 Experimental platform for multi-robot cooperation
Figure 3.17 Multi-robot cooperative object transportation in a real environment
Figure 3.18 The multi-robot multi-object simulation platform
Figure 3.19 Light and heavy chains of an antibody (robot)
Figure 3.20 Antibody-antigen and antibody-antibody stimulation and suppression
Figure 3.21 Antibody paratope and antigen epitope matching
Figure 3.22 Antibody-antibody idiotope and paratope matching
Figure 3.23 Effect of the number of antibodies (robots) on the time during coordination to determine a suitable robot that can cooperate with the initiating antibody
Figure 3.24 Effect of the number of antibodies (robots) on the time (steps) during elimination of an antigen (transportation to a goal location)
Figure 3.25 Effect of the number of antibodies (robots) on the communication burden during coordination to determine a suitable antibody for task cooperation
Figure 3.26 Effect of the number of antibodies (robots) on the communication burden
Figure 3.27 Robot entrapment with the increase in the number of antibodies
Figure 3.28 Average time (steps) incurred to eliminate all antigens
Figure 3.29 Average number of messages incurred to eliminate all antigens
Figure 3.30 Effect on execution time (steps) due to partial or full failure of the helping antibody that approaches the antigen in order to cooperate with the initiating antibody
Figure 3.31 Communication burden due to partial or full failure of the helping antibody that approaches the antigen to cooperate with the initiating antibody
Figure 3.32 Effect on execution time (steps) due to partial or full failure of the initiating antibody
Figure 3.33 Communication burden due to partial or full failure of the initiating antibody
Figure 3.34 Effect on execution time (steps) due to partial or full failure of a cooperating antibody during elimination of an antigen
Figure 3.35 Communication burden due to partial or full failure of a cooperating antibody during elimination of an antigen
Figure 3.36 Effect on execution time (steps) due to partial or full failure of an antibody during the elimination of an antigen by a single antibody
Figure 3.37 Communication burden due to partial failure of an antibody during the elimination of an antigen by a single antibody
Figure 4.1 The multi-robot multi-object simulation platform
Figure 4.2 Effect on execution time (steps) due to partial or full failure of the auctioneer robot
Figure 4.3 Communication burden due to partial or full failure of the auctioneer robot
Figure 4.4 Effect on execution time (steps) due to partial or full failure of the winner robot
Figure 4.5 Communication burden due to partial or full failure of the winner robot
Figure 4.6 Effect on execution time (steps) due to partial or full failure of the leader robot
Figure 4.7 Communication burden due to partial or full failure of the leader robot
Figure 4.8 Effect on execution time (steps) due to failure of the follower robot
Figure 4.9 Communication burden due to failure of the follower robot
Figure 4.10 Effect on execution time (steps) due to partial or full failure of a robot during the transportation of an object by a single robot
Figure 4.11 Communication burden due to partial or full failure of a robot during transportation of an object by a single robot
Figure 4.12 Comparison of time taken during the coordination process of selecting suitable partners
Figure 4.13 Comparison of time taken due to partial failure of the auctioneer/initiator robot
Figure 4.14 Comparison of time taken due to full failure of the auctioneer/initiator robot
Figure 4.15 Comparison of time taken due to partial failure of the winner/helper robot
Figure 4.16 Comparison of time taken due to full failure of the auctioneer/initiator robot
Figure 4.17 Comparison of time taken during object transportation
Figure 4.18 Comparison of time taken due to partial failure of the leader robot (market-based) or of a cooperating robot (AIS-based)
Figure 4.19 Comparison of time taken due to full failure of the leader robot (market-based) or of a cooperating robot (AIS-based)
Figure 4.20 Comparison of communication burden during the coordination process of selecting suitable partners
Figure 4.21 Comparison of communication burden due to partial failure of the auctioneer/initiator robot
Figure 4.22 Comparison of communication burden due to full failure of the auctioneer/initiator robot
Figure 4.23 Comparison of communication burden due to partial failure of the winner/helper robot
Figure 4.24 Comparison of communication burden due to full failure of the winner/helper robot
Figure 4.25 Comparison of communication burden due to partial failure of the leader robot (market-based) or of a cooperating robot (AIS-based)
Figure 4.26 Comparison of time (steps) due to full failure of the leader robot (market-based) or of a cooperating robot (AIS-based)
Figure 5.1 General scheme of object pose estimation for cooperative object transportation
Figure 5.2 Pioneer P3-DX robot
Figure 5.3 Global pose estimation of a wheeled robot
Figure 5.4 Division of camera frame into four quadrants
Figure 5.5 Laser range sensor
Figure 5.6 Relative object pose estimation
Figure 5.7 Arbitrary layout of objects
Figure 5.8 The x-axis error from Table 5.1
Figure 5.9 The y-axis error from Table 5.1
Figure 5.10 The orientation error from Table 5.1
Figure 5.11 The x-axis error from Table 5.2
Figure 5.12 The y-axis error from Table 5.2
Figure 5.13 The orientation error from Table 5.2
Figure 5.14 The x-axis error from Table 5.3
Figure 5.15 The y-axis error from Table 5.3
Figure 5.16 The orientation error from Table 5.3
Figure 5.17 The x-axis error from Table 5.4
Figure 5.18 The y-axis error from Table 5.4
Figure 5.19 The orientation error from Table 5.4
Figure A.1 Simulation platform for multi-robot cooperation
Figure A.2 The multi-robot object transportation system
Figure A.3 P3-DX robot
Figure A.4 P3-AT robot
Figure A.5 Sonar arrangement on the P3-DX/AT
169  xii  NOMENCLATURE Notations G  Strength of possible reaction between epitope and paratope  S  Threshold Value  δ (Chapter 2)  Number of complementary bits in excess to threshold value  mij  Matching function representing the degree of recognition for suppression  {x1 , x2 ,...., x N }  N number of antibodies  {y1 ,....., y n }  n number of antigens  β = f ( L 2 p , d pa )  Binding affinity function of the light chain L2 and Euclidean distance between the antibody and antigen  ormn  Orientation of the antibody  obmn  Obstacles in the path between the antibody and the antigen  sr  Successful eliminations of the particular antigen by an antibody  v  Velocity of the antibody  α  Stimulation of antibody xi in response to the lone antigen y j  A ji  Matching function between the antibody and the antigen  δ (Chapter 3)  Stimulation of the antibodies xi in response to the antibody x j  S ji  Matching function representing the degree of recognition for  τ  stimulation Suppression of the antibody xi in response to all other antibodies x j  Pij  Matching function representing the degree of recognition for suppression  xiii  ξ (Chapter 3)  Antibody failure  k  Stimulus rate at which the malfunctioned antibody xi is declared as failed  β (Chapter 4)  Bidding function  μi  Weights assigned to different variables according to their relative importance in the bidding function  ξ (Chapter 4) ς  Cartesian distance between the robot and the task Number of successful culminations of the particular task by a robot The state vector representing the pose of the mobile robot in the  q (k )  global frame  q ( k + 1)  The robot global pose for a wheeled robot  θ (k )  Angle between the robot and the global x-axis the closest obstacle  ω L (k )  Angular velocity of the left wheel of robot  ωR (k )  Angular velocity of the right wheel of robot  φ  Phase difference  fm  Modulation frequency  φ  Empty set  OP = [ X C  T′  YC  θ ′]  Object pose relative to robot pose Homogeneous 
transformation matrix between the object coordinate system and the robot coordinate system  T ′′  The homogeneous transformation matrix between the robot coordinate system and global coordinate system  T  Homogeneous transformation matrix between the object coordinate system and the global coordinate system  O ′′  Pose of the object in the global coordinate system  xiv  Abbreviations MRS  Multi-Robot Systems  AIS  Artificial Immune System  CCD  Charged Couple Device  SQKF  Sequential Q-learning with Kalman Filtering  LED  Light Emitting Diode  IS  Immune System  Ag  Antigen  Ab  Antibody  Id  Idiotopes  TCP/IP  Transmission Control Protocol/Internet Protocol  xv  ACKNOWLEDGEMENTS  First and foremost I wish to express my gratitude and appreciation to my supervisor, Prof. Clarence W. de Silva, for his thorough guidance, continuous support, and extreme patience during the course of my Ph.D. research. Four years ago, when I hesitantly contacted him to seek a Ph.D. student position in his group, I was afraid that someone of his extraordinary achievements would not accept me. However, I was pleasantly surprised when he kindly accepted me as a graduate student. Apart from his fine academic mentoring, he has also given me the opportunity to develop my managerial and technical skills by appointing me as the Lab Manager of the Industrial Automation Laboratory, under his Directorship. In the first year of my PhD, when I was deciding on a research topic, I was going through different theses completed in our group and came across a dedication in which Dr. de Silva was mentioned as a “gift from heaven.” This expression seemed strange to me, and I could not make sense of it at the time. However, with the passage of time, I have realized that it was a completely appropriate statement. To me, he is a gift from heaven and a spiritual father. I wish to thank my research committee members, Dr. Farrokh Sassani and Dr. 
Mohamed Gadala, who carefully assessed my thesis and gave me thoughtful feedback. I also wish to express my appreciation to the university examiners of my PhD defense, Dr. Vijay Bhargava and Dr. Mu Chiao, and to the external examiner (anonymous). I wish to thank as well the NWFP University of Engineering & Technology, Peshawar, Pakistan, and the Higher Education Commission of the Government of Pakistan for awarding me a PhD fellowship. In addition, I am grateful for PhD tuition awards from the University of British Columbia and for other financial support, including research assistantships and equipment funding from the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, and the British Columbia Knowledge Development Fund, through Dr. de Silva as the principal investigator.

I would like to thank Ms. Yuki Matsumara, the Graduate Secretary of the department, who finds a solution to every problem of the graduate students and is a very kind and helpful lady. I would also like to express my gratitude to Dr. Farrokh Sassani, the former Graduate Advisor of the department, who gave me the feeling that, as a graduate student of the department, there was another person besides my supervisor with whom I could share my problems and who would listen kindheartedly.

I wish to take this opportunity to thank my friends and colleagues in our research group, especially Dr. Lalith Gamage, Dr. Saeed Behbahani, Dr. Ying Wang, Dr. Farbod Khoshnoud, Mr. Roland (Haoxiang) Lang, Mr. Mohammed Alrasheed, Ms. Madalina Wierzbicki, Mr. Gamini Siriwardana, Mr. Arunasiri Liyanage, Mr. Ramon Campos, Mr. Srinivas Raman, Mr. Behnam Razavi, and Mr. Edward (Yanjun) Wang. You guys have made my life at UBC pleasant and memorable. In addition, I want to thank the undergraduate students who worked for periods of time in the Industrial Automation Laboratory, especially Mr. Toar Imanuel and Ms. Villia Ingriany.
As the Lab Manager, I was greatly helped by the staff of the department, particularly Ms. Barb Murray and Mr. Perry Yubano. My sincere thanks go to them.

Finally, above all, I wish to express my heartfelt appreciation and gratitude to my father, mother, wife, brothers, and sisters, and to all my teachers, from those who taught me the alphabet to those who guided me through the completion of my PhD thesis, for their strong support, continuous love, and encouragement. I dedicate this thesis to them.

Chapter 1
Introduction

Designing a Multi-Robot System (MRS) that can autonomously perform an assigned task is a significant challenge, with important practical applications. Researchers have undertaken this challenge in different domains; for example, multi-agent architectures, taxonomies of MRS, multi-robot learning, and communication, cooperation, and coordination among robots. An MRS comprises a team of autonomous robots working together to execute a desired task. It is a distributed and self-organizing system that tends to find the most desirable solution to the problem without external intervention. The robots in an MRS are typically intelligent to some extent and possess local views. They sense changes in the environment and in other robots, and take actions based on this information. The robots may cooperate, communicate, and compete with each other. MRS are applicable in a number of challenging practical tasks such as cleaning up hazardous material, exploring hostile and dangerous environments, search and rescue, and planetary exploration. There are also numerous advantages of MRS; for example, the failure of one robot will not cripple the entire system, making it more robust and reliable. It may be cost effective to design several less expensive robots with different capabilities to work cooperatively rather than designing one complex robot having all required capabilities.
Such a single complex robot will also be less feasible from the maintenance and control points of view. Intuitively, it is appealing to use a group of simple, low-cost robots with simple control and decision-making capabilities to carry out an elaborate task in a cooperative manner, instead of using a complex and costly single robot. In fact, more than one robot may be needed to carry out a task whose execution is beyond the limits of a single robot. Even when a single robot is able to carry out the required task, the deployment of a team of robots can improve the performance of the overall system. However, the use of multiple robots also brings challenges and may make the system more intricate rather than simpler. Incorporation of an MRS raises important issues that do not arise in single-robot systems; for example: How would the robots interact with each other? How would the robots respond to failure, complete or partial, of an individual robot in the team? How would the robots resolve conflicts?

To resolve these issues, the present thesis proposes a control framework using an Artificial Immune System (AIS) for a cooperative multi-robot system. An AIS imitates the biological (human) immune system. The main goal of an immune system is to eliminate the foreign elements that invade the body. This common goal is handled by the individual components of the immune system in a distributed manner. The immune system also performs complex tasks such as coordination, communication, and cooperation between the individual components in order to eliminate the foreign invaders. Inspired by the natural immune system, in the present work a team of autonomous robots that make independent decisions is employed to coordinate, resolve conflicts, and, if required, cooperate with each other to accomplish a common goal.
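The stimulation and suppression mechanics of such an immune network, which later chapters formalize through Jerne's idiotypic network theory and Farmer's computational model, can be illustrated with a minimal sketch. This is a hedged illustration, not the thesis formulation: the matching matrices and all rate constants below are hypothetical stand-ins.

```python
# Minimal sketch of idiotypic-network dynamics (illustrative only).
# Each antibody "concentration" a_i grows when the antibody matches the
# antigen or is stimulated by other antibodies, and shrinks under
# suppression and a natural death rate.

def idiotypic_step(a, antigen_match, stim, supp, dt=0.1, death=0.05):
    """One Euler step of a_i' = a_i * (antigen_i + stimulation_i - suppression_i - death)."""
    n = len(a)
    nxt = []
    for i in range(n):
        stimulation = sum(stim[i][j] * a[j] for j in range(n))
        suppression = sum(supp[i][j] * a[j] for j in range(n))
        rate = antigen_match[i] + stimulation - suppression - death
        nxt.append(max(0.0, a[i] * (1.0 + dt * rate)))
    return nxt

# Two antibodies (e.g., two robots); antibody 0 matches the antigen (task)
# strongly, so its activation level comes to dominate over the iterations.
a = [0.5, 0.5]
antigen_match = [1.0, 0.1]
stim = [[0.0, 0.2], [0.2, 0.0]]   # mutual stimulation
supp = [[0.0, 0.3], [0.3, 0.0]]   # mutual suppression
for _ in range(50):
    a = idiotypic_step(a, antigen_match, stim, supp)
```

After the loop, the antibody best matched to the antigen dominates; this kind of competitive selection is what immune-based coordination exploits when choosing a robot for a task.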
1.1 Goals of the Research

In this thesis, a physical multi-robot cooperative system operating in a dynamic and unknown environment is developed. In order to complete this challenging task, several key approaches are established. The primary research goals of the thesis are to:

•  Develop a fully distributed system framework for multi-robot cooperation that accommodates behavior coordination, resource assignment, and a dynamic and unknown environment

•  In this framework, develop an intelligent control methodology for a group of autonomous mobile robots incorporating an artificial immune system

•  Study performance issues such as cooperative behavior, robustness, fault tolerance, speed, and accuracy of the developed methodology

•  Evaluate the performance of the developed system with respect to a benchmark application, using analysis, computer simulation, and experimentation with physical robots

•  Evaluate the performance of the developed approach by comparing it with another established methodology, the market (auction) based approach

1.2 Problem Definition

A primary objective of the present work is to develop a multi-robot cooperative system that is fully autonomous and fault tolerant. The multi-robot system in this research consists of a team of heterogeneous mobile robots that communicate, coordinate, and cooperate with each other to achieve a specified global goal while resolving conflicts and accommodating full and partial failures in the robots. The system will autonomously choose the appropriate number of required robots for the assigned task in an unknown, dynamic, and unpredictable environment. The feasibility of the scheme will be demonstrated by implementing the approach on a team of heterogeneous mobile robots that cooperatively transport multiple objects to a goal location.
A schematic representation of the developed system is shown in Figure 1.1.

[Figure 1.1: A schematic representation of the developed system. Mobile robots transport objects to a goal location among obstacles, communicating with a server through a wired/wireless router over a TCP/IP wireless network (IEEE 802.11g).]

In the task of object transportation, several autonomous robots move in a coordinated manner to transport each object to a desired location. This task is important for several reasons. It represents a task that needs the basic capabilities of a multi-robot system, and hence can be used as a benchmark problem to develop a test bed for evaluating the related technologies. Furthermore, object transportation has direct practical applications, as in shipping, storage, construction, inspection, and emergency and security operations. In the transportation process, robots may need to detect and avoid both static and moving obstacles in the path, which may appear randomly. The robots in such a system need to communicate and cooperate with each other to determine their own optimal transportation strategies, and the magnitudes and directions of the applied forces, to carry out motion steps (displacement and speed) for completing the common goal. Charge-Coupled Device (CCD) cameras, encoders, gyros, and laser and motion sensors may be needed to detect the orientations and positions of the objects and the robots, and also changes in the environment.

1.3 Design Requirements for a Cooperative Control Framework

There are many challenges in the development of a distributed system framework for multi-robot cooperation. The primary goal in the design and development of the planned multi-robot architecture is to make the robotic team distributed, flexible, robust, and fault tolerant. These issues are addressed next.
1.3.1 Flexibility

The term flexibility refers to the ability of the robots in a team to modify their actions appropriately in response to changes in the environment or in any entity of the system. In particular, the robot team should properly respond to robot failures. A cooperative team should be flexible and adapt to changes in the team composition. If a malfunctioning robot is taken out of the team and subsequently repaired, or if a new robot is included in the team, the cooperative multi-robot team should adjust to the changes and accept the new or repaired members as appropriate.

Typically the robots work in an environment that has a dynamic and random obstacle distribution. Obstacles may appear or disappear randomly. In addition, as there are multiple robots working in parallel within the same environment, while one robot makes a decision, some other robot may have moved, changing the environment. The dynamic environment makes it difficult for a robot to make a correct decision because there may be hidden environmental states that are unobservable by the robots. The robot team, therefore, should be flexible in its action selection, adapting to environmental changes that may eliminate the need for certain actions or activate additional actions that a new environmental state may require. Finally, the developed framework should be able to accommodate different tasks with ease. A great deal of design modification should not be required for utilizing the system framework for different tasks; e.g., cooperative transportation of multiple objects, hazardous waste cleanup, human rescue, and so on.

1.3.2 Robustness and Fault Tolerance

Robustness refers to the ability of a robotic team to gracefully degrade its performance in the presence of a malfunctioning teammate, and to maximize efficiency within the resources available to complete the task. Robots in a cooperative team must be able to detect and compensate for partial or full failure of a team member.
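One common way to meet such a failure-detection requirement, shown here purely as an illustrative sketch (the mechanisms actually used in this thesis are introduced in later sections), is a heartbeat: each robot periodically reports that it is alive, and a robot whose report becomes overdue is flagged as failed so its task can be re-allocated.

```python
import time

# Hypothetical heartbeat-based failure detector. Each robot records the
# time of its last status message; a robot whose message is overdue by
# more than `timeout` seconds is flagged as failed. All names and
# parameters here are illustrative, not from the thesis.

class HeartbeatMonitor:
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_seen = {}

    def beat(self, robot_id, now=None):
        """Record a status message from a robot."""
        self.last_seen[robot_id] = time.monotonic() if now is None else now

    def failed_robots(self, now=None):
        """Return the robots whose last message is older than the timeout."""
        now = time.monotonic() if now is None else now
        return [r for r, t in self.last_seen.items() if now - t > self.timeout]

monitor = HeartbeatMonitor(timeout=2.0)
monitor.beat("r1", now=0.0)
monitor.beat("r2", now=0.0)
monitor.beat("r1", now=5.0)                 # r1 keeps reporting; r2 has gone silent
overdue = monitor.failed_robots(now=5.0)    # r2 is overdue
```

A teammate (or a monitoring process) that observes `overdue` can then trigger re-allocation of the failed robot's task to a healthy robot.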
Robustness and fault tolerance are required to build cooperative robotic teams in order to minimize their susceptibility to individual robot failure. To achieve robustness, a multi-robot system must be sufficiently distributed, rather than centralized with one or two robots coordinating the entire team. In a fully distributed system, robots rely on local knowledge. Such a system is typically fast and flexible to change, but can produce suboptimal solutions. Nevertheless, a distributed approach results in a more robust multi-robot team, since the failure of one robot will not cripple the entire system. Failure can happen at any stage, at any time, and in any form, and a robot may or may not be able to communicate its failure. There must therefore be some means to re-allocate the task of the malfunctioning robot to a healthy robot. The teammates should respond to individual robot failures that occur during the mission in order to complete the task efficiently.

1.3.3 Local Sensing Capabilities of the Robots

Local sensing capability refers to the local view of a robot. The robots in the proposed framework have a local view only, and the global environment is generally unknown to them. There is no centralized knowledge base that gives information about the environment and its entities. As the robots cannot know the complete environment in advance, in part due to its unfamiliarity and in part due to its dynamic nature, they cannot employ traditional planning techniques for the decision making associated with task execution. In this research, it is assumed that each robot in the team can detect the desired object, obstacles, or any other entity in the environment within a limited detection radius.

1.4 Related Work

Multi-robot systems (MRS) are an important and rather active research area. MRS have been proposed in the last decade in a variety of settings and frameworks pursuing different research goals.
Significant work has been done in multi-robot systems and decision making by the robotics community. Some pertinent work is mentioned below.

1.4.1 Multi-Robot Cooperation

With recent developments in robot technologies, an emerging and active research area concerns multi-robot cooperation. In the last decade there has been considerable growth in this area, as can be gauged by the large number of related publications in recent years.

Mataric et al. (1995) developed a behavior-based architecture for a cooperative multi-robot system with explicit communication. They used two commercially available six-legged robots to push a box to a goal location. Simple communication protocols and reactive control strategies were used for cooperation between the robots. Miyata et al. (2002) proposed a centralized task assignment method for cooperative transport by multiple mobile robots in an unknown and static environment. A priority-based assignment algorithm and motion planning approaches were employed. They validated their approach through simulation and experiments by transporting an object to a goal location using multiple robots. Huntsberger et al. (2003) presented a three-layered control architecture called CAMPOUT for cooperative multiple mobile robots performing a tightly coordinated task. CAMPOUT was validated in a multi-robot cooperative transportation task in an autonomous tent deployment project on a planetary surface, and in a cliff robot project. In later work, Stroupe et al. (2005, 2006) presented a multi-robot construction task based on the CAMPOUT architecture. Wang et al. (2003) proposed a behavior-based cooperative strategy for object handling. They considered object dynamics and force distribution during transportation. The cooperative system comprised a managing robot and homogeneous behavior-based worker robots. They designed an extended subsumption architecture for autonomous moving and cooperative manipulation actions.
Wang and de Silva (2005) developed a multi-agent architecture for cooperation between multiple robots to push a box to a goal location. They integrated reinforcement learning with a genetic algorithm to learn the cooperative strategy for achieving a common goal. They further extended their approach and proposed a modified distributed Q-learning algorithm, termed sequential Q-learning with Kalman filtering (SQKF) (in press). They showed that, by arranging the robots to learn in a sequential manner and employing Kalman filtering to estimate the disturbance caused by other robots, the SQKF algorithm performed better than single-agent Q-learning and team Q-learning. Matsumoto et al. (1990) and Asama et al. (1992) presented an autonomous and distributed robot system called ACTRESS, addressing the issues of communication, planning, and task assignment among the robots. In their approach cooperation was not planned beforehand, but rather employed when required. They demonstrated their approach by implementing it on a team of mobile robots performing an object-pushing task. Chaimowicz et al. (2002) presented a methodology for execution of a cooperative task by coordinating multiple robots. The methodology was based on dynamic role assignment, in which the robots assume and exchange roles during cooperation. The roles were modeled based on the theory of hybrid automata. The methodology was demonstrated in object transportation by simulated mobile robots scattered in the environment. Kube and Bonabeau (2000) adopted an ant-based approach to the problem of cooperative object transportation by a group of robots. Inspired by the cooperative transportation of prey by a group of ants in an ant colony, a group of mobile robots was deployed to transport a box to a goal location. Trials were conducted with boxes of different sizes and goal locations. Stigmergic communication was achieved through the environment, by way of the object being manipulated.
1.4.2 Multi-Robot Coordination in Robot Soccer Teams

The literature on robotic soccer is a rich source of information on coordination strategies. Vail and Veloso (2003) developed a framework for role assignment and coordination for a group of homogeneous robots in the soccer domain. They used an artificial potential field to combine obstacle avoidance with coordination. In particular, they employed an auction-based approach in which a bidding function was used to determine the suitability of a particular robot for a specific role. Kose et al. (2005) studied the problem of coordination among the members of a robotic soccer team. They employed a market-driven approach to solve the problem and defined a cost function to assign roles to the robots. The market-based approach was further extended by combining it with reinforcement learning in order to allow the teams to learn new strategies. Iocchi et al. (2003) presented an approach for multi-robot coordination. They proposed a two-layered coordination module, which comprised a communication layer and a coordination protocol. The communication layer was responsible for data exchange among the robots. The coordination protocol, which was based on the concept of a utility function, performed negotiation among robots and the assignment of roles to the robots in the soccer team. Spaan and Groen (2002) presented an approach for coordinating a team of soccer-playing robots in the RoboCup middle-sized league. It was based on the idea of dynamically distributing roles among the team members, and added the notion of a global team strategy. Utility functions were used for estimating the suitability of a robot for a role. The utility function was a measure of the expected time taken by a robot to reach the ball and of the position of the robot with respect to the role.
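The utility-function idea common to these soccer approaches can be made concrete with a small sketch. Here the utility is simply the negative distance of a robot to a role's reference position (a hypothetical stand-in; each cited paper defines its own measure, e.g., expected time to reach the ball), and roles are assigned greedily, best pairing first.

```python
# Hedged sketch of utility-based role assignment: every robot computes a
# utility for every role, and roles are assigned greedily from the
# highest-utility pairing down. The utility used here (negative Euclidean
# distance to the role's reference point) is illustrative only.

def assign_roles(robots, roles):
    """robots: {name: (x, y)}; roles: {role: (x, y) reference point}."""
    pairs = []
    for name, (rx, ry) in robots.items():
        for role, (px, py) in roles.items():
            utility = -((rx - px) ** 2 + (ry - py) ** 2) ** 0.5
            pairs.append((utility, name, role))
    pairs.sort(reverse=True)                      # best pairings first
    assignment, used_robots, used_roles = {}, set(), set()
    for utility, name, role in pairs:
        if name not in used_robots and role not in used_roles:
            assignment[name] = role
            used_robots.add(name)
            used_roles.add(role)
    return assignment

robots = {"r1": (0.0, 0.0), "r2": (9.0, 0.0)}
roles = {"attacker": (10.0, 0.0), "defender": (1.0, 0.0)}
assignment = assign_roles(robots, roles)   # r1 -> defender, r2 -> attacker
```

The greedy pass keeps each robot and each role assigned at most once, which is the essence of dynamically distributing roles among teammates.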
1.4.3 Fault Tolerance and Multi-Robot Cooperation

Robustness and fault tolerance are critical for any multi-robot system, particularly one operating in a dynamic and unknown environment. Robot teams require some level of robustness to individual failures, as the malfunctioning of a robot will have an adverse effect on the performance of the entire system. Parker (1998a) proposed a behavior-based software architecture called ALLIANCE, which facilitated fault-tolerant cooperative control of heterogeneous mobile robots performing a mission composed of loosely coupled subtasks that may have ordering dependencies. ALLIANCE incorporated mathematically modeled motivations within each robot to achieve adaptive action selection. The motivations were designed to allow the robotic team members to perform tasks only as long as they demonstrated their ability to have the desired effect on the world. The impatience motivation enabled a robot to handle situations in which other robots failed to perform a given task. The acquiescence motivation enabled a robot to handle situations in which it itself failed to properly perform the task. Gerkey and Mataric (2002a, 2002b) presented an auction-based fault-tolerant dynamic task allocation method for a group of robots. In this method an auctioneer robot auctioned a task to the group of robots. The robot that won the task, based on some metric, was awarded a time-limited contract to execute the task. The time-limited contract provided a built-in timeout that could trigger fault detection and recovery. If the auctioneer robot failed to receive an acknowledgement after sending a renewal, it could assume that the robot that previously executed the task had failed, and would reassign the task. Similarly, the task could be reassigned if the auctioneer found that insufficient progress had been made. In either case, the previous winner would terminate the task execution after its contract had expired without renewal. Christensen et al.
(2009) presented a distributed approach that allowed the robots in a swarm robotic system to detect failures among themselves. They adopted a firefly-inspired approach to failure detection. Specifically, they presented a synchronized flashing protocol as an exogenous fault detection tool. Every robot was equipped with a flashing light-emitting diode (LED). The LED ceased to flash in the event of a robot breakdown, and the absence of flashing was a sign of robot failure that could be detected by the operational robots.

1.4.4 Market-Based Multi-Robot Systems

Researchers have recently applied the principles of economic markets to multi-robot systems. In market-based multi-robot systems, robots are designed as self-interested agents that operate in a virtual economy. Both the tasks that must be completed and the available resources are commodities of measurable worth that can be traded. Gerkey and Mataric (2002a, 2002b) developed an auction-based approach for multi-robot coordination. They implemented an auction mechanism in the MURDOCH architecture using the first-price sealed-bid approach for auctioning a task. They validated their approach using a task of cooperative multi-robot box pushing. In their experiment, two robots acted as pushers and the third performed a watcher role. The pushers could see the box, and the watcher could see the goal. In addition, the watcher, while observing the goal, could accurately perceive the angular error of the box orientation with respect to the path from the box to the goal. The aim was to rotate the box until the angular error was zero. In the pusher-watcher box transportation process, the watcher initially auctioned off the left and right push-box tasks with proper velocities and let the robots push until the box orientation changed sufficiently. At this point, the current contracts were allowed to expire and new ones were formed by auctioning the tasks again, until the box was transported to the goal location.
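A first-price sealed-bid auction with a time-limited contract, of the kind described above, can be sketched minimally as follows. The bid metric (distance to the task) and the contract length are hypothetical stand-ins; the actual MURDOCH mechanism involves richer messaging and renewal bookkeeping.

```python
# Hedged sketch of a first-price sealed-bid task auction with a
# time-limited contract, in the spirit of the auction mechanism above.
# Bid metric and contract duration are illustrative, not from the thesis.

def run_auction(task, bidders, contract_s=5.0):
    """bidders: {robot: (x, y) position}; the lowest-cost sealed bid wins."""
    bids = {}
    tx, ty = task
    for robot, (x, y) in bidders.items():
        bids[robot] = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5  # sealed bid = cost
    winner = min(bids, key=bids.get)
    # The winner receives a time-limited contract; if no renewal arrives
    # before it expires, the auctioneer assumes failure and re-auctions.
    return {"winner": winner, "bid": bids[winner], "contract_s": contract_s}

contract = run_auction(task=(1.0, 0.0),
                       bidders={"pusher1": (0.0, 0.0), "pusher2": (6.0, 4.0)})
```

The time-limited contract is what gives this scheme its fault tolerance: a crashed winner simply fails to renew, and the task flows back into the market.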
Zlot and Stentz (2006) developed a solution to multi-robot coordination for complex tasks that extended market-based approaches by generalizing the task descriptions into task trees, which allowed tasks to be traded in a market setting at variable levels of abstraction. In order to incorporate these task structures into a market mechanism, efficient bidding and auction-clearing algorithms were developed. As an example scenario, they focused on an area reconnaissance problem, which required sensor coverage by a team of robots over a set of defined areas of interest. The advantages of explicitly modeling complex tasks during the allocation process were demonstrated by comparing their approach with other existing task allocation algorithms in this application domain. In simulation they compared the solution quality and the computation times of these different approaches. Kalra et al. (2005) presented Hoplites, a market-based coordination framework that explicitly planned tight coordination in multi-robot teams. Hoplites consisted of two coordination mechanisms: passive coordination quickly produced locally developed solutions, while active coordination produced complex team solutions via negotiation between teammates. Robots used the market to efficiently vet candidate solutions and
Researchers have proposed various solutions with biological inspiration for multi-robot cooperation. More recently, artificial immune system (AIS) has emerged as a new soft computing paradigm. In the past decade, researchers have exploited the properties of the vertebrate immune system in order to devise solutions to the tribulations in the field of robotics. The principles of an immune system are increasingly finding applications in different research areas of robotics; for example, obstacle avoidance (Yen-Nien L, 2007), navigation (Whitbrook et al, 2007), and multi-robot cooperation (Gao and Wei, 2005). Gao and Wei (2005) proposed an algorithm for autonomous multi-robot cooperation based on a probabilistic immune agent model. The model constituted a number of immune agents (Ag). The environment was considered an antigen, and Ag as an antibody. Each Ag had the ability to identify an antigen, and the Ag declared a stimulus as defined by the stimulus function once it found an antigen within the sensory neighborhood. By activating and suppressing each other within the immune network, the 14  Ags selected one or more colleagues to solve the problem at hand. The model and the associated algorithm were applied to a multi-robot box pushing task. It was assumed that the robots knew nothing about the task at the start, and they were required to find and cooperatively push the dispersed boxes to a goal location. In the application, the boxes were considered as antigens and the robots as antibodies. The cooperation was not realized beforehand, and rather determined by the system if the help of another robot was required. Lau and Wong (2006) presented a multi-agent control framework based on an artificial immune system (AIS). Their distributed approach used the mechanism of biological immunity. A mathematically modeled control frame work was responsible for determining the behavior of the AIS agents in response to the changing environment. 
Once the AIS agents recognized a task, they rearranged or recombined their fundamental capabilities to tackle it, and recruited another agent if help was required. Cooperation was not assumed a priori. If a robot was unable to complete the task alone, it sought the help of an appropriate robot. To achieve this, coordination among robots was required before and during cooperation. Simulation studies were conducted to demonstrate the effectiveness of the proposed control framework. In the simulation, a group of AIS agents was deployed in a warehouse. Tasks assigned in the warehouse were predefined with different complexity levels, such as counting and building up goods in the warehouse. Gao and Luo (2008) proposed an Artificial Immune Network (AIN) model for a multi-robot system. In the AIN model, the system was composed of many robots and tasks. The robots were simulated as B-cells, and the tasks as antigens. Based on the AIN, static and dynamic task allocation algorithms were developed, utilizing interaction among antibodies and integrating the cooperative idea into the antigen stimulus. They validated their approach by simulating autonomous emergency handling. Alarms, whose difficulty levels were unknown, were generated during the experiments. The task of the robot team was to dynamically allocate robots to correct the problems indicated by the alarms. As the robots had no prior knowledge of the difficulty levels of the alarms, and since some alarms needed more than one robot for corrective action, autonomous cooperation was considered in the dynamic task allocation. Khan and de Silva (2008, 2009a, 2009b) proposed an immune-based framework for multi-robot systems, which was fully autonomous and fault tolerant. A team of autonomous robots was employed that made independent decisions, coordinated, resolved conflicts, and if required cooperated with each other to accomplish a common goal.
The communication and coordination strategies were derived from Jerne's idiotypic network theory. Furthermore, the system autonomously decided on the number of robots required for a cooperative task. The capabilities and different variables of the robots were arranged in a chain-like configuration, as in an antibody structure of the human immune system. The framework accommodated robot failures (full or partial) during both the coordination and the cooperation processes, and the robot team could respond to individual robot failures that might occur during the mission. A binding affinity function was defined to resolve possible conflicts among robots and to ensure that the most suitable robot in the fleet goes to help another robot in accomplishing the task cooperatively. The feasibility of the work was demonstrated by implementing the approach on a team of mobile robots that transported an object to a goal location.

1.4.6 Object Pose Estimation

Pose estimation in real time is a fundamental requirement in multi-robot cooperation for object transportation. Though there has been substantial growth of research activity in the area of robot pose estimation, the pose estimation of objects has not received the needed attention. Simon et al. (1994) demonstrated an approach for performing full 3-D pose estimation of arbitrarily shaped rigid objects at speeds of up to 10 Hz. The approach utilized a high-speed VLSI (very large-scale integration) range sensor capable of acquiring 32x32-cell range images in 1 millisecond or less, which was used to acquire data on the object. Pose estimation was then performed by fitting the data to a triangular mesh model using an enhanced implementation of the iterative closest point algorithm. Their experimental setup consisted of a high-speed VLSI range sensor having two primary components: the sensor head and the light stripe generator. The object used in pose estimation was a small bust of the Greek goddess Venus.
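Several of the methods reviewed in this section chain coordinate-frame transformations. In the notation of the thesis nomenclature, T′ maps the object frame to the robot frame, T′′ maps the robot frame to the global frame, and their product T yields the object pose O′′ in the global frame. A minimal 2-D sketch with made-up poses (the thesis works with sensor-derived poses, and in general the transforms are 3-D):

```python
import math

# Hedged 2-D illustration of the transformation chain: compose the
# robot->global transform (T'') with the object->robot transform (T')
# to obtain the object pose in the global frame (O''). Homogeneous 3x3
# matrices represent planar poses (x, y, theta). The numbers are made up.

def pose_to_matrix(x, y, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matrix_to_pose(T):
    return (T[0][2], T[1][2], math.atan2(T[1][0], T[0][0]))

# Robot at (2, 1) facing the global +y axis; object 1 m ahead of the robot.
T_rg = pose_to_matrix(2.0, 1.0, math.pi / 2)   # robot -> global (T'')
T_or = pose_to_matrix(1.0, 0.0, 0.0)           # object -> robot (T')
x, y, theta = matrix_to_pose(matmul3(T_rg, T_or))  # object in global frame (O'')
```

Here the object lands at global position (2, 2) with orientation π/2, since "1 m ahead" of a robot facing +y moves the object one unit along the global y-axis.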
Lang et al. (2008) presented a multiple-sensor-based method for robot localization and box pose estimation. A CCD camera mounted on a robot was used to find a box in its environment. A laser range finder mounted on the robot was then activated to measure the distance between the laser source and the object. Finally, a homogeneous matrix transformation was applied to represent the global poses of the robot and the box. The approach was validated using the Microsoft Robotics Studio simulation environment. Ekvall et al. (2005) presented an approach for object recognition and pose estimation based on color cooccurrence histograms and geometric modeling. They addressed the problems of robust recognition of objects in natural scenes, estimation of partial pose using an appearance-based approach, and 6-DOF model-based pose estimation and tracking using geometric models. The authors demonstrated that color cooccurrence histograms are computationally efficient for representing the appearance of an object in the context of object recognition and partial pose estimation. Kay and Lee (1991) used images from a stereo camera mounted on the end-effector of a robotic manipulator to estimate the pose of an object with respect to the base frame of the robot. The orientation of the object was estimated by using the least-squares method. A Kalman filter was used to smooth the estimation error. Park et al. (2006) presented a method for global localization in an indoor environment, which employed object recognition using a stereo camera. Their method first estimated a coarse pose and then a refined pose. The coarse pose was estimated by means of object recognition and least-squares fitting through singular value decomposition, whereas the refined pose was estimated by using a particle filtering algorithm with omnidirectional metric information. Tomono (2005) presented a framework for building a 3-D environment model.
He employed a mobile robot equipped with a laser range finder and a monocular camera. The information from the laser range finder and the camera was integrated to build a 3-D object map. There are several useful surveys which give insight into the multi-robot domain. Cao et al. (1997) and Arai et al. (2002) identified seven principal areas of multi-robot systems. Dudek et al. (2002) proposed a taxonomy of multi-robot systems. Parker (2000) studied the existing multi-robot architectures and pointed out several challenges in typical multi-robot tasks. Stone and Veloso (2000) discussed a taxonomy for multi-agent systems from the machine learning perspective. A recent survey by Farinelli et al. (2004) discusses the taxonomy, tasks, and domains of multi-robot systems. They classified multi-robot systems as unaware systems, aware but not coordinated systems, weakly coordinated systems, and strongly coordinated systems with strong or weak centralization. Diaz et al. (2006) presented a critical survey of market-based multi-robot coordination. In their survey, market-based mechanisms were introduced, followed by an extensive review of approaches to market-based multi-robot coordination. The approaches were categorized and analyzed across several relevant dimensions: planning, solution quality, scalability, dynamic events and environments, and heterogeneity.

1.5 Contributions and Organization of the Thesis

In the present thesis, a novel control framework for multi-robot cooperation is developed which is able to operate in a robust and effective manner in a dynamic and unknown environment. There are five main contributions in the thesis:

•  A fully distributed artificial immune system (AIS) based control framework is developed for heterogeneous multi-robot cooperation, which is fully autonomous and fault tolerant. The framework allows the multi-robot system to recruit robots when cooperation is required and to replace any malfunctioning robots with healthy ones.
This framework enables a multi-robot system to adapt to an unknown environment, promote cooperation among robots, and complete a desired task in a robust and effective manner.

•  New communication and coordination strategies among robots are developed based on Jerne’s idiotypic theory and a modified version of Farmer’s computational model.

•  A methodology for dynamic task allocation and assignment depending on robot capabilities is developed using an artificial immune system.

•  A method for object pose estimation is developed with application in cooperative object transportation by mobile robots. The developed methodology is validated using physical experiments.

•  The performance of the developed AIS-based control framework for multi-robot cooperation is evaluated by comparing it with the established market-based approach. For direct comparison, the two approaches are analyzed using the same simulation platform.

•  The feasibility of the developed techniques is demonstrated by implementing the approaches on a physical team of mobile robots performing proof-of-concept object transportation experiments. A physical multi-robot transportation project is developed in the laboratory. The multi-robot system operates in a dynamic and unknown real-world environment and shows robustness, effectiveness, flexibility, and overall good performance.

The organization of the thesis is as follows. Chapter 1 presents the background information on multi-robot systems, sets the research objectives of the thesis, defines the research problem, and provides a literature survey to establish the existing work related to the subject. Chapter 2 gives a brief overview of biological and artificial immune systems and explains the model of idiotypic network theory.
Chapter 3 develops a system framework for cooperative multi-robot systems based on an artificial immune system, together with a computational method based on the modified Farmer’s model of idiotypic network theory for simulating the stimulation and suppression phenomena. The developed system framework is implemented on a practical project in multi-robot transportation and assessed through experimentation. The physical experiments and simulation results are analyzed to demonstrate the feasibility and the effectiveness of the developed approaches. Chapter 4 develops an algorithm for multi-robot cooperation using a market-based approach and provides a comparative evaluation of the AIS- and auction-based approaches for multi-robot cooperation. Chapter 5 proposes and develops a method for object pose estimation for application in cooperative object transportation by mobile robots. Chapter 6 summarizes the primary contributions of the thesis and indicates some relevant issues for future research.

Chapter 2
Modeling the Immune System

2.1 Biological Immune System

The immune system (IS) is a network of cells, tissues, and organs that work together to defend the body against pathogenic organisms, toxins, and other foreign molecules. These are primarily microbes—tiny infection-causing organisms such as bacteria, viruses, parasites, and fungi. The IS is also able to combat dysfunction of the body’s own cells; for example, cancerous cells and tumors. The human body provides an ideal environment for many pathogens that try to break in, and it is the immune system’s responsibility to keep them out or destroy them. The main function of the human IS is to recognize all types of cells within the body and categorize them as self or non-self. Having pattern recognition capabilities, the IS does not attack or destroy the body’s own healthy cells, which carry a distinctive “self” marker. Anything that can trigger the immune response is called an antigen.
An antigen can be non-self, such as a pathogen, or self, such as other immune cells and the body’s own healthy cells. However, the IS reacts differently to non-self cells than to self cells.

2.1.1 Anatomy of the Immune System

The fundamental constituents of the immune system (IS), which are positioned throughout the body, are called lymphocytes—small white blood cells that are the key players in the IS. There are two main types of lymphocytes: B lymphocytes (or B-cells) and T lymphocytes (or T-cells). The B lymphocytes mature in the bone marrow, whereas the T lymphocytes mature in the thymus. In the context of the present research, only the B lymphocytes will be considered. The B-cells work by secreting substances called antibodies in response to threats such as bacteria, viruses, and tumor cells. Antibodies ambush antigens circulating in the bloodstream. A B-cell is capable of producing one specific antibody that can recognize a particular type of antigen.

2.1.2 How the Immune System Works

On the surface of a B-cell there is a Y-shaped receptor called an antibody, which is responsible for recognizing antigens. As shown in Figure 2.1, antibodies are proteins in the immune system which consist of two heavy chains and two light chains joined to form a “Y”-shaped molecular structure. The light and the heavy chains are divided into two regions: the variable region and the constant region. The variable region consists of the upper part of a light or a heavy chain and serves as an antigen binding site that is antigenically distinct, called the paratope. The constant region forms the rest of the structure of the light and heavy chains; it determines the mechanism that is required to destroy an antigen. As shown in Figure 2.1, the binding region on an antigen is called the epitope, and it determines the identity of the antigen. When an antigen invades the body, only a few of these immune cells can recognize the invader.
Once an antigen invades the human body, the B-cells are stimulated, and an antibody whose paratope is complementary to the epitope will attach to the antigen to neutralize or eliminate it. However, as shown in Figure 2.2, if a single antibody is unable to destroy the antigen, it will coordinate with other antibodies to cooperatively eliminate the antigen. It follows that the human immune system works cooperatively, in a coordinated manner, to achieve its task of eliminating antigens.

Figure 2.1: The structure of an antibody.

Figure 2.2: Cooperation of antibodies to eliminate an antigen.

The antibody receptor recognizes an antigen with a certain affinity, and the binding between the paratope and the epitope takes place with strength proportional to this affinity. A paratope may not completely complement an epitope, resulting in a weak binding affinity. The greater the binding affinity, the higher the possibility of killing the antigen. The recognition of an antigen stimulates the proliferation and differentiation of the immune cells that produce matching clones or antibodies. This process, called clonal expansion, generates a large population of antibody-producing cells that are specific to the antigen. These clones get priority when exposed to similar antigens, which leads to a rapid immune response. The process of amplifying only those lymphocytes that produce a useful antibody type is called clonal selection. The diversity of the immune system is maintained through the replacement of roughly five percent of the B-cells every day by new cells generated in the bone marrow. In addition to the production of new cells, additional diversity is generated during the reproduction of the B-cells when they are stimulated on recognizing an epitope. The stimulated B-cells go through a high mutation rate (Farmer et al., 1986). Through mutation, weakly matching B-cells may produce antibodies with higher affinities for the stimulating antigen.
2.1.3 Idiotypic Network Theory

N. K. Jerne formulated the idiotypic network theory (1974), which suggests that the immune system has a dynamic behavior, and that the immune cells and molecules recognize each other even in the absence of external stimuli. The theory is based on the fact that, in addition to paratopes (for epitope recognition), antibodies also possess a set of epitopes and consequently are capable of being recognized by other antibodies even in the absence of antigens. The epitopes that are unique to an antibody type are termed idiotopes. Under the clonal selection theory all immune responses are triggered by the presence of antigens, but under the network theory the antibodies can be internally stimulated. Paratopes and epitopes are complementary and are analogous to keys and locks. Paratopes can be viewed as master keys that may open a set of locks (epitopes), with some locks being able to be opened by more than one key (paratope) (Farmer et al., 1986). Figure 2.3 illustrates the immune network theory. It is seen that the antibody Ab2 on the B-cell recognizes the non-self antigen Ag, whereas the same Ab2 also recognizes the idiotope Id1 of Ab1 on B-cell 1. Thus Ab1 is said to be the internal image of Ag; more precisely, Id1 is the internal image of Ag. This means that Ab2 can recognize both Ag and Ab1. When an antibody’s idiotope is recognized by the paratopes of other antibodies, it is suppressed. Conversely, when an antibody’s paratope recognizes the idiotopes of other antibodies or the epitopes of antigens, it is stimulated. The recognition of an antigen by a cell receptor results in network activation and cell proliferation. The network theory implies that B-cells are not isolated; rather, they communicate with each other via dynamic network interaction. The network is self-regulating and continuously adapts itself, maintaining a steady state that reflects the global results of interaction with the environment.
The network theory also states that suppression must be overcome in order to elicit an immune response. In other words, the system is governed by suppressive forces but is open to environmental influences (Jerne, 1974). The suppression models the immune system’s mechanism for removing useless antibodies (Vargas et al., 2003) and maintaining diversity.

Figure 2.3: Jerne’s idiotypic network.

2.2 Artificial Immune System

The establishment of the field of artificial immune systems (AIS) has been slow and difficult for a number of reasons. First, the number of people active in the research area is still small, although it has been increasing in the past few years. Secondly, researchers have found it difficult to differentiate between AIS and theoretical immunology. Thirdly, the application domain of artificial immune systems is rather wide and extensive. Finally, a complete and comprehensive coverage of the field of AIS is not found in the published literature.

There have been a limited number of attempts to define the field of artificial immune systems. The present thesis adopts the concept that an artificial immune system is a computational system that is inspired by theoretical immunology and uses observed immune functions, principles, and models to solve practical problems. This definition covers some of the aspects mentioned before by drawing a fine line between AIS and theoretical immunology with respect to applicability. While work on theoretical immunology is usually aimed at modeling and providing a better understanding of immune functioning and laboratory experiments, work on AIS is applied to solve problems in computing, engineering, and other research areas as well. AIS is a novel soft computing paradigm (de Castro & Timmis, 2003). There exist several mathematical models to explain immunological phenomena (Farmer et al., 1986; Deaton et al., 1997).
There are also computer models (DeBoer et al., 1992a; DeBoer et al., 1992b) that simulate various components of the immune system. These models include differential equation models (Farmer et al., 1986), stochastic equation models (Bersini and Calenbuhr, 1996), cellular automata models (Ballet et al., 1997; Dasgupta and Forrest, 1995), shape-space models (De Monvel and Martin, 1995), and so on. The natural immune system is also a significant source of inspiration for developing intelligent problem-solving methodologies, but there has not been much research in this direction (Dasgupta, 1999). Models based on immune system principles have increasingly found applications in fields of engineering and science such as computer security, fault diagnosis, pattern recognition, robotics, and so on.

2.2.1 Modeling of the Idiotypic Immune Network

Farmer et al. (1986) presented a method for modeling the idiotypic immune network in computer simulation. The network theory was modeled as a differential equation simulating the rate of change of concentration of an antibody with respect to stimulation, suppression, and the natural death rate. An antibody is represented as a pair of binary strings (p, e), with p denoting the paratope string and e the epitope string. The epitope of an antibody is essentially an idiotope, as mentioned in the network theory. The degree of match between the binary strings of a paratope and an epitope mimics the binding affinity between a natural paratope and an epitope, and the logical XOR operator is used to test the bits of the strings. Since p and e need not be exactly complementary in order to react with each other, an exact match between the paratope and the epitope is not required. Binary strings of a given length l representing the paratope and the epitope may react in more than one way; strings are also allowed to match in any possible alignment.
However, a threshold value s has to be defined, below which the two antibodies will not react at all. For example, if s is set to 5 and there are 5 complementary bits for a given alignment, then the score will be 1; if there are 6, the score will be 2, and so on. The strength of a possible reaction between the epitope and the paratope for a given alignment is given as

G = 1 + \delta  (2.1)

where \delta is the number of complementary bits in excess of the threshold value s. Since matches may occur at more than one alignment, the measure of strength over all possible alignments between antibody i and antibody j is given by

m_{ij} = \sum_{k} G_k  (2.2)

where the sum runs over all alignments k whose match count reaches the threshold. When antibodies interact with each other, the degree of matching governs the extent to which one antibody proliferates and others are suppressed. Consider a system with N antibody types, with concentrations {x_1, x_2, ..., x_N}, and n antigen types, with concentrations {y_1, ..., y_n}. The differential equation governing the rate of change of concentration is given by

\dot{x}_i = c \left[ \sum_{j=1}^{N} m_{ji} x_i x_j - k_1 \sum_{j=1}^{N} m_{ij} x_i x_j + \sum_{j=1}^{n} m_{ji} x_i y_j \right] - k_2 x_i  (2.3)

Here the first term,

\sum_{j=1}^{N} m_{ji} x_i x_j  (2.4)

represents the stimulation of the paratope of an antibody of type i by the epitope of an antibody of type j. The second term,

\sum_{j=1}^{N} m_{ij} x_i x_j  (2.5)

models the suppression of an antibody of type i when its epitope is recognized by a paratope of type j. The third term,

\sum_{j=1}^{n} m_{ji} x_i y_j  (2.6)

represents the stimulation of the antibody in response to all antigens. The final term,

k_2 x_i  (2.7)

models the tendency of the antibodies to die in the absence of any interaction, at a rate determined by k_2. In equation (2.3), c is a rate constant and the constant k_1 represents a possible inequality between stimulation and suppression. An essential element of this model is that the list of antigen and antibody types is dynamic, changing as new types are added or removed. Thus n and N in equation (2.3) will change with time.
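As a concrete illustration of this model, the following Python sketch computes the matching strength between two binary strings over all alignments (equations 2.1–2.2) and performs one Euler integration step of equation (2.3). This is a minimal reading of Farmer’s model, not the thesis implementation; the parameter values and array layout are illustrative assumptions.

```python
import numpy as np

def match_strength(p, e, s):
    """Total reaction strength between paratope p and epitope e
    (lists of 0/1 bits), summed over all alignments (eq. 2.2).
    An alignment with d complementary bits contributes
    G = 1 + (d - s) whenever d >= s (eq. 2.1), so d == s scores 1."""
    n = len(p)
    total = 0
    for shift in range(-(n - 1), n):
        d = sum(p[i] ^ e[i + shift]          # XOR marks complementary bits
                for i in range(n) if 0 <= i + shift < n)
        if d >= s:
            total += 1 + (d - s)
    return total

def euler_step(x, y, m, mag, c=0.1, k1=0.5, k2=0.05, dt=0.01):
    """One Euler step of eq. (2.3). m[i, j]: match of antibody i's
    paratope against antibody j's epitope; mag[i, j]: match of
    antibody i's paratope against antigen j's epitope."""
    stim = x * (m @ x)       # paratope of i recognizes epitopes of others
    supp = x * (m.T @ x)     # epitope of i recognized by other paratopes
    drive = x * (mag @ y)    # stimulation by antigens
    dx = c * (stim - k1 * supp + drive) - k2 * x
    return np.clip(x + dt * dx, 0.0, None)   # concentrations stay >= 0
```

With no interactions at all, each concentration simply decays at rate k_2, matching the final term of equation (2.3).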
2.2.2 Properties of the Artificial Immune System

The AIS is based on metaphors of the biological immune system. There exist mathematical and computer models that can explain immunological phenomena and simulate various components of the biological immune system. The biological immune system is a significant source of inspiration for developing intelligent methodologies for problem solving, as in multi-robot cooperation. With reference to the biological immune system, the AIS has the following properties:

•  Distributed system: As there is no centralized control, the components of the immune system interact locally to provide a global solution (protection); hence, the immune system is autonomous.

•  Robustness: As there is no single point of control, the death of B-cells or antibodies will not cripple the overall system.

•  Adaptability: The immune system can learn to recognize and respond to new antigens, and also retain the memory of those antigens in order to provide a better future response.

•  Dynamic network interaction: The B-cells in the immune system are not isolated but communicate with each other via collective dynamic interaction.

•  Pattern recognition: The immune system can recognize and classify different patterns and generate different responses. Self-non-self discrimination is one of the main tasks the immune system accomplishes during the process of recognition.

•  Feature extraction: Antigen-presenting cells interpret the antigenic context and extract features by processing and presenting antigenic peptides on their surfaces.

•  Diversity: The immune system uses combinatorics to generate a diverse set of lymphocyte receptors, to ensure that at least some lymphocytes can attach to any given antigen.

Other related features, such as memory management, learning, self-tolerance, and self-organization, also perform important functions in the immune response.
All these remarkable information processing capabilities of the natural immune system provide several important analogies in the field of computation.

Chapter 3
Application of Artificial Immune System in Multi-Robot Cooperation

3.1 System Development

In this thesis, a system framework based on an artificial immune system (AIS) is developed to address the challenges in a cooperative multi-robot system. Inspired by the biological immune system, in the present work a team of autonomous robots make independent decisions, coordinate, resolve conflicts, and, if required, cooperate with each other to accomplish a common goal. The communication and coordination strategies are derived from Jerne’s idiotypic network theory (Jerne, 1974). The system autonomously decides the number of robots required for a particular task. The capabilities and different variables of the robots are arranged in a chain-like configuration, analogous to an antibody structure in the human immune system. The developed framework accommodates robot failures (full or partial) during the processes of coordination and cooperation. The robot team can also respond to failures in individual robots that may occur during the mission, in order to complete the task efficiently. A binding affinity function is defined to resolve possible conflicts among the robots, and to ensure that the most suitable robots in the fleet are chosen to cooperate in carrying out the required task.

3.1.1 Assumptions

A main objective of the present thesis is to develop a cooperation strategy among multiple robots, based on AIS, for carrying out a desired task. In order to properly characterize the present work, the following assumptions are made:

•  Robots are heterogeneous. They have different capabilities and sensors. They may or may not be able to complete the task independently.

•  A robot can fail fully or partially, and it may or may not be able to communicate its failure to its teammates.
Any of the subsystems of the robot, such as sensors, communication devices, and effectors, or even the entire robot, may fail.

•  No robot has a global view of the system and the environment.

•  There is no controlling robot (leader) available to monitor the progress or state of other teammates or the environment. No central knowledge is available. The system is completely decentralized and distributed.

•  The environment is dynamic and unknown.

3.1.2 AIS and Multi-Robot Cooperation

The structure of an antibody and the idiotypic network model form the theoretical foundation of the present research. In particular, a robot is analogous to an antibody, a robotic task is analogous to an antigen, and completing a task is analogous to eliminating an antigen. An antigen can be eliminated by antibodies either alone or cooperatively. Furthermore, robotic capabilities are analogous to the paratope of an antibody, and an epitope represents the partially known properties of the task that is to be completed. Lastly, the binding affinity is a function which determines the most suitable robot in the fleet that is capable of assisting in the specific task. The binding affinity function is further explained in section 3.2.3. In the sequel, the analogies mentioned above will be used.

As mentioned in section 2.1.2 of Chapter 2, an antibody consists of two light chains and two heavy chains. The upper part of the light and heavy chains forms a variable region which serves as the antigen binding site, called the paratope. Note that a robot is analogous to an antibody, as shown in Figure 3.1.

Figure 3.1: Antibody (robot) light and heavy chains.

The light chain L1 represents the state of the robot at a particular time. This state may be “explore,” “stimulate,” “failed,” and so on. The heavy chain H1 represents the capabilities of a robot, such as “mobility,” “push,” “pick,” “arm,” “gripper,” “payload capabilities,” and so on.
The light chain L2 contains the sensory data and the value of the success rate. The heavy chain H2 contains communication data. For example, since the robots in our laboratory system use TCP/IP communication, H2 contains the IP addresses of the other robots and the log of messages sent to each particular IP.

3.2 Overall Multi-Robot Cooperative System

As shown in Figure 3.2, in the beginning the antibodies explore the environment, searching for antigens (tasks). Once an antibody locates an antigen, it attempts to eliminate it (i.e., complete the task) by itself, provided that it has the required capabilities. Otherwise, it notifies other antibodies about the presence of the antigen and starts searching for another antigen. If the antibody has the capabilities but still cannot handle the antigen alone, it coordinates with other antibodies to eliminate the antigen cooperatively. The process of interaction between an antibody and an antigen is shown in Figure 3.3. In particular, Figure 3.3(a) shows the epitope of the antigen, which represents the partially known capabilities required to eliminate it. Figure 3.3(b) shows the light chain L1 and the heavy chain H1, which form the paratope of the antibody: the state and the capabilities of the robot. When an antibody finds an antigen, it changes the status of the light chain L1 to “busy” and compares the rest of the paratope, the heavy chain H1, with the epitope of the antigen. This comparison is done in all alignments, to make sure that the antibody meets the capabilities required to handle the antigen. If H1 of the paratope matches the epitope, the antibody tries to eliminate the antigen alone. If it is unable to do so, the antibody coordinates with other antibodies for help, by broadcasting a help signal, for cooperatively eliminating the antigen.
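The four-chain robot representation described above (L1 state, H1 capabilities, L2 sensory data and success rate, H2 communication data) can be captured in a small record type. The sketch below is illustrative only; the field contents, capability names, and sensor fields are hypothetical examples, not the actual data layout used on the laboratory robots.

```python
from dataclasses import dataclass, field

@dataclass
class RobotAntibody:
    """Robot-as-antibody, with the four chains of Figure 3.1."""
    L1: str                 # state: "explore", "busy", "failed", ...
    H1: set                 # capabilities: "mobility", "push", ...
    L2: dict = field(default_factory=dict)  # sensory data, success rate
    H2: dict = field(default_factory=dict)  # peer IPs and message logs

# Hypothetical instance for illustration:
robot = RobotAntibody(
    L1="explore",
    H1={"mobility", "push", "camera"},
    L2={"success_rate": 0, "sonar": [0] * 8},
    H2={"peers": ["192.168.0.2"], "log": []},
)
```

Keeping the communication data (H2) separate from the capability chain (H1) mirrors the antibody analogy: matching uses H1 only, while coordination messages update H2.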
Figure 3.2: Control framework of AIS-based multi-robot cooperation.

Figure 3.3: Epitope and paratope representation. (a) Epitope of the object antigen: mobility, bumper, push. (b) Paratope of the antibody; light chain L1: busy; heavy chain H1: mobility, payload capability, bumper, pick, push, camera.

3.2.1 Coordination Among Antibodies Before Elimination

If an antibody is unable to eliminate an antigen alone, it will broadcast a help signal to seek the help of other available antibodies. The antibody which seeks help is termed the initiating antibody. The process of “coordination before elimination” is shown in the corresponding block of the flow chart in Figure 3.2. As mentioned in section 2.1.3 of Chapter 2, the idiotypic network theory presumes that, apart from a paratope, an antibody also contains an idiotope, which is responsible for communication between antibodies. Based on this analogy, the initiating antibody broadcasts the epitope of the antigen as its own idiotope to the other antibodies, in order to communicate with them in seeking help. Every antibody that is within communication range and whose light chain L1 is in the “explore” state receives the idiotope of the initiating antibody. The antibodies that are dealing with other antigens, or whose state is not “explore,” disregard the signal.
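The paratope/epitope comparison described above amounts to checking that a robot’s capability chain covers every capability named in the task’s epitope. The sketch below abstracts away the chain-alignment detail as a set-containment check, which is an assumption about the matching semantics; the capability names are hypothetical.

```python
def matches_epitope(h1, epitope):
    """True if heavy chain H1 (the robot's capabilities) covers every
    capability required by the antigen's epitope, in any order."""
    return set(epitope).issubset(set(h1))

# Hypothetical capability chains, echoing Figure 3.3:
h1 = ["mobility", "payload capability", "bumper", "pick", "push", "camera"]
epitope = ["mobility", "bumper", "push"]
```

A robot whose H1 fully covers the epitope attempts the task alone; one that matches only partially, or not at all, enters the notify/suppression branches of Figure 3.2.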
In the next step, the antibodies match the idiotope of the initiating antibody with the heavy chain H1 of their own paratopes. This matching is done in all alignments to make sure that the helping antibody has all the capabilities required to handle the antigen. If one or more antibodies completely match the H1 of their paratope with the idiotope of the initiating antibody, all these antibodies are stimulated, while the unmatched antibodies go into the suppression state and start searching for another antigen. As there may be more than one capable antibody that can offer help, the most suitable antibody among them should be chosen. A binding affinity function is used to resolve this conflict. Once the matched antibodies calculate their binding affinities, they broadcast them to each other. Each antibody compares its own binding affinity value with the binding affinities received from the other antibodies. If its own binding affinity is the highest, that antibody is stimulated and goes to help. Otherwise, the antibody goes into the suppression state and starts searching for another antigen. The antibody that goes for help is termed the helping antibody.

3.2.2 Fault Tolerance

The two major categories of possible malfunctions in multi-robot systems are partial failure and full failure. When a robot fails partially, it loses the ability to use some of its resources. There are two types of partial failure. In the first type, the robot is capable of detecting and communicating its failure to its teammates. In the second, the failed robot is unable to detect and communicate the failure to the other robots. The first type of partial failure is relatively easy to handle; once the malfunctioning robot communicates its failure, it is replaced and the task is re-allocated to a healthy robot in the team. Clearly, it is more difficult for the system to detect the second type of partial failure in a robot.
Sometimes this type of failure can deceive the other members of the team. For example, if a communication link of the failed robot is operational, it may communicate or coordinate with other robots incorrectly, which may give the impression that the failed robot is in working condition. Robot malfunction can happen at any stage of multi-robot cooperation. In this thesis, the issue of fault tolerance is addressed with some rigor, encompassing both full and partial failures. In particular, failure of the initiating or the helping antibody (robot) during selection of the most suitable antibody, and failure during execution of the cooperative task, are investigated. Once the helping antibody is selected, it sends a signal to the initiating antibody indicating its departure toward the antigen for cooperation. It then communicates its own location from time to time. In case of full failure, if the helping antibody dies before it reaches the antigen, the initiating antibody will come to know of the problem because it will stop receiving update signals from the helping antibody. After a specific time, it will broadcast a help signal again to seek help. In case of partial failure of the helping antibody before it reaches the antigen, if the helping antibody is able to communicate its failure to the initiating antibody, the initiating antibody will broadcast the help signal again. However, if the helping antibody is unable to communicate its failure, it is the responsibility of the initiating antibody to detect the failure. Since the initiating antibody should receive location coordinates from the helping antibody from time to time, if they are not received as expected, the initiating antibody will assume that the helping antibody has failed. After comparing several readings received from the helping antibody, the initiating antibody waits for a specific period of time before declaring failure.
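The timeout logic described above can be sketched as a heartbeat monitor kept by the initiating robot: the time of each location update from the helper is recorded, and the helper is declared failed once no update has arrived within a chosen window. The timeout value and the interface are illustrative assumptions, not the thesis implementation.

```python
class HeartbeatMonitor:
    """Declares a peer failed when no update has arrived within `timeout`."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}  # robot id -> time of last location update

    def update(self, robot_id, t):
        """Record a location update received from `robot_id` at time t."""
        self.last_seen[robot_id] = t

    def failed(self, robot_id, now):
        """True if `robot_id` has never reported or has gone silent."""
        last = self.last_seen.get(robot_id)
        return last is None or (now - last) > self.timeout
```

On a `failed` verdict, the initiating robot would remove the peer from its H2 inventory and rebroadcast the help signal, as described in the text.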
In case of full failure, if the initiating antibody does not receive a signal from the helping antibody after a specific interval of time, the helping antibody is declared as failed, and the initiating antibody broadcasts a new help signal. In case of failure, whether partial or full, the initiating antibody removes the identity of the failed antibody from its inventory located in the heavy chain H2. The initiating antibody may fail as well after broadcasting the help signal. Once the helping antibody reaches the antigen, it sends a “synchronize” signal to the initiating antibody in order to start cooperation. If the helping antibody does not hear back from the initiating antibody after a certain interval of time, the initiating antibody is declared as failed, and the helping antibody becomes the initiating antibody, which rebroadcasts the help signal, seeking help. However, if the initiating antibody fails partially, the same procedure adopted for the helping antibody, as indicated before, is used.

3.2.3 Binding Affinity Function

Binding affinity (β) is the degree of binding of an antibody paratope with an antigen epitope or an idiotope. In the present work, the binding affinity determines the most suitable and capable antibody in the fleet for task assignment. Suppose that there are m antibodies:

B = \{b_1, b_2, b_3, \ldots, b_m\}

and n antigens:

T = \{a_1, a_2, a_3, \ldots, a_n\}

The binding affinity is a function of the light chain L2, as given by

\beta = f(L2_p, d_{pa})  (3.1)

Here L2_p represents the light chain of the matched antibody whose paratope matches the idiotope of the initiating antibody. As the help signal of the initiating antibody also contains the location of the antigen that is to be eliminated cooperatively, d_{pa} represents the Euclidean distance between the matched antibody and the antigen (the task location).
The data for L2 and d_pa come from the antibody sensors and simple calculations:

L2 = −or_mn − ob_mn + sr + v  (3.2)

where or_mn is the orientation of the antibody, given by the value of the angle θ between the antibody heading and the antigen, and ob_mn represents the obstacles in the path between the antibody and the antigen, provided that they are within the detection range. The concept is illustrated in Figure 3.4.

Figure 3.4: Orientation and obstacle detection within the detection radius.

Since the antibodies have eight sonar sensors scanning a range of 180°, this range is divided into eight segments, each representing the scanning area of one sonar. If there is an obstacle within the sensory range, the corresponding segment is represented by binary 1; otherwise it is 0. Also, sr is the number of successful eliminations of the particular antigen by an antibody. When an antibody eliminates the particular antigen, its success rate value is incremented by one. Analogous to the human immune system, the robot will stand a better chance of eliminating the antigen again, as it has already proved its capabilities with respect to that specific antigen. Finally, v is the velocity of the antibody. The antibody that is closest to the antigen will not necessarily reach it first; that also depends on the velocity of the antibody. It is quite possible that an antibody that is far from an antigen but has a higher velocity will reach the object antigen sooner. We can write

d_pa = ds_mn = sqrt((x_m − x_n)^2 + (y_m − y_n)^2)  (3.3)

By combining equations (3.2) and (3.3), equation (3.1) can be written as

β_mn = w1(−or_mn) + w2(−ob_mn) + w3(sr) + w4(v) + w5(ds_mn)^(−1)  (3.4)

where w1, w2, w3, w4, and w5 are the weights associated with the respective variables. Each matched antibody calculates its binding affinity using equation (3.4).
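Equation (3.4) can be sketched directly in code. The weight values and the treatment of ob_mn as a count of occupied sonar segments are illustrative assumptions; the thesis does not fix specific numbers here.

```python
import math

def binding_affinity(orientation, obstacle_bits, success_rate, velocity,
                     robot_xy, antigen_xy,
                     weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """beta_mn = w1(-or) + w2(-ob) + w3(sr) + w4(v) + w5(ds)^-1.

    orientation   -- angle (rad) between robot heading and antigen
    obstacle_bits -- 8 binary sonar-segment flags (1 = obstacle seen)
    success_rate  -- prior successful eliminations of this antigen type
    velocity      -- current robot velocity
    """
    w1, w2, w3, w4, w5 = weights
    # ob_mn: number of sonar segments reporting an obstacle (assumption)
    ob = sum(obstacle_bits)
    # Euclidean distance to the antigen, equation (3.3)
    ds = math.hypot(robot_xy[0] - antigen_xy[0],
                    robot_xy[1] - antigen_xy[1])
    return (w1 * -orientation + w2 * -ob + w3 * success_rate
            + w4 * velocity + w5 / ds)
```

A robot with a small heading error, a clear path, a good success record, a high velocity, and a short distance to the antigen scores highest, matching the qualitative discussion above.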
After comparing the binding affinities, the antibody having the highest binding affinity is stimulated and offers help to the initiating antibody, while the rest go into suppression and start searching for another antigen.

3.2.4 Cooperation Between Antibodies to Eliminate an Antigen

Once the most suitable antibody is chosen based on the binding affinity, it approaches the antigen to eliminate it in cooperation with the initiating antibody. For example, in a task of object transportation, once the helping antibody arrives at the help location, the antibodies calculate the path to the goal location. In the next step, the antibodies start eliminating the antigen (i.e., transporting the object to the goal location). During cooperation, the antibodies communicate with each other only if required; for example, they will communicate when the antigen (object) needs to be rotated or an obstacle is to be avoided. During cooperation, the antibodies can also fail partially or fully. As the antibodies (robots) are equipped with various sensors, in most cases they will be able to detect a malfunction; for example, if a motor fails, the optical encoder reading will reveal this failure, which may be communicated to another antibody. To show another type of partial failure, suppose that the antibody can communicate its failure to its ally antibody, unlike the partial failure discussed in section 3.2.2, where an antibody cannot communicate its failure. In case of full failure, it is up to the cooperating antibody to judge the failure of the collaborating antibody. There are several ways to judge the failure. If an antibody communicates with another in order to rotate the object antigen or to avoid an obstacle, and the other antibody does not give the required response, it is assumed that the other antibody has failed.
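The stimulation/suppression decision above can be sketched as a small distributed winner-selection routine: every stimulated robot broadcasts its binding affinity and compares its own value against those received. Breaking ties by robot ID is an assumption introduced here; the thesis does not specify tie-breaking.

```python
def decide_role(my_id, my_affinity, received):
    """Decide whether this robot helps or keeps exploring.

    received -- dict mapping robot id -> broadcast binding affinity.
    Returns "help" if this robot has the strongest claim (highest
    affinity, ties broken by id), otherwise "explore" (suppression).
    """
    for other_id, other_affinity in received.items():
        # Lexicographic comparison: affinity first, then id as tie-break
        if (other_affinity, other_id) > (my_affinity, my_id):
            return "explore"  # another robot has a stronger claim
    return "help"
```

Exactly one robot among the stimulated set returns "help"; the rest suppress themselves and resume searching, as the text describes.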
During a cooperative task, if the antigen is not eliminated as required (for example, in object antigen transportation, if the object antigen stops moving towards the goal location), the antibody declares the collaborating antibody as failed or inefficient and broadcasts a help signal to call another antibody for assistance.

3.3 Simulation Study

The feasibility of the system developed thus far is now demonstrated by implementing the approach on a team of mobile robots that transport an object to a goal location. The human body contains different types of antigens, which require different responses. Analogously, the environment in the present simulation study consists of two different types of antigens: obstacles and the object to be transported. The antibody (robot) will use different tactics to deal with these antigens; specifically, avoiding the obstacle antigens, and transporting the object antigen. In an immune system, one antibody perceives another antibody as an antigen, called a self antigen, but being intelligent, it does not eliminate it. Likewise, in a multi-robot environment one robot is a self antigen to another robot, and is treated as a moving obstacle antigen, which is to be avoided. In this manner, the environment contains both moving and stationary obstacle antigens, which should be avoided by the antibodies. The simulation environment shown in Figure 3.5 consists of three black antibodies having different capabilities, red antigens representing static obstacles in the environment, one brown object antigen that is to be eliminated (i.e., transported to a goal location), and a yellow region representing the goal location.
Figure 3.5: Simulation environment: (a) Elimination (transportation) of the object antigen by one antibody; (b) Rotation of the object antigen by one antibody; (c) Elimination of the object antigen by two antibodies; (d) Rotation of the object antigen by two antibodies.

Figure 3.5(a) shows the elimination of the object antigen by a single antibody. In this case, if required, the antibody rotates the object antigen by pushing on the right end or the left end of the object antigen, as shown in Figure 3.5(b). Figure 3.5(c) shows the elimination of the object antigen by two antibodies. In this case, it is assumed that two antibodies are needed to rotate the object antigen, if required. As shown in Figure 3.5(d), the antibodies push the object antigen on opposite sides at opposite ends.

The present simulation study addresses two aspects. First, the effect of the number of antibodies on the system performance is investigated. Second, the effect of antibody failure on the system performance is addressed.
Beyond that, however, the number of messages increased exponentially. These messages include the help message sent by the initiating robot to all available antibodies, the messages sent by all capable antibodies to each other to determine the antibody with the highest binding affinity, and the messages sent from time to time by the helping antibody to the initiating antibody. Figure 3.7 shows the effect of the number of antibodies on the time (steps) taken for coordination. It is clear from the figure that a team comprising 25 to 30 antibodies takes a fewer number of time steps to reach the goal. This shows that the number of time steps is inversely proportional to the number of antibodies. With a larger number of antibodies, there will be more chances for them to be located closer to the task.

Figure 3.6: Effect of the number of antibodies (robots) on the communication burden during coordination to determine a suitable robot for task cooperation.

Figure 3.7: Effect of the number of antibodies (robots) on the time during coordination to determine a suitable robot that can cooperate with the initiating antibody.

Figures 3.8-3.9 show the effect of the number of antibodies on the object antigen that is being eliminated; i.e., transportation of the object to the goal location. The antibodies which cooperatively transport the object antigen will communicate only when needed. For example, they will communicate when the object antigen needs rotation, when obstacle avoidance is required, or when they have to synchronize their activities, as when pushing the object antigen simultaneously. Figure 3.8 shows the communication burden attributable to the number of antibodies.
In the present example, the number of messages increases exponentially with the number of antibodies. This is because the antibodies in the "explore" state, which wander around, also act as obstacles to the antibodies that cooperatively transport the object antigen. When a wandering antibody comes into the path of the object antigen that is being eliminated (transported), the cooperating antibodies communicate with each other in order to avoid the obstacle (the wandering antibody). The wandering antibodies behave as dynamic obstacles. As their movements are unpredictable, the cooperating antibodies need increased communication to avoid them, which increases the communication burden. The same reason holds for the increased number of time steps due to an increase in the number of antibodies. With more wandering antibodies, more time is required to avoid them. It follows that increasing the number of antibodies before elimination is helpful, as there will be greater chances for the antibodies to find an object antigen in the environment and to help if cooperation is required. However, the antibodies behave like dynamic obstacles to the cooperating antibodies which eliminate the object antigen. This will affect the efficiency of the elimination process.

Figure 3.8: Effect of the number of antibodies (robots) on the communication burden during elimination of an object antigen (transportation to a goal location).

Figure 3.9: Effect of the number of antibodies (robots) on the time (steps) during elimination of an object antigen (transportation to a goal location).

3.3.2 Effect of Robot Failure on the System

Figures 3.10-3.15 present the results of robot failures and their effect on the system performance.
Both partial and full failure are studied. The simulation environment consists of three heterogeneous antibodies, obstacles, and an object antigen.

Figures 3.10-3.13 show the results of partial, full, and no failure in the helping and initiating antibodies during coordination before transportation. In the case of partial failure, it is assumed that the helping antibody cannot communicate its failure to the initiating antibody, and vice versa. Figures 3.10 and 3.11 show the results when the helping antibody fails before it reaches the object antigen that is to be eliminated (transported) cooperatively. Figure 3.10 shows the communication burden on the system due to partial and full failure of the helping antibody. Here, the communication messages include the broadcast message sent by the initiating antibody to seek help, the messages sent between the other two antibodies in order to find the antibody having the highest binding affinity, and the messages sent by the helping antibody to the initiating robot before and after failure. In the case of partial failure, it is assumed that the antibody has failed but its communication link is still able to send messages to the initiating antibody. This makes it difficult for the initiating antibody to recognize the failure of the helping antibody. As shown in the figure, without any failure, the number of messages is at a minimum. However, in the case of partial failure, more messages are sent, because the helping antibody is not moving towards the antigen but is still sending messages to the initiating antibody. In the case of full failure, as the helping antibody does not send any messages to the initiating antibody after it fails, the number of messages is lower than in the case of partial failure. This is also a reason for the increased number of time steps in the case of partial failure, compared to the cases of no failure or full failure.
The initiating antibody has to wait for a sufficient amount of time before declaring that the antibody has failed. The time comparison is shown in Figure 3.11.

Figure 3.10: Communication burden due to partial or full failure of the helping antibody that approaches the object antigen to cooperate with the initiating antibody (mean messages: no failure 63, partial failure 236, full failure 141).

Figure 3.11: Effect on the time (steps) due to partial or full failure of the helping robot that approaches the object antigen in order to cooperate with the initiating antibody (mean time steps: no failure 147, partial failure 629, full failure 244).

Figures 3.12 and 3.13 represent the failure of the initiating antibody. It is pertinent to mention here that there is one-way communication between the initiating antibody and the helping antibody. The helping antibody sends messages from time to time until it reaches the destination, whereas the initiating antibody receives those messages in order to be sure that help is on the way. During the movement of the helping antibody towards the antigen, it does not know anything about the health of the initiating antibody. Once the helping antibody reaches the destination, it sends a synchronization signal to the initiating antibody in order to start the elimination process. In the case of partial failure of the initiating antibody, as the communication link still works, the initiating antibody replies to the helping antibody. However, due to the failure of the initiating antibody, the helping antibody is unable to eliminate the antigen alone.
The helping antibody keeps sending a synchronization signal, but on not receiving the required response after a sufficient time has elapsed, it either assumes that the initiating antibody has failed or that two antibodies are not sufficient to eliminate the antigen. In either case, the helping antibody becomes the initiating antibody and broadcasts a help signal to seek the help of another antibody. The synchronization messages between the helping and the partially-failed initiating antibody place an unnecessary communication burden on the system, which is also evident from Figure 3.12. In the case of full failure, the initiating antibody does not respond to the synchronization signal of the helping antibody. On not receiving a response after a sufficient period of time has elapsed, the helping antibody becomes the initiating antibody, in order to seek the help of another antibody. Eventually the communication burden on the system will decrease. It takes more time to determine a partial failure, as the partially-failed initiating antibody keeps replying to the synchronization signals of the helping antibody. This wastes time; furthermore, the helping antibody has to wait for a sufficient period of time to ensure that the failure has occurred. This results in more time steps in comparison to no failure and full failure, as evident from Figure 3.13.

Figure 3.12: Communication burden due to partial or full failure of the initiating antibody (mean messages: no failure 63, partial failure 236, full failure 130).
Figure 3.13: Effect on the time (steps) due to partial or full failure of the initiating antibody (mean time steps: no failure 147, partial failure 870, full failure 389).

Figures 3.14 and 3.15 show antibody failure during elimination (transportation) of the object antigen. To study another type of partial failure, in the present case it is assumed that an antibody can communicate its failure to its ally when it fails partially. However, in the case of full failure, the cooperating antibody has to recognize by itself the failure of the other antibody.

As shown in Figure 3.14, in the case of no failure, the cooperating antibodies sent an average of 127 messages to each other while eliminating the antigen. These messages comprise those sent to each other when synchronizing the pushing action, rotating the object antigen, and avoiding obstacles during elimination. The spikes seen in the figure, in the case of no failure, are due to the fact that one robot wanders in the world (while two antibodies eliminate the antigen), which causes interference from time to time. A wandering antibody that comes into the transportation path is perceived by the cooperating antibodies as a dynamic obstacle. In the case of partial failure, the messages are fewer than in full failure because an antibody can communicate its failure to the ally antibody, which will consequently rebroadcast the help signal to seek help. Moreover, as indicated in Figure 3.15, in the case of full failure the antibodies take more time (steps) to eliminate the antigen. This is because the functional cooperating antibody waits for a specified time period to confirm the failure before declaring that the ally antibody has failed.
Figure 3.14: Communication burden due to partial or full failure of a cooperating antibody during elimination of an object antigen (mean messages: no failure 127, partial failure 124, full failure 695).

Figure 3.15: Effect on time (steps) due to partial or full failure of a cooperating antibody during elimination of an object antigen (mean time steps: no failure 127, partial failure 350, full failure 919).

3.4 Physical Experiments

A physical multi-robot transportation system has been developed in the Industrial Automation Laboratory at the University of British Columbia. This is a decentralized and distributed system consisting of several mobile robots having local sensing capability. The system framework developed in the present research comprises autonomous robots, which are employed to transport an object to its goal location. Since the robots do not have knowledge about the physical parameters of the object, a robot at first will attempt to transport it by itself, provided that it has the capabilities. If not, it will communicate and coordinate with other robots to get the assistance of the most suitable and capable robot. An experimental platform developed by us for this purpose is shown in Figure 3.16. It consists of mobile robots, obstacles, and the object which is to be transported to the goal location.

Figure 3.16: Experimental platform for multi-robot cooperation.

In this project, three mobile robots manufactured by MobileRobots, Inc. are used. Two of them are Pioneer P3-DX robots. The remaining one is a Pioneer 3-AT.
A color blob is placed on the object to differentiate it from the obstacles. It also represents the partial capabilities required for the object to be transported. Anything detected without the color blob is regarded as an obstacle in the environment. The object to be transported and the obstacles are randomly placed on the floor. Since the environment is unknown to the robots, they have to search for the object and estimate its pose while avoiding obstacles in the environment, using sensory data from the sonar, the laser range finder, and the CCD camera mounted on the robots. An approach for object pose estimation has been developed by us to carry out this task, as explained in Chapter 5. A global coordinate system is employed in the present implementation. At the beginning of an experiment, each robot is given its initial position and orientation in the global coordinate system. The robots calculate and update their position and orientation information based on the data recorded from the encoders mounted on the wheels and the compass sensors, while exploring or transporting the object. Though the experimental system relies on the theory explained in sections 3.1.2 and 3.2, a policy that is different from the simulation system is employed in the developed experimental system. In the simulation system, the antibodies avoid obstacles while eliminating an antigen (transporting an object). This is not practical in the physical experiments, as the camera cannot see beyond the object. The object is rather large, which makes it even more difficult for the robots to move it in a coordinated manner so as to avoid an obstacle. In the simulation, for rotation of an object, it is assumed that two antibodies are required to push the object on opposite sides at opposite ends, as shown in Figure 3.5(d). However, in the physical experiments, one robot pushes the object at one end while another robot remains stationary at the other end of the same side.
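The encoder-and-compass pose update described above can be sketched as a simple dead-reckoning step. This is only an assumption-laden model for illustration; the actual sensor fusion used on the Pioneer robots is not specified in the text.

```python
import math

def update_pose(x, y, encoder_distance, compass_heading_rad):
    """Advance the robot's global-frame position by the distance
    reported by the wheel encoders, along the heading reported by
    the compass. Returns the updated (x, y, heading) pose.
    """
    x += encoder_distance * math.cos(compass_heading_rad)
    y += encoder_distance * math.sin(compass_heading_rad)
    return x, y, compass_heading_rad
```

Calling this at each sensing cycle keeps the pose current while the robot explores or transports the object, as the text requires.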
There is no pose estimation in the simulation, whereas in the physical experiments the robot estimates the pose of the object from time to time in order to find its orientation with respect to the goal location, until the object reaches the goal location. The experimental results with the physical multi-robot cooperative object transportation system in a real environment are presented in Figure 3.17.

Figure 3.17: Multi-robot cooperative object transportation in a real environment.

In Figure 3.17(a), an object is placed on the floor and the robots then search for it while avoiding obstacles. The three robots are assigned their initial positions and orientations in the global coordinate system before exploring the environment. During exploration, each robot searches for the color-coded object. Once the object with the color blob is identified by a robot, it matches its capabilities against the partial capabilities required for the object to be transported. Apart from object identification, the color blob also informs the robot about the partial capabilities required for the task. This capability matching is based on the theory presented in section 3.2. If the robot has the capabilities, it will estimate the pose of the object to find the centre of the object for pushing. Otherwise, it will broadcast the presence of the object to the other robots and go into the explore state. Figure 3.17(b) shows that a robot having the capabilities attempts to transport the object alone by pushing it at the centre point, as determined using a technique developed in Chapter 5. However, if the robot is unable to transport the object alone, it broadcasts the help signal to the other robots. This robot is termed the initiating robot. The robots that are in the "explore" state receive the help signal and calculate their binding affinities using equation (3.4).
The two robots send their binding affinity values to each other and compare them to determine the winner. The robot having the higher binding affinity goes to help, and is termed the helping robot. The robot with the lower binding affinity goes into the explore state. In Figure 3.17(c), the initiating robot moves to one end of the object and the helping robot moves to the other end. The location of the object, which is embedded in the help signal, is communicated to the helping robot by the initiating robot. Next, the two robots start pushing the object.

In Figure 3.17(d), fault accommodation is shown during coordination, after a robot is chosen to help the initiating robot. As the helping robot approaches the object to be transported, it fails before reaching the object. The initiating robot rebroadcasts the help signal, and the robot with the highest binding affinity approaches to move the object. Figure 3.17(e) shows the failure of a robot during transportation of the object. On sensing the failure of the robot, the other cooperating robot broadcasts a help signal in order to find another robot to replace the faulty one. The robot having the highest binding affinity then approaches the object in order to cooperatively transport it to the goal location. As the failed robot already occupies the pushing location at the right end of the object, the approaching robot goes to the centre of the object, cooperatively transports it for some distance, and then goes to the right end of the object to transport it further to the goal location. This process is shown in Figures 3.17(f) and (g). Figure 3.17(h) shows the pose estimation of the object by one robot in order to determine the orientation of the object with respect to the goal location. Once it finds the orientation, the robot coordinates with the other robot to rotate the object if required.
A total of 10 experimental trials have been completed: 5 for the case of no failure and 5 with failure. In the case of no failure, the robots on average spent 379 seconds completing the task. With failures, the robots spent 402 to 429 seconds, depending on the type of failure, to complete the task. Partial failure was introduced by switching off the motors through software, and full failure by terminating the program of the particular robot.

3.5 Multi-Object Transportation

Jerne (1974) formulated the idiotypic network theory of the immune system, which suggests that antibodies not only recognize antigens but also interact with each other. As an outcome of this mutual interaction of antibodies, a communication network arises, resulting in a formal immune network. This network forms a dynamic chain of stimulation and suppression. Based on Jerne's work, Farmer et al. (1986) developed a computational model of idiotypic network theory, in which a differential equation is used to model the suppressive and stimulating components of the network. The framework developed in the present thesis relies on Jerne's idiotypic network theory and Farmer's computational model for the communication and coordination strategies among robots. The capabilities and different variables of the robots are arranged in a chain-like configuration, analogous to the antibody structure in the human immune system. The case of multi-object transportation is considered now.

3.6 Test Environment of Multi-Robot Cooperation

The feasibility of the scheme developed in this project is now tested on a team of mobile robots transporting multiple objects to a goal location in an unknown and dynamic environment. The multi-robot test environment with control software for transportation of multiple objects is developed in Java (see Figure 3.18). It consists of six simulated mobile robots, marked in black and green, having different capabilities, and five objects to be transported to a goal location.
The objects have different properties. Red represents a set of randomly distributed static obstacles in the environment. However, robots are also dynamic obstacles to other robots. Yellow represents the goal location.

Figure 3.18: The multi-robot multi-object simulation platform.

3.7 Immune System and Multi-Robot Cooperation

The immune system is a highly distributed, decentralized, and cooperative system. If one antibody is unable to eliminate an antigen, it cooperates with other antibodies to accomplish the objective. The cooperative multi-robot task execution in the present work is analogous to the task of antigen elimination in the human immune system. Specifically, a robot is analogous to an antibody, a robot's capabilities are analogous to the antibody paratope, a task is analogous to an antigen, and the properties of the task are analogous to the antigen epitope. The environment in which the robots work contains both stationary and dynamic obstacles (note: a robot is a dynamic obstacle to any other robot), which can be regarded as self antigens. As in the immune system, the self antigens are treated differently from the non-self antigens; the self antigens (obstacles) are avoided. However, as a multi-robot environment is dynamic and unknown in general, for simplicity, all constituents in the environment other than the tasks are treated as self antigens. Task completion by the robots is analogous to antigen elimination by the antibodies.
The light chain L2 contains the velocity of an antibody, its orientation with respect to the antigen, the obstacles within the sensory range, and the success rate. The heavy chain H1 represents the coordinates at a particular time. The contents of L2 and H1 are required in calculating the binding affinity. The heavy chain H2 represents the capabilities of an antibody.

Figure 3.19: Light and heavy chains of an antibody (robot).

3.7.2 Idiotypic Network Model and Multi-Robot Cooperation

In the present work, an antibody deals with one antigen (task) at a time. If the antibody cannot tackle the antigen alone, it will seek help and will coordinate with the helper. In applying Farmer's model to multi-robot cooperation, equation (2.1) has to be evaluated in separate parts. The values of c and k1 in equation (2.1) are set equal to 1 in the present work. Figure 3.20 presents an example to explain the idiotypic network-based cooperation.

Figure 3.20: Antibody-antigen and antibody-antibody stimulation and suppression.

Once the antibody R locates an antigen (an object to be transported), it tries to eliminate the antigen alone. If unable to do so, R will seek help from other antibodies and will cooperate and coordinate with them to eliminate the antigen (transport the object). In the first step, the antibody R matches its paratope (robot capabilities) with the antigen epitope (partially known properties of the task), as indicated in Figure 3.21.

Figure 3.21: Antibody paratope and antigen epitope matching.

The matching is done in full alignment, and the threshold value for the match is equal to the number of properties in the epitope list. If a paratope matches with an epitope, the antibody is stimulated and it will start eliminating (transporting) the antigen.
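The full-alignment match can be sketched as follows. Interpreting "full alignment" as a position-by-position comparison from the start of the capability list is an assumption, and the property names are illustrative, taken from Figure 3.21.

```python
def matches_epitope(paratope, epitope):
    """Full-alignment match: every epitope property must be matched,
    position by position, at the start of the paratope capability
    list. The threshold equals the number of epitope properties,
    so a single mismatch or a too-short paratope fails.
    """
    if len(paratope) < len(epitope):
        return False
    return all(p == e for p, e in zip(paratope, epitope))
```

A robot whose capability list covers every required property is stimulated and begins transporting the object; any shortfall leaves it unstimulated.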
This encounter is indicated as

α = A_ji x_i y_j    (3.5)

Equation (3.5) represents the stimulation of antibody x_i in response to the lone antigen y_j, and A_ji is the matching function between the antibody and the antigen. If the antibody R is unable to eliminate the antigen alone, it seeks help from other antibodies. The help signal of antibody R contains its idiotope. An idiotope contains information about the antigen; for example, the antigen epitope and location. The help request is received by all antibodies that are within the communication range and have L1 in the explore state. As shown in Figure 3.20, the robots R1-R4 and B1-B2 receive the help request. All antibodies compare their paratope with the idiotope of robot R, as indicated in Figure 3.22.

Figure 3.22: Antibody-antibody idiotope and paratope matching.

The comparison is done in all alignments, and the threshold is equal to the number of antigen epitope properties received in the idiotope from antibody R. The antibodies B1 and B2, which do not have the required capabilities, disregard the help signal. The paratopes of robots R1-R4 match the idiotope of R, and these antibodies are stimulated. The stimulation is given by

δ = Σ_{i=1}^{N} S_ji x_i x_j    (3.6)

Equation (3.6) represents the stimulation of the antibodies x_i, namely R1 to R4, in response to the antibody x_j, namely R. Here i = 1 to N represents the stimulated robots, and S_ji is the matching function which represents the degree of recognition for stimulation. Unlike in a natural immune system, here the antibodies communicate with each other without coming into physical contact.
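The matching rule described above (threshold equal to the number of properties in the epitope list) can be sketched as follows. This is an interpretation for illustration, not the thesis code:

```java
import java.util.List;
import java.util.Set;

// Sketch of the matching rule described above: a paratope matches an epitope
// when the number of matched properties reaches the threshold, which equals
// the number of properties in the epitope list -- i.e. every required
// property must be covered by the antibody's capabilities.
public class Matching {
    public static boolean matches(Set<String> paratope, List<String> epitope) {
        long matched = epitope.stream().filter(paratope::contains).count();
        return matched == epitope.size();   // threshold = epitope length
    }
}
```

The same predicate serves both comparisons above: antibody-antigen (paratope vs. epitope) and antibody-antibody (paratope vs. the epitope properties carried in the idiotope).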
In the present case antibodies are stimulated in response to one individual antibody, unlike in the idiotypic network theory where one antibody can be stimulated in response to many antibodies. Based on equation (3.6) and as shown in Figure 3.20, four antibodies are stimulated to help the antibody R. Only one stimulated antibody, the most suitable one among the four, must go to help; the rest should go into the suppression state and start searching for other tasks. In order to be selected for cooperation with the initiating antibody, each stimulated antibody calculates its binding affinity based on the values in the light chain L2 and the heavy chain H1. Every stimulated antibody broadcasts its binding affinity to the rest of the stimulated antibodies and compares its own binding affinity with the received ones, based on

τ = Σ_{j=1}^{N} P_ij x_i x_j    (3.7)

Equation (3.7) represents the suppression of the antibody x_i in response to all other antibodies x_j. Here j = 1 to N represents the number of antibodies, and P_ij is the matching function representing the degree of recognition for suppression. An antibody x_i receives the value of the binding affinity function from the other stimulated antibodies x_j; there are three such antibodies in the example shown in Figure 3.20. After the comparison, the antibodies that have a lower binding affinity than a received one go into suppression. The status of their light chain L1 changes to "explore" and they start searching for a new task. The antibody having the highest binding affinity, termed the helping antibody, is stimulated to help the initiating antibody R, and the two will cooperatively eliminate the antigen (transport the object). This process is extended in a similar manner if a third antibody is required, in case two antibodies are unable to eliminate the antigen. The initiating or helping antibodies may fail fully or partially at any stage of coordination and cooperation.
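The suppression step above reduces, for each stimulated antibody, to a comparison of its own binding affinity against the broadcast values. A minimal sketch (an illustrative helper, not the thesis implementation):

```java
// Sketch of the suppression step (3.7) described above: each stimulated
// antibody compares its own binding affinity with the affinities broadcast
// by the other stimulated antibodies; all but the highest-affinity antibody
// suppress themselves and return to the explore state.
public class Suppression {
    // true  => remain stimulated (become the helping antibody)
    // false => go into suppression and resume searching for tasks
    public static boolean remainsStimulated(double ownAffinity,
                                            double[] receivedAffinities) {
        for (double other : receivedAffinities) {
            if (other > ownAffinity) return false;  // a better candidate exists
        }
        return true;
    }
}
```

Because every stimulated antibody runs the same comparison on the same broadcast values, exactly one antibody (the highest-affinity one) remains stimulated, without any central arbiter.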
In the event of a malfunction of the initiating or helping antibody, the other antibody must take corrective measures to replace the failed antibody with a healthy one. To do this, the malfunctioning antibody must first be declared as failed by the collaborator antibody. Next, the selection process of another healthy antibody is started, as explained earlier. The antibody is declared as failed based on

ξ = k x_i    (3.8)

Here k is the stimulus rate at which the malfunctioning antibody x_i is declared as failed. Note that k changes depending on the type of failure, full or partial. Equations (3.5) through (3.8) may be rewritten as follows:

ẋ_i = [α + δ − τ] − ξ
    = A_ji x_i y_j + Σ_{i=1}^{N} S_ji x_i x_j − Σ_{j=1}^{N} P_ij x_i x_j − k x_i    (3.9)

Equation (3.9) represents the modified Farmer's model of multi-robot cooperation for multi-object transportation.

3.8 Simulation Study for Multi-Object Transportation

The present simulation study addresses two aspects based on the modified Farmer's model of multi-robot cooperation for multi-object transportation. First, the effect of the number of antibodies on the system performance is investigated. Second, the effect of antibody failure on the system performance is addressed.

3.8.1 Effect of the Number of Antibodies

In this section, the effect of increasing the number of robots on the multi-robot system is analyzed. The main objective here is to study the suitability and effectiveness of the developed approach for different team sizes. Two important factors are considered to study the effectiveness of the approach: the time (steps) taken to complete the task and the communication burden on the system, with respect to the number of antibodies. In the beginning, the number of simulated mobile robots is kept at six. In the task, five objects having different properties are transported to a goal location. Figures 3.23-3.29 present typical results obtained from the simulation studies.
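The per-antibody dynamics of the modified Farmer's model (3.9) driving these simulations can be evaluated numerically as below. This is a minimal sketch: the matching-function products are assumed to be precomputed and passed in, and all values are illustrative.

```java
// Numeric sketch of equation (3.9): the rate of change of an antibody's
// stimulation level combines antigen stimulation (alpha), stimulation by
// other antibodies (delta), suppression by other antibodies (tau), and the
// failure term (k * x_i). Inputs are the precomputed products
// A_ji x_i y_j, S_ji x_i x_j, and P_ij x_i x_j.
public class FarmerModel {
    public static double rate(double antigenTerm,   // A_ji x_i y_j
                              double[] stimTerms,   // S_ji x_i x_j, i = 1..N
                              double[] suppTerms,   // P_ij x_i x_j, j = 1..N
                              double k, double xi) {
        double delta = 0.0, tau = 0.0;
        for (double s : stimTerms) delta += s;
        for (double p : suppTerms) tau += p;
        return (antigenTerm + delta - tau) - k * xi;
    }
}
```

For example, with a matched antigen term of 1.0, one stimulation term of 0.5, one suppression term of 0.2, and k = 0.1 at x_i = 1.0, the rate is 1.0 + 0.5 − 0.2 − 0.1 = 1.2.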
In all the cases shown here, the simulations are run 1000 times for a specific number of antibodies, and the average number of messages and the average time (steps) are computed (see the labels on the Y-axes of the figures).

To study the time (steps) taken during coordination in determining a suitable antibody for cooperation with the initiating antibody, simulations were conducted by increasing the number of robots for the same task. Figure 3.23 indicates the effect of the number of antibodies on the time (steps) taken for coordination. It is clear from the figure that when the number of antibodies is increased from 6 to 30, the time (steps) first decreases and then starts to increase. The decrease happens because, with more antibodies, there are more chances for one of them to be located close to the antigen, and hence to reach it sooner. However, since the antibodies also act as obstacles to the helping antibody, increasing the number of antibodies beyond a point increases the task time (steps). Figure 3.24 indicates how the time (steps) increases with the number of antibodies during the process of eliminating an antigen (i.e., transporting an object).

Figure 3.23: Effect of the number of antibodies (robots) on the time during coordination to determine a suitable robot that can cooperate with the initiating antibody.

Figure 3.24: Effect of the number of antibodies (robots) on the time (steps) during elimination of an antigen (transportation to a goal location).

When an antibody is unable to tackle the task alone, it coordinates with other antibodies for autonomous selection of the most suitable capable antibody, based on the binding affinity (3.4) and the modified Farmer's model (3.9).
As shown in Figure 3.25, when the number of antibodies in a team is increased, the average number of messages increases as well. These messages include the help message sent by the initiating antibody to all other available antibodies, the messages sent by all capable antibodies to each other to determine the antibody with the highest binding affinity, and the messages sent from time to time by the helping antibody to the initiating antibody until it reaches the antigen. Figure 3.26 shows the communication burden attributable to the number of antibodies during cooperative elimination of an antigen. In the present example, the number of messages increases with the number of antibodies. This is because the antibodies in the "explore" state, which wander around, also act as obstacles to the antibodies that are cooperatively eliminating an antigen. The wandering antibodies behave as dynamic obstacles. As their movements are unpredictable, the cooperating antibodies need increased communication to avoid them, which increases the communication burden.

Figure 3.25: Effect of the number of antibodies (robots) on the communication burden during coordination to determine a suitable antibody for task cooperation.

Figure 3.26: Effect of the number of antibodies (robots) on the communication burden.

When the number of robots increases, there is a greater opportunity for the robots to be trapped at corners, surrounded by other robots, or trapped between obstacles and other robots. As expected, the larger the number of robots, the greater the likelihood of entrapment (see Figure 3.27).
Figure 3.27: Robot entrapment with the increase in the number of antibodies.

As shown in Figures 3.28 and 3.29, the number of antibodies affects the average number of time steps and messages for completion of a full experiment; i.e., elimination of all antigens (transportation of all objects). It is clear from Figures 3.28 and 3.29 that a team comprising 7 to 13 antibodies provides the best performance, taking fewer time steps and placing a lower communication burden on the system to eliminate all the antigens. The same is evident from the results shown in Figures 3.23 to 3.27.

Figure 3.28: Average time (steps) incurred to eliminate all antigens.

Figure 3.29: Average number of messages incurred to eliminate all antigens.

3.8.2 Fault Tolerance

The issue of fault tolerance in both cooperative elimination and elimination by a single antibody is considered now. If an antibody does not behave as expected, then once the stimulation level ξ (3.8) reaches the threshold value, the collaborator antibody is stimulated to declare the malfunctioning antibody as failed.

Figures 3.30-3.35 present the results of antibody failures, their effect on the system performance, and the robustness of the developed approach. These results are based on the scenario where more than one antibody is required to cooperatively eliminate an antigen. Both partial and full failures are studied here. One type of partial failure occurs when an antibody is capable of detecting its breakdown and can communicate that to its teammates.
This is named here "partial failure 1." The partial failure where the antibody is unable to detect and communicate its failure to other robots is termed "partial failure 2"; in partial failure 2, it is assumed that the communication link of the malfunctioning antibody is still operational. Figures 3.30 and 3.31 show the results when the helping antibody fails before it reaches the antigen that is to be eliminated. Figure 3.30 shows the time (steps) incurred for the different types of failure. The time (steps) taken in partial failure 1 is lower because the malfunctioning antibody communicates its failure to the teammates. However, in partial failure 2 and in full failure, the time taken is greater because the initiating antibody has to wait longer, until the stimulation level reaches the threshold value, before declaring that the helping antibody has failed.

Figure 3.30: Effect on execution time (steps) due to partial or full failure of the helping antibody that approaches the antigen in order to cooperate with the initiating antibody (mean time steps: no failure, 589; partial failure 1, 679; partial failure 2, 1685; full failure, 1387).

Figure 3.31 shows the communication burden on the system due to partial and full failure of the helping antibody. Here, the communication messages include the broadcast message sent by the initiating antibody to seek help, the messages sent among the other antibodies in order to determine the antibody with the highest binding affinity, the messages sent by the helping antibody to the initiating robot before and after failure, and additional messages pertaining to selection of a healthy helping antibody once the present one is declared as failed. As shown in the figure, the communication burden increases with failure.
However, the system manages to cope with all the failures at the expense of extra time (steps) and communication burden.

Figure 3.31: Communication burden due to partial or full failure of the helping antibody that approaches the antigen to cooperate with the initiating antibody (mean messages: no failure, 799; partial failure 1, 2081; partial failure 2, 4128; full failure, 2050).

Figures 3.32 and 3.33 show the results of failure of the initiating antibody during coordination, before elimination. Here it is relevant to mention that there is only one-way communication between the initiating antibody and the helping antibody. The helping antibody sends messages from time to time until it reaches the destination, and the initiating antibody receives those messages in order to be sure that help is on the way. While moving towards the antigen, the helping antibody knows nothing about the health of the initiating antibody. Once it reaches the antigen, the helping antibody sends a "synchronize" signal to the initiating antibody in order to start the elimination process. On not receiving a reply or the required response, the helping antibody assumes either that the initiating antibody has failed or that two antibodies are not adequate to eliminate the antigen. In either case, the helping antibody becomes the initiating antibody and broadcasts a help signal again. Figure 3.32 shows the effect on time (steps) due to failure of the initiating antibody. It is evident from the figure that in the absence of any failure the initiating and the helping antibodies spend the least amount of time (steps) to reach the antigen in order to start cooperative elimination.
In the case of partial failure 1, the time (steps) taken is less than in both partial failure 2 and full failure, since the initiating antibody can communicate its failure to the helping antibody. The time (steps) taken is largest in the case of partial failure 2, because the communication link of the malfunctioning antibody is operational and it can still communicate with the helping antibody, which misleads the helping antibody. Due to this pretence, the helping antibody takes more time to reach the stimulation threshold level before declaring that the initiating antibody has failed. If the initiating antibody dies, the system is able to cope with such failure as well, by calling another antibody to replace the dead one. Figure 3.33 shows the communication burden on the system due to partial and full failure of the initiating antibody. Here, the communication messages include the broadcast message sent by the initiating antibody to seek help, the messages sent by other capable antibodies to each other in order to find the antibody having the highest binding affinity, the messages sent by the helping antibody to the initiating antibody from time to time until it reaches the antigen, and the antibody communication needed to choose another antibody to replace a failed initiating antibody. When there is no failure, the messages are at a minimum. Due to partial failure 1, partial failure 2, or full failure, the number of messages increases as the helping antibody keeps sending messages to the initiating antibody until the stimulation level reaches the threshold value. Once the initiating antibody is declared as failed, the helping antibody becomes the initiating antibody and broadcasts a help signal again. The entire process of selecting a helping antibody then starts again, which places an extra communication burden on the system.
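The threshold-based failure declaration underlying these results, from equation (3.8), can be sketched as follows. The values of k and the threshold are illustrative assumptions; in the thesis, k depends on whether the failure is partial or full:

```java
// Sketch of the failure-declaration rule (3.8): the collaborator antibody
// accumulates a stimulus k for each unexpected (or missing) response from
// its partner, and declares the partner failed when the accumulated level
// reaches a threshold.
public class FailureDetector {
    private double level = 0.0;
    private final double threshold;

    public FailureDetector(double threshold) { this.threshold = threshold; }

    // Called once per unexpected observation; returns true when the partner
    // should be declared failed.
    public boolean observe(double k) {
        level += k;
        return level >= threshold;
    }
}
```

A larger k (e.g. for a self-reported partial failure 1) reaches the threshold in fewer observations, which is consistent with the shorter declaration times reported above for partial failure 1 compared with partial failure 2.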
Figure 3.32: Effect on execution time (steps) due to partial or full failure of the initiating antibody (mean time steps: no failure, 589; partial failure 1, 947; partial failure 2, 2618; full failure, 1422).

Figure 3.33: Communication burden due to partial or full failure of the initiating antibody (mean messages: no failure, 799; partial failure 1, 894; partial failure 2, 2957; full failure, 1217).

Figures 3.34 and 3.35 concern antibody failure during cooperative elimination (transportation) of an antigen. As indicated in Figure 3.34, the antibodies take more time (steps) to eliminate an antigen when an antibody fails during cooperative elimination. This is because the ally antibody takes time to make sure that the other has failed; moreover, a healthy antibody has to come to replace the malfunctioning antibody, which also consumes time.

Figure 3.34: Effect on execution time (steps) due to partial or full failure of a cooperating antibody during elimination of an antigen (mean time steps: no failure, 474; partial failure 1, 819; partial failure 2, 3374; full failure, 2079).

In Figure 3.35, when there is no failure, the cooperating antibodies sent on average 348 messages to each other while eliminating the antigen. These messages comprise the messages sent when synchronizing the pushing action, rotating the antigen, and avoiding obstacles during elimination.
However, the number of messages increases with failure for two reasons. First, the ally antibody keeps sending messages to the malfunctioning antibody, asking it to make the required move, until the stimulus threshold level is reached. Second, the ally antibody has to start a new process of selecting a helping antibody to replace the malfunctioning one, which requires messaging between the antibodies, as mentioned before. Nevertheless, the results show that the system can cope with the failure of an antibody during cooperative elimination of an antigen, though it takes more time (steps) and places an extra communication burden on the system.

Figure 3.35: Communication burden due to partial or full failure of a cooperating antibody during elimination of an antigen (mean messages: no failure, 348; partial failure 1, 374; partial failure 2, 3079; full failure, 1620).

When an antibody in the explore state finds an antigen and is able to eliminate it alone, failure can occur only during the elimination process. In this situation, partial failure 2 and full failure are identical, as there is no ally antibody that can judge the failure; some other antibody in the explore state will eventually find the now-stationary antigen and start eliminating it. In the case of partial failure, the failed antibody can inform the other antibodies in the explore state about the antigen. An antibody, chosen based on its capability and binding affinity, then approaches the antigen to replace the failed antibody and eliminate the antigen. Figures 3.36 and 3.37 show the results for elimination of an antigen by a single antibody. As indicated in Figure 3.36, when there is no failure the antibody takes less time (steps) to eliminate the antigen.
However, in full failure it takes more time (steps) than in partial failure, as the malfunctioning antibody cannot communicate its failure to the teammates. Figure 3.37 shows the communication burden due to failure of an antibody during the elimination process. It is evident from the figure that no messages are incurred in the cases of full failure and no failure. This is because a single antibody is used to eliminate the antigen, so there are no cooperative elimination messages, and a fully failed antibody cannot send any messages. The figure shows messages in the case of partial failure only, since then the antibody is able to communicate its failure to the teammates. The antibodies in the explore state communicate with each other to choose the most suitable antibody to replace the malfunctioning one.

Figure 3.36: Effect on execution time (steps) due to partial or full failure of an antibody during the elimination of an antigen by a single antibody (mean time steps: no failure, 474; partial failure, 1032; full failure, 1776).

Figure 3.37: Communication burden due to partial failure of an antibody during the elimination of an antigen by a single antibody (mean messages: no failure, 0; partial failure, 68; full failure, 0).

The spikes and overshoots seen in the figures in the case of no failure are due to the fact that the antibodies in the explore state wander in the world, causing interference from time to time during elimination of an antigen. Apart from these dynamic obstacles, the static obstacles in the environment are also a source of overshoots in the figures.
Moreover, when a malfunctioning antibody is replaced by a healthy one, the replacement involves extra messaging between the antibodies for choosing the most suitable antibody, and extra time (steps) for the healthy antibody to reach the task; this also appears as overshoots in the figures.

3.9 Summary

In this work, an autonomous, fault-tolerant multi-robot cooperation framework based on an artificial immune system (AIS) was developed. A computational method based on the modified Farmer's model of idiotypic network theory was developed for simulating the stimulation and suppression phenomena. In the present approach, robot cooperation was not planned beforehand, but rather invoked by the system when required. Both partial and full failures were introduced in the robots at different stages to demonstrate the robustness of the approach. The approach was first studied through simulation, and then implemented on a physical team of heterogeneous robots performing object transportation experiments in our laboratory. Transportation of both single and multiple objects having different properties was undertaken. The results showed that both the developed simulation and the physical system were able to successfully complete the desired task in an unknown environment with a dynamic and static obstacle distribution.

Chapter 4 Comparison with Market-Based Multi-Robot Cooperation

4.1 Market-Based Approaches

In market-based approaches, the robots trade tasks and resources with one another to maximize individual profit and simultaneously improve the efficiency of the team. Auctions are the most common methods used in market-based approaches. Robots trade tasks through auctions and negotiations to win the tasks that generate the greatest profit. In an auction, a task is offered by an auctioneer robot, and the participating robots submit bids to the auctioneer robot in order to win the auctioned task.
After receiving all the bids, the auctioneer awards the task to the highest bidder. The bid price may be based on the robot's cost, computed from such metrics as capabilities and resources.

A multi-robot system can be either fully centralized or fully distributed. In fully centralized approaches a single leader robot commands the entire team. Specifically, the leader robot gathers all the information and produces an optimal solution for the entire team. However, centralized approaches are prone to failure: the failure of the leader robot will cripple the entire system. Centralized approaches are suited to applications involving small teams and a static environment, where global information is easily available. On the other hand, in fully distributed approaches, the robots have local views and rely on local knowledge. Such approaches are fast and fault tolerant, but can produce suboptimal solutions. Market-based approaches lie in between the centralized and distributed approaches; they can adapt to dynamic conditions to produce more centralized or more distributed solutions (Diaz et al., 2006).

4.1.1 Auction

The traditional auction process involves a series of bids or offers by potential purchasers until the final bid is accepted by the auctioneer. In multi-robot cooperation in particular, the auction proceeds with the following steps:

• Announcement phase. In this phase, an agent acting as an auctioneer offers an item or task for bidding. In a multi-robot system, the auctioneer robot announces a message corresponding to the task to be auctioned. The message contains the details of the task, such as the task location and the required capabilities.

• Cost estimation. The robots in an idle or explore state accept the message. Since the team comprises heterogeneous robots having different capabilities, the robots compare their own capabilities with the capabilities required for the task, as specified in the message.
Only the capable robots will estimate the cost of the task, based on the bidding function, which is explained in Section 4.4.

• Bid submission. After evaluating the cost, the participating robots submit it to the auctioneer as a sealed bid.

• Winner determination phase. The auctioneer evaluates all the bids. Based on this evaluation, the auctioneer announces the winner, awards the task, and closes the auction. The losers return to the explore state and start searching for new tasks.

• Progress monitoring phase. In this phase, the auctioneer monitors the progress of the winner. The auctioneer periodically sends a message to the winner, as and when required, until the task is completed.

4.2 Auction-Based Multi-Robot Cooperation

In this research, a team of autonomous robots makes independent decisions, coordinates, resolves conflicts, and, if required, the robots cooperate with each other to achieve a common goal. The system autonomously determines the number of robots required for the cooperative task based on the properties of the task. The capabilities of each robot in the team are predefined. A bidding function is defined to resolve conflicts among the robots and to ensure that the most suitable robot in the fleet is selected, possibly in cooperation with other robots, to successfully accomplish the task. In this chapter, the market-based approach is developed, the feasibility of the scheme is illustrated, and its performance is evaluated in comparison with the immune-based approach through computer simulations, which are implemented in Java.

4.2.1 Assumptions

The present goal is to develop a cooperation strategy among multiple robots, derived from the auction mechanism used in market-based approaches, to collectively carry out a robotic task. In order to properly characterize the present approach, the following characteristics and assumptions of the system are considered:

• Heterogeneity: Individual robots have different capabilities and sensors.
They may or may not be able to complete a task independently.

• Communication: All the robots in the team are always located within the communication range of each other.

• Environment: The robots work in a dynamic and unknown environment.

• Distributed system: No robot has a global view of the system or the environment. No centralized knowledge is available to the robots.

4.3 Auction-Based Algorithms for Multi-Robot Cooperation

The environment input and assumptions in all auction-based algorithms for a multi-robot system are given below.

Environment input:
• Map M, containing goal location G
• A set of tasks T = {t_1, ..., t_n}
• A set of partially known task properties P = {p_1, ..., p_n}
• A set of heterogeneous robots R = {r_1, ..., r_x}
• A set of robot capabilities C = {c_1, ..., c_y}
• A set of randomly scattered, unknown, and dynamic obstacles O = {o_1, ..., o_z}

Assumptions:
• Each item in the sets T and R owns a copy of a subset of P and C, respectively
• A target task may be executed by one or more robots
• PRE specifies the initial conditions that must be true before the algorithm starts
• POST specifies the final condition that holds when the algorithm finishes
• Lowercase 'a' is the auctioneer's id, 'w' is the winner's id, and i is the task index
• Message types: H = HELP, A = ARRIVE, C = COST, W = WINNER, I = IGNORE

4.3.1 Auction Algorithm

PRE:
• IsTouching(r_a, t_i) == true
• C_ra >= P_ti || C_ra < P_ti
• SingleTaskProcessing(C_ra, t_i) == false || auction due to robot failure

POST:
• START CooperativeTaskProcessing(r_a, r_w, t_i) || START Auction(r_w, t_i)

Initialize all r_x state to SEARCH
r_a Broadcast(H, t_i, P_ti) to all r_x
For each r_x in R in range, where (x != a)
    If (r_x state != SEARCH)
        Drop message
    Else
        BF_rx,ti = CalculateBiddingFunction(r_x, t_i)
        r_x Send(r_a, BF_rx,ti)
        r_x state = AUCTION
        r_x auction_task_id = i
BF_rw,ti = max(BF_r1,ti, ..., BF_rn,ti)
r_w selected by r_a
r_a Broadcast(W, w, t_i) to all r_x in R
For each r_x in R in range, where (x != a)
    If (r_x state == AUCTION && r_x auction_task_id == i)
        If (x != w)
            r_x state = SEARCH
            r_x auction_task_id = null
        Else if (x == w)
            r_x state = APPROACH; Approach(r_x, t_i)
    Else
        Drop message
r_a waits for an arrival message
r_w Send(A, r_a, t_i) on arrival
r_a checks the arrival message
If (C_ra >= P_ti)
    START CooperativeTaskProcessing(r_a, r_w, t_i)
END

The robots R in a team explore the environment, searching for tasks T. When a task is found, a robot r_x compares its own capabilities with the partially known properties of the task. If it does not have the required capabilities, the robot r_x becomes the auctioneer robot r_a and auctions the task for bidding to other robots by announcing the properties P_ti of the task t_i. After awarding the task to the highest bidder r_w, the robot r_a starts searching for another task. If the robot has the necessary capabilities, it carries out the task alone.
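The bidding and winner-determination steps of the algorithm above can be sketched in Java (the simulation language of this thesis). Message handling is reduced to direct method calls, and the bidding computation is a distance-based placeholder, not the actual bidding function of Section 4.4:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// Simplified sketch of the auction algorithm above. Names are illustrative.
public class AuctionSketch {
    enum State { SEARCH, AUCTION, APPROACH, PROCESS }

    static class Robot {
        final String id;
        final Set<String> capabilities;
        State state = State.SEARCH;
        Integer auctionTaskId = null;
        Robot(String id, Set<String> caps) { this.id = id; this.capabilities = caps; }

        // On receiving (H, t_i, P_ti): drop unless searching and capable;
        // otherwise compute a bid and enter the AUCTION state.
        Optional<Double> onHelp(int taskId, List<String> taskProps, double dist) {
            if (state != State.SEARCH) return Optional.empty();
            if (!capabilities.containsAll(taskProps)) return Optional.empty();
            state = State.AUCTION;
            auctionTaskId = taskId;
            return Optional.of(1.0 / (1.0 + dist));   // placeholder bidding function
        }
    }

    // Winner determination: award the task to the highest bidder; the losers
    // return to SEARCH, the winner approaches the task.
    static String determineWinner(Map<Robot, Double> bids) {
        Robot winner = null;
        double best = Double.NEGATIVE_INFINITY;
        for (Map.Entry<Robot, Double> e : bids.entrySet()) {
            if (e.getValue() > best) { best = e.getValue(); winner = e.getKey(); }
        }
        for (Robot r : bids.keySet()) {
            if (r == winner) { r.state = State.APPROACH; }
            else { r.state = State.SEARCH; r.auctionTaskId = null; }
        }
        return winner == null ? null : winner.id;
    }
}
```

Note that, as in the pseudocode, robots already engaged in an auction or a task simply drop the help message, so a robot bids on at most one task at a time.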
If the robot is unable to tackle the task alone, it auctions the task for bidding and coordinates with other robots to obtain their help in order to cooperatively complete the task. This process is somewhat similar to how a human tackles a task. Knowing her own capabilities, a human predicts whether she can perform a given task by observing it or coming physically into contact with it, and then decides whether to accept the task. Upon deciding to tackle the task, she tries to complete it alone. Failing this, she seeks help from her colleagues to cooperatively carry out the task.

The auctioneer robot r_a announces the task to the other robots R in the team by sending a message which describes the task. The team members who are exploring the environment in search of a task accept the message. If more than one robot is capable of tackling the task, the most suitable robot is chosen for the task on the basis of a bidding function. In this method, all capable robots r_x change their status to "auction," calculate their bids based on the bidding function, and submit their bids to the auctioneer robot. The auctioneer evaluates all the bids received from the participating robots and selects the robot that has the highest score. The auctioneer robot announces the winner robot r_w and assigns the task to that robot, which then approaches the task t_i to cooperate with the auctioneer robot r_a in carrying it out. The robots that have lower scores return to wandering, searching for other tasks. Once the winner robot reaches the task location, it sends an arrival message to the auctioneer robot and subsequently executes the task.

4.3.2 Single Robot Task Execution Algorithm

Once a capable robot finds a task, it tries to execute the task alone or in cooperation with another robot. If the robot can carry out the task alone, the following algorithm is executed.

PRE:
• r_x arrives at target t_i
• r_x state = PROCESS
• C_rx >= P_ti
•  No other r is currently executing the task ti
POST:
•  ti is in G. T = T – ti, G += ti

  if ti needs more than 1 robot
    return FALSE
    END
  while !Done(ti)
    ProcessTask(rx, ti)
  T = T – ti, G = G + ti
  rx state = SEARCH
END

The robot rx finishes the task ti alone. Once the task is completed, the robot's state changes to "search," and it starts to search for other tasks in the environment.

4.3.3 Cooperative Task Execution Algorithm

If more than one robot is required to perform the task cooperatively, the following algorithm is executed.

PRE:
•  ra, rw arrive at target ti
•  ra, rw state = PROCESS
•  Cra >= Pti, Crw >= Pti
•  No other r is currently transporting ti
•  ra is the initiator/leader
POST:
•  ti is in G. T = T – ti, G += ti

  rw Send(ARRIVE, ra, ti)
  ra Send(START_CTE, rw, ti)
  while ti location is not in G
    ExecuteTask(ra, rw, ti)
  T = T – ti, G = G + ti
  ra state = SEARCH
  rw state = SEARCH
END

Once a winner robot rw is chosen by the auctioneer robot ra and reaches the task ti, it sends an arrival message to ra. The auctioneer robot ra, which is also the leader robot, commands the winner robot rw to start executing the task. The two robots execute the task cooperatively until it is completed. Once the task is accomplished, the states of both robots change to "search" and they start searching for other tasks.

4.3.4 Fault Tolerance

There are two main types of failure in the robots: partial failure and full failure. When a robot fails partially, it loses the ability to use some of its resources. There are two kinds of partial failure. In partial failure 1, the robot can detect and communicate its failure to the team members and can re-allocate its task to them. Here we assume that the communication link of the malfunctioned robot is working even though some parts of the robot have failed.
In partial failure 2, the robot is unable to detect the malfunction, communicate it, and re-allocate the task. To make the situation more challenging, we assume that the communication link of the malfunctioned robot is working but the robot is unable to detect its failure. This kind of failure is very difficult to detect, as the robot's ability to communicate and coordinate with other robots deceives them into thinking that the robot is in working condition. Full failure is equivalent to robot death. Detection is also difficult here, since a dead robot cannot detect its own death and re-allocate its task.

If a malfunctioned robot does not respond to a teammate's request to accomplish a task, such as pushing or turning the object, within a specific length of time, the teammate declares that the robot has failed. The collaborating robot monitors and examines the movement of the malfunctioned robot; the teammate's motivation is raised by every unexpected move, and once the motivation level reaches a certain threshold the collaborating robot declares that the malfunctioned robot has failed.

Robot failure can have a catastrophic effect on the performance and efficiency of the system. The most critical failures are those of the auctioneer and winner robots during the process of selecting the most suitable robot for the task (the auctioneer can fail before or after selecting the winner), and those of the leader and follower robots during task execution. The fault tolerance of both situations is now addressed to evaluate the robustness of the developed approach.

4.3.5 Algorithm When Auctioneer Robot Fails Before Announcing the Winner

PRE:
•  ra fails before the winner is selected
•  All rx received (H, ti, Pti)
POST, 2 possible cases:
•  First robot to arrive, r1, will start the auction for ti. R = R – ra. O = O + ra
•  Auction continues; failure will be detected later.
  if (ra failure == Partial Failure 1)
    Continue AUCTION
  else if (ra failure == Partial Failure 2)
    Continue AUCTION
  else if (ra failure == Full Failure)
    All rx will timeout (no winner message received)
    For all rx in R
      Approach(rx, ti)
    Let r1 = first rx to arrive
    r1 Send(A, rx, ti)
    Auction(r1, ti)
END

All the robots that submit bids wait for the auctioneer to assign the task to the winner. If the bidder robots do not hear from the auctioneer, they all start moving toward the task. The robot that arrives first at the task broadcasts an arrival message to all other bidder robots and then reports its arrival to the auctioneer robot. Not having heard from the auctioneer, this first-arriving robot declares that the auctioneer has failed; it then becomes the auctioneer and re-auctions the task.

4.3.6 Algorithm When Auctioneer Robot Fails After Announcing the Winner

PRE:
•  ra fails after the winner is selected and before the winner arrives
•  All rx received (W, w, ti)
POST:
•  R = R – ra. O = O + ra

  if (ra failure == Partial Failure 1)
    Notify("ra fails", rw)
    rw ally = NULL
  else if (ra failure == Partial Failure 2)
    Continue AUCTION
    Note: failure will be detected at the task execution stage.
  else if (ra failure == Full Failure)
    rw continuously sends approach messages
    rw arrives
    rw Send(A, ra, ti)
    rw declares ra failed after reaching threshold
END

An auctioneer robot may fail after choosing the winner robot. If the failure is of type partial failure 1, the auctioneer notifies the winner robot of its failure. In the case of partial failure 2, the failure is detected during cooperative task execution, when the auctioneer robot does not perform as expected. If the auctioneer robot does not respond at all and the threshold is reached, the winner robot declares that the auctioneer has failed.
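The motivation-threshold rule used throughout these failure-detection algorithms can be sketched as a small function. The function name, the threshold value, and the choice to reset motivation after an expected response are assumptions made for this example; the thesis only specifies that motivation rises on unexpected behaviour and that crossing a threshold triggers a failure declaration.

```python
# Minimal sketch (assumed names, not the thesis code) of motivation-threshold
# failure detection: each missed or unexpected response raises the observer's
# motivation; crossing the threshold triggers a failure declaration, after
# which the caller re-auctions the task.
MOTIVATION_THRESHOLD = 5  # illustrative value

def partner_has_failed(responses, threshold=MOTIVATION_THRESHOLD):
    """Return True if the monitored partner should be declared failed.

    responses[k] is True when interaction k went as expected. Motivation
    rises on every miss and (by assumption here) resets on a good response.
    """
    motivation = 0
    for answered in responses:
        motivation = 0 if answered else motivation + 1
        if motivation >= threshold:
            return True   # declare partner failed; caller re-auctions
    return False
```

For example, five consecutive silent or unexpected responses drive the motivation level to the threshold and produce a failure declaration, while occasional misses interleaved with expected behaviour do not.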
4.3.7 Algorithm for Failure of Winner Robot

PRE:
•  rw fails while approaching the task
•  All other rx received (W, w, ti) and ignored ti
POST:
•  R = R – rw. O = O + rw

  if (rw failure == Partial Failure 1)
    rw Notify("rw fails", ra)
    ra ally = NULL
    Auction(ra, ti)
    END
  else if (rw failure == Partial Failure 2)
    Continue Approach(rw, ti)
    ra detects rw failure from approach messages containing coordinates
    After ra reaches threshold,
      ra Notify("rw fails", rw)
      ra ally = NULL
      Auction(ra, ti)
    END
  else if (rw failure == Full Failure)
    After ra reaches threshold,
      Auction(ra, ti)
    END

When a winner robot is selected by the auctioneer robot, the winner sends messages to the auctioneer at regular intervals so that the auctioneer can monitor its movements. The winner robot may fail while approaching the task. In the case of partial failure 1, the winner robot notifies the auctioneer robot of its failure. In partial failure 2, the winner robot is declared failed when its coordinates do not change as expected. If the auctioneer robot does not receive any messages from the winner robot and the threshold is reached, the auctioneer declares that the winner robot has failed.

4.3.8 Algorithm for Robot Failure During Single Robot Task Execution

PRE:
•  rx can fail at any time during task execution
POST:
•  ry replaces rx; task execution resumes

  if (rx failure == Partial Failure 1)
    Auction(rx, ti, {R})
    END
  else if (rx failure == Partial Failure 2 || Full Failure)
    ry detects ti randomly
    ProcessTask(ry, ti)
    END

When a single robot executes a task, partial failure 2 is identical to full failure, because in both cases there is no ally robot that can detect the failure of the malfunctioned robot. Any wandering robot that notices the lack of task progress will start executing the task. If the failure is of type partial failure 1, the malfunctioned robot will auction the task to the team members.
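The single-robot failure handling above reduces to a small dispatch on the failure type. The sketch below is illustrative (the enum and function names are not from the thesis); it encodes the point that partial failure 2 and full failure are handled identically because neither can be self-reported.

```python
# Sketch of the failure dispatch of section 4.3.8 (illustrative names).
# A PARTIAL_1 robot can announce its own failure and re-auction the task;
# PARTIAL_2 and FULL failures cannot, so the stalled task simply waits for
# a wandering robot to rediscover it.
from enum import Enum

class Failure(Enum):
    NONE = 0
    PARTIAL_1 = 1   # robot can detect and communicate its failure
    PARTIAL_2 = 2   # robot cannot detect its own failure
    FULL = 3        # robot death

def handle_single_robot_failure(failure: Failure) -> str:
    if failure is Failure.PARTIAL_1:
        # The failed robot itself re-auctions the task to the team.
        return "auction"
    if failure in (Failure.PARTIAL_2, Failure.FULL):
        # Indistinguishable without an ally: the task is picked up by
        # whichever exploring robot happens to detect the lack of progress.
        return "wait_for_wandering_robot"
    return "continue_task"
```

The identical return value for PARTIAL_2 and FULL makes the equivalence stated in the text explicit.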
4.3.9 Algorithm for Failure of Leader Robot During Cooperative Task Execution

PRE:
•  rx = leader
•  rx can fail at any time during cooperative task execution
POST:
•  ry and rz move ti away from rx

  if (rx failure == Partial Failure 1)
    Notify("rx fails", ry)
    rx shuts down, {R} = {R} – rx. {O} = {O} + rx
    ry = leader
    Auction(ry, ti, {R})
    END
  else if (rx failure == Partial Failure 2)
    rx continuously repeats ProcessTask
    ry reaches threshold on no progress
    ry = leader
    Auction(ry, ti, {R})
    {R} = {R} – rx. {O} = {O} + rx
    END
  else if (rx failure == Full Failure)
    {R} = {R} – rx. {O} = {O} + rx
    ry reaches threshold on no progress
    ry = leader
    Auction(ry, ti, {R})
    END

The leader or the follower robot can fail during cooperative execution of a task. If the failure of the leader robot is of type partial failure 1, the leader notifies the follower robot of the failure; the follower then becomes the auctioneer and announces the task. If the failure of the leader is of type partial failure 2 or full failure, the follower judges the failure from the leader's performance. If the leader robot does not perform as expected after giving instructions to the follower, or does not help in task execution, the follower will recognize the problem and declare that the leader has failed.

The same algorithm, with minor changes, may be applied to the failure of a follower robot. If the follower robot does not perform as expected, the leader robot declares that it has failed.

4.4 Bidding Function

The bidding function determines the most suitable robot among those that possess the capabilities required for the task.
The bid value is a function of the distance between the task and the robot, the orientation of the robot, the obstacles in the path between the robot and the task, the velocity of the robot, and the success rate, as given by:

  β = μ1(ξ)^(-1) + μ2θ + μ3ο + μ4ν + μ5ς          (4.1)

Here μi denotes the weight assigned to each variable according to its relative importance in the bidding function. In equation (4.1), ξ is the Cartesian distance between the robot and the task, θ is the orientation between the task and the robot, ο is the number of obstacles in the path between the robot and the task (provided that the obstacles are in the detection range), and ν is the velocity of the robot. The robot closest to the object is not necessarily able to reach it first, since the reaching time depends on the robot's velocity; a robot that is farther from the object but has a higher velocity may reach the object sooner. ς is the number of successful completions of the particular task by the robot. When the robot successfully completes a task, its success-rate value is incremented by one. A robot with a greater success rate at a specific task stands a better chance of being assigned that task again, as it has already proven its capabilities.

4.5 Simulation Study

The simulation platform used in Chapter 3 for AIS-based multi-object transportation, shown in Figure 3.18, is used here for market-based multi-object transportation as well, and is shown again in Figure 4.1. In this simulation study, the effect of robot failure on system performance is addressed.

Figure 4.1: The multi-robot multi-object simulation platform (robots Green, Red, Black, and Yellow, and the objects to be transported).

Figures 4.2 to 4.11 present the results of robot failures, their effect on system performance, and the robustness of the developed approach. These results are based on the scenario where more than one robot is required to cooperatively transport an object.
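The bidding function of equation (4.1) above can be sketched directly. The weight values below are illustrative placeholders, not values from the thesis, and the terms enter additively with positive weights exactly as written in the equation; the relative importance of each variable is carried entirely by the weights μi.

```python
# Sketch of the bidding function of equation (4.1). The weights mu are
# illustrative placeholders; the thesis does not fix their values here.
def bid_value(distance: float, orientation: float, obstacles: int,
              velocity: float, successes: int,
              mu=(1.0, 0.5, 0.5, 1.0, 1.0)) -> float:
    """beta = mu1*(xi)**-1 + mu2*theta + mu3*o + mu4*nu + mu5*sigma."""
    mu1, mu2, mu3, mu4, mu5 = mu
    return (mu1 / max(distance, 1e-9)   # inverse distance: closer bids higher
            + mu2 * orientation
            + mu3 * obstacles
            + mu4 * velocity
            + mu5 * successes)
```

All else being equal, a closer robot, a faster robot, or a robot with more successful completions of the task produces a larger bid and therefore wins the auction.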
Both partial and full failures are studied here. Figures 4.2 and 4.3 show the results for failure of the auctioneer robot during coordination, before transportation. The auctioneer robot monitors the movements of the winner robot until it reaches the task location. While moving towards the task, the winner robot knows nothing about the health of the auctioneer robot. Once it reaches the object, the winner robot sends an arrival message to the auctioneer robot in order to start the transportation process. On not receiving a reply or not observing the requested action, the winner robot assumes that either the auctioneer robot has failed or two robots are not adequate to transport the object. In either case, the winner robot becomes the auctioneer and re-auctions the task.

Figure 4.2 shows the effect of auctioneer failure on execution time (steps). It is evident from the figure that in the absence of any failure the auctioneer and winner robots spend the least time (steps) reaching the object to start cooperative transportation. In the case of partial failure 1, the time taken is less than in both partial failure 2 and full failure, since the auctioneer robot can communicate its failure to the winner robot. The time taken is largest in the case of partial failure 2, because the communication link of the malfunctioned robot is operational and thus it can still communicate with the winner robot, which misleads the winner robot. Due to this incorrect impression, the winner robot takes more time to reach the motivation threshold before declaring that the auctioneer robot has failed. If the auctioneer robot dies, the system copes with this failure as well, by auctioning the task to replace the dead robot. Figure 4.3 shows the communication burden on the system due to partial and full failure of the auctioneer robot.
Here, the messages include the task announcement by the auctioneer robot, the bids sent by other capable robots to the auctioneer, the winner message sent by the auctioneer to the robot making the highest bid, the messages sent by the winner robot to the auctioneer from time to time until it reaches the task, and the inter-robot communication needed to choose a replacement for a failed auctioneer. When there is no failure, the messages are at a minimum. In partial failure 2 or full failure, the number of messages increases, as the winner robot keeps sending messages to the auctioneer until the motivation level reaches the threshold value. Once the auctioneer robot is declared failed, the winner robot becomes the auctioneer and re-auctions the task. The entire process of selecting a winner robot starts again, which places an extra communication burden on the system.

Figure 4.2: Effect on execution time (steps) due to partial or full failure of auctioneer robot. (Mean timesteps before transportation: no failure, 680; partial failure 1, 1050; partial failure 2, 3329; full failure, 1965.)

Figure 4.3: Communication burden due to partial or full failure of auctioneer robot. (Mean messages before transportation: no failure, 594; partial failure 1, 638; partial failure 2, 1673; full failure, 905.)

Figures 4.4 and 4.5 show the results when the winner robot fails before it reaches the object that is to be transported cooperatively. Figure 4.4 shows the time (steps) incurred under the different types of failure.
The time (steps) taken in partial failure 1 is less, since the malfunctioned winner robot communicates its failure to the team members. However, in partial failure 2 and in full failure the time taken is greater, since the auctioneer robot has to wait until the motivation level reaches the threshold value before declaring that the winner robot has failed. Figure 4.5 shows the communication burden on the system due to partial or full failure of the winner robot. Here, the messages include the task announcement by the auctioneer robot, the bids sent by other robots to the auctioneer, the winner message sent by the auctioneer to the winner robot, the messages sent by the winner robot to the auctioneer before and after failure, and the additional messages involved in selecting a new winner once the previous one is declared failed. As the figure shows, the communication burden increases with failure. However, the system manages to cope with all the failures, at the expense of extra time (steps) and communication burden.

Figure 4.4: Effect on execution time (steps) due to partial or full failure of winner robot. (Mean timesteps before transportation: no failure, 680; partial failure 1, 858; partial failure 2, 1955; full failure, 1539.)

Figure 4.5: Communication burden due to partial or full failure of winner robot. (Mean messages before transportation: no failure, 594; partial failure 1, 577; partial failure 2, 1796; full failure, 613.)

Figures 4.6 to 4.9 show robot failure during cooperative transportation of an object. It is pertinent to mention that one of the cooperating robots is the leader and the other is the follower.
The leader robot guides the follower whenever necessary until the task is completed (the object is transported to the goal location). All communication is one way, from the leader to the follower. Figures 4.6 and 4.7 show leader robot failure during cooperative transportation. As indicated in Figure 4.6, the robots take more time (steps) to transport an object when the leader fails during cooperative transportation. This is because the follower robot takes time to make sure that the leader has failed. After declaring the failure of the leader, the follower becomes the auctioneer and auctions the task to the other team members to replace the leader, which also takes time. Figure 4.7 shows that when there is no failure, the leader and follower robots send on average 357 messages to each other while transporting the object. These messages comprise those sent by the leader to the follower for synchronizing the pushing action, rotating the object, and avoiding obstacles during transportation. When there is a failure, the number of messages increases. In partial failure 2 in particular, the messages increase because the malfunctioned leader continues to send messages to the follower until the follower's motivation level reaches the threshold and the follower declares that the leader has failed. Note that the messages in full failure are far fewer than in partial failure 2, since once the leader fully fails no more messages are sent. However, whether the failure of the leader is partial or full, the follower robot has to start a new auction process to select a winner robot to replace the failed leader, which requires messaging between robots.
Figure 4.6: Effect on execution time (steps) due to partial or full failure of leader robot. (Mean timesteps during transportation: no failure, 455; partial failure 1, 818; partial failure 2, 3291; full failure, 1985.)

Figure 4.7: Communication burden due to partial or full failure of leader robot. (Mean messages during transportation: no failure, 357; partial failure 1, 381; partial failure 2, 1917; full failure, 390.)

Figures 4.8 and 4.9 show follower robot failure during cooperative transportation. As the follower robot does not send any messages to the leader robot, there is no distinction here between partial failure 1 and partial failure 2. When the follower robot does not follow the leader's commands and the motivation level reaches the threshold, the leader robot declares that the follower has died. The leader then re-auctions the task to get the help of another follower robot. Figures 4.8 and 4.9 show the time (steps) and the communication burden, respectively, with and without failure. The results show that the system can cope with the failure of any robot, whether leader or follower, during cooperative transportation of an object, though doing so takes more time (steps) and places an extra communication burden on the system.

Figure 4.8: Effect on execution time (steps) due to failure of follower robot. (Mean timesteps during transportation: no failure, 455; failure, 2025.)
Figure 4.9: Communication burden due to failure of follower robot. (Mean messages during transportation: no failure, 357; failure, 1578.)

When a robot in the explore state finds an object and is able to transport it alone, failure can occur only during the transportation process. In that case partial failure 2 and full failure are identical, as there is no ally robot that can judge the failure; another robot in the explore state eventually finds the stationary object and starts transporting it. In the case of partial failure 1, the failed robot informs the other robots in the explore state about the object. The robot chosen on the basis of its capability and bid approaches the object, takes the place of the failed robot, and transports the object.

Figures 4.10 and 4.11 show the results for transportation of an object by a single robot. As indicated in Figure 4.10, when there is no failure the robot takes the least time (steps) to transport the object. In full failure it takes more time than in partial failure, as the malfunctioned robot cannot communicate its failure to its teammates. Figure 4.11 shows the communication burden due to failure of a robot during the transportation process. It is evident from the figure that no communication burden is incurred under full failure or no failure. This is because a single robot transports the object and there are no cooperative transportation messages. The figure shows messages for partial failure only, since in that case the robot is able to communicate its failure to its teammates. The robots in the explore state then submit bids for the task, and the task is awarded to the highest bidder, which takes the place of the malfunctioned robot.
Figure 4.10: Effect on execution time (steps) due to partial or full failure of a robot during transportation of an object by a single robot. (Mean timesteps during transportation: no failure, 455; partial failure 1, 801; full failure, 1674.)

Figure 4.11: Communication burden due to partial or full failure of a robot during transportation of an object by a single robot. (Mean messages during transportation under partial failure: 34.)

4.5.1 Discussion

Market-based approaches can be more centralized or more distributed, depending on the nature of the task and the environment. A more centralized approach produces more optimal results but can be less robust and fault tolerant; a more distributed approach can produce a robust but suboptimal solution. This trade-off can be tuned to the particular situation. The approach developed in this chapter is predominantly distributed but also has centralized subgroups to improve efficiency. Unlike other market-based approaches for multi-robot cooperation (Gerkey and Mataric, 2002a, 2002b), the failure of the leader or auctioneer robot in this market-based approach does not cripple, or even adversely affect, the system.

4.6 AIS- Versus Market-Based Multi-Robot Cooperation

The AIS-based control framework and the market-based algorithm for multi-robot cooperation developed in this thesis are now compared. The purpose of this comparison is not to advocate exclusive use of the AIS-based framework in all situations of multi-robot cooperation, but to make an objective evaluation of the performance of the AIS approach against the well-established market-based architecture.
This section examines the two developed approaches, focusing on two characteristics: the quality of the solution and the communication requirements. It would also be possible to evaluate the computational requirements, but since modern robots have significant on-board processing capability and can easily work in parallel, this criterion is not investigated here.

The solution quality is a competitive factor that bounds the performance of an algorithm as a function of the optimal solution (Gerkey and Mataric, 2004). The solution quality of an approach depends strongly on a number of factors, such as the set of robots, the task, and the operating environment. Its determining variables vary from task to task; for example, accuracy determines solution quality in a cooperative assembly task, while successful transportation of objects to a goal location within a given time is the relevant measure in an object transportation task. Communication requirements consist of the total number of inter-robot messages sent over the network. In this comparison the size of the messages is not considered, as it is the same in both approaches. In addition, a perfect shared broadcast communication medium is assumed.

4.6.1 Common Elements

To enable a fair comparison between the AIS- and market-based approaches for multi-robot cooperation, the systems are set up so as to ensure the following:

•  The same simulation platform is used in both approaches.
•  The variables of the binding affinity in the AIS approach and of the bidding function in the market-based approach are the same.
•  The same strategy is used in both approaches to detect the different types of failure. The formulation of the threshold for the stimulation level in the AIS approach and for the motivation level in the market-based approach is the same.
•  The message size is considered the same in both approaches.
•  An upper bound is enforced on the time (steps) and the messages, in order to remove outliers during simulation. The number of messages and the time interval overshoot when a robot is trapped or stuck in a corner or between obstacles, which drastically inflates the mean time interval and the mean number of messages. Hence messages or time intervals that cross the upper bound are removed by the code. This ensures a fair comparison of the two approaches with respect to the time consumed and the communication burden on the network in completing a task.

4.7 Comparative Analysis

Having developed the AIS- and market-based approaches to autonomous and fault-tolerant multi-robot cooperation, they are now objectively compared, focusing on the solution quality and the communication requirements.

4.7.1 Solution Quality

The solution quality of the two approaches is the speed with which the entire task (the transportation of all objects) is completed. Figures 4.12 to 4.19 show comparative analyses of the time (steps) taken to complete the task with and without failures. In the failure cases, the results are based on a scenario where failures are introduced while the robots handle two subtasks (the transportation of two objects), and the remaining subtasks are completed without failure during one round of simulation.

Figure 4.12 compares the time taken in the two approaches during the coordination process of selecting a suitable partner to start cooperative execution of the task. It can be seen that the AIS-based approach takes less time than the market-based approach. This is because in the market-based approach all the bids go to the auctioneer: the auctioneer has to wait for a specific length of time before evaluating the bids, then decides the winner and sends a winner message to the winner robot.
In contrast, in the AIS-based approach there is no leader; once the initiating robot broadcasts the help messages, each individual robot determines whether it is the winner, and the winner then approaches the task. The wait time in the AIS-based approach is therefore minimal compared to the market-based approach.

Figure 4.12: Comparison of time taken during the coordination process of selecting suitable partners. (Mean timesteps before transportation: immune-based, 563; market-based, 680.)

Figures 4.13 and 4.14 show the results for partial and full failure of the auctioneer/initiating robot, and Figures 4.15 and 4.16 show the results for partial and full failure of the winner/helper robot. It is clear from the curves that the AIS-based approach takes less time (steps) under both partial and full failures. The reason is that in the market-based approach, once a robot fails, a new auction process is started; all the robots that bid for the task must wait until the auctioneer announces the winner, which affects the efficiency of the overall system.

Figure 4.13: Comparison of time taken due to partial failure of auctioneer/initiator robot. (Mean timesteps before transportation: immune-based partial 1, 933; market-based partial 1, 1050; immune-based partial 2, 2936; market-based partial 2, 3329.)

Figure 4.14: Comparison of time taken due to full failure of auctioneer/initiator robot. (Mean timesteps before transportation: immune-based, 1406; market-based, 1965.)
Figure 4.15: Comparison of time taken due to partial failure of winner/helper robot. (Mean timesteps before transportation: immune-based partial 1, 710; market-based partial 1, 858; immune-based partial 2, 1788; market-based partial 2, 1955.)

Figure 4.16: Comparison of time taken due to full failure of winner/helper robot. (Mean timesteps before transportation: immune-based, 1475; market-based, 1539.)

Figure 4.17 shows the time taken during the transportation of the objects. Here, the market-based approach takes less time than the AIS-based approach, because in the market-based approach the leader robot guides the follower robot according to the environmental situation at the time, and the follower slavishly follows the leader's orders. In contrast, in the AIS-based approach each robot acts independently during transportation; for example, any robot that detects an obstacle in the path guides its collaborator robot to avoid it. However, if both robots detect the same obstacle, or different obstacles, at the same time, each tries to guide the other, which creates a conflict and wastes time in resolving it.

Figure 4.17: Comparison of time taken during object transportation. (Mean timesteps during transportation: immune-based, 486; market-based, 455.)

Figures 4.18 and 4.19 show the results for partial and full failure of the leader robot (market-based) or of either of the cooperating robots (AIS-based) during transportation of the objects.
Though the AIS-based approach performed slightly better than the market-based approach, there is no significant difference between the times taken by the two approaches. This is because the follower robot can detect the failure of the leader robot very quickly, which compensates for the extra time taken in selecting a replacement for the malfunctioning leader robot.

The time (steps) incurred due to failure of the follower robot during cooperative transportation of the objects cannot be compared, as there is no follower robot in the AIS-based approach.

Figure 4.18: Comparison of time taken due to partial failure of the leader robot (market-based) or of a cooperating robot (AIS-based) (immune-based means = 803 and 3255 time steps for partial failures 1 and 2; market-based means = 818 and 3291 time steps).

Figure 4.19: Comparison of time taken due to full failure of the leader robot (market-based) or of a cooperating robot (AIS-based) (immune-based mean = 1967 time steps; market-based mean = 1985 time steps).

4.7.2 Communication Requirement

The number of inter-robot messages required to handle a task determines the communication burden on the network. Figures 4.20 to 4.26 present a comparison of the communication burden incurred to complete the task with and without failures. Figure 4.20 compares the inter-robot messages incurred by the two approaches during the coordination process of selecting a suitable partner to start cooperative execution of the task.
The market-based system sent slightly more messages over the network than the AIS-based system, because the market-based approach uses slightly more messages to select a winner robot.

Figure 4.20: Comparison of communication burden during the coordination process of selecting suitable partners (immune-based mean = 572 messages; market-based mean = 594 messages).

Figures 4.21 and 4.22 present the messages incurred due to partial and full failure of the auctioneer/initiator robot, and Figures 4.23 and 4.24 show the communication burden due to partial and full failure of the winner/helper robot. Generally, the market-based approach incurs marginally more messages than the AIS-based approach in choosing the winner robot, so its communication burden is slightly greater; the difference may, however, increase or decrease under varying conditions. It is clear from the results that the market-based system consumed more messages than the AIS-based system. Here, the messages include those due to partial and full failure of the auctioneer/initiating robot, both before and after choosing the winner/helping robot. Failure of the auctioneer robot after it auctions the task but before it announces the winner incurs additional messages to replace the malfunctioning auctioneer robot (see Section 4.3.5). In the AIS-based system, by contrast, failure of the initiating robot before choosing a helping robot does not incur an extra communication burden. There is again only a slight difference between the two approaches in the number of messages incurred due to failure of the winner/helper robot.
Figure 4.21: Comparison of communication burden due to partial failure of the auctioneer/initiator robot (immune-based means = 582 and 1593 messages for partial failures 1 and 2; market-based means = 638 and 1673 messages).

Figure 4.22: Comparison of communication burden due to full failure of the auctioneer/initiator robot (immune-based mean = 762 messages; market-based mean = 905 messages).

Figure 4.23: Comparison of communication burden due to partial failure of the winner/helper robot (immune-based means = 622 and 1717 messages for partial failures 1 and 2; market-based means = 577 and 1796 messages).

Figure 4.24: Comparison of communication burden due to full failure of the winner/helper robot (immune-based mean = 575 messages; market-based mean = 613 messages).

Figures 4.25 and 4.26 show the communication burden due to partial and full failure of the leader robot (market-based) or of any of the cooperating robots (AIS-based) during the transportation of the objects. It is evident from the two figures that the communication burden of the two approaches is almost the same; the few extra messages in the market-based approach are due to re-auctioning of the task.
Figure 4.25: Comparison of communication burden due to partial failure of the leader robot (market-based) or of a cooperating robot (AIS-based) (immune-based means = 376 and 1886 messages for partial failures 1 and 2; market-based means = 381 and 1917 messages).

Figure 4.26: Comparison of communication burden due to full failure of the leader robot (market-based) or of a cooperating robot (AIS-based) (immune-based mean = 386 messages; market-based mean = 390 messages).

4.7.3 Discussion

The results given in this chapter show that the developed AIS-based approach performs better in both solution quality and communication requirement during the phase of autonomous selection of the helping robot. However, the market-based approach takes less time (steps) during cooperative execution of the task. The communication burden during cooperative object transportation is almost the same for the two approaches, with the market-based approach incurring slightly more messages than the AIS-based approach.

4.8 Summary

In this chapter, an auction-based approach was developed for autonomous multi-robot cooperation. Cooperation was not planned beforehand; rather, it was invoked by the system when required. Computer simulations were used for proof of concept and to evaluate system performance. Both partial and full failures were introduced into the robots at different stages of task execution to demonstrate the robustness of the approach. Simulations were carried out on a team of heterogeneous robots transporting objects to a goal location.
The results showed that the developed method was able to successfully complete the desired task in an unknown environment with dynamic and static obstacle distributions. Furthermore, a comparison was made between the AIS-based and auction-based approaches for multi-robot cooperation. The comparison showed that the AIS-based approach was slightly better than the market-based approach with respect to solution quality and communication burden.

Chapter 5
Object Pose Estimation

5.1 Overview

Real-time pose estimation is a fundamental requirement in multi-robot cooperation for object transportation. Although there has been substantial growth in research on the pose estimation of robots, the pose estimation of objects has not received the attention it needs. Estimation of the Cartesian coordinates (x, y) and the orientation (θ) of an object can be used in diverse applications such as object grasping and robotic manipulation, motion prediction, and object transportation. This chapter presents an approach for acquiring meaningful distance and orientation data on an object using the laser range finder of a mobile robot, in order to calculate the pose (center location and orientation) of the object with respect to the robot in a global coordinate system. The developed methodology is an integral part of the present project on multi-robot cooperative object transportation. In the application domain, a group of mobile robots explores the environment for useful objects. Once an object is detected, an appropriate robot estimates the object pose, which includes the center location and the orientation of the object. If the object of interest is heavy and cannot be transported by a single robot, a team of robots in the multi-robot system is called upon to transport it cooperatively. There are several methods for measuring the pose of a robot or an obstacle; for example, using digital cameras, sonar, or laser range finders.
However, most multi124  robot systems employ digital cameras for this purpose, which offer three key advantages. First, a digital image provides a rich source of information on multiple moving objects simultaneously in the operating environment. Second, there is the possibility to build accurate vision subsystems at low cost. Third, robot cameras observe and understand their operating environment in a “natural” manner similar to how humans use their eyes to observe the world. However, numerous obstacles exist in employing a computer vision system. First, since the object detection has to be done through feature extraction from the image, if the lighting conditions of the environment change, the result can become inaccurate. Second, in the image capturing process, hidden features of the image will not be present in the image. Therefore, it will be difficult to detect an object occluded by another object. Third, different light sources may have different light intensity levels from different directions in the background. In a practical environment, with different light sources and lighting directions, identifying an object can become difficult. Fourth, object detection using digital cameras becomes challenging when the actual orientation or the extracted features are different from those employed in the training process. Fifth, during image processing with large amounts of training data, computational processing power may not be adequate, and this will affect the overall performance of the system. In a multi-robot object transportation task, multiple mobile robots move quickly in an unpredicted manner, and the vision system needs to capture the positions and orientations in a very short time. Therefore, the conventional vision algorithms which are timeconsuming are not feasible here. These algorithms are too complex and computationally demanding for meeting real-time constraints in a multi-robot object transportation system.  
On the other hand, in an object transportation task the mobile robots move in a large area with different levels of illumination. Multi-robot systems usually work in large, unstructured, and unknown environments with uneven lighting conditions, and the robots move into and out of sub-areas having different brightness levels. To track the robots effectively in such an environment, the vision system must be robust with respect to different illumination conditions; however, most existing algorithms do not consider this problem. Object recognition and pose estimation are essential for an object transportation system with mobile robots. Therefore, computer vision alone is not an adequate solution for the detection of objects and obstacles, and sensor fusion may be used as an alternative to overcome the associated problems. This chapter investigates an approach for pose (position and orientation) estimation of an object using a laser range finder and a CCD camera mounted on a mobile robot. Figure 5.1 presents the general scheme of object pose estimation for cooperative object transportation. According to the proposed approach, successful transportation comprises five steps, as outlined below:

1) Initialization of all robots: the local localization of each robot is transformed into a global localization.

2) Searching for an object: the robots start searching for an object of interest to be transported to a goal location.

3) Rearranging of robot pose: once a robot finds a color-coded object, it rearranges its pose to center the color blob in the camera frame.

4) Object pose estimation: the robot estimates the pose of the object to determine a suitable point of contact for transporting it.

5) Transportation of the object: the robot transports the object by itself if capable, or seeks help from another robot.
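The five steps above can be sketched as a simple decision routine. This is a toy illustration of our own; the function name and the capacity threshold are assumptions, not values from the thesis.

```python
# A minimal sketch of the five-step transportation scheme of Figure 5.1.
# CAPACITY_KG and all names are invented for illustration only.

CAPACITY_KG = 5.0

def mission_step(object_found, object_weight_kg):
    """Decide the next action for one pass through the search/transport loop."""
    if not object_found:
        return "wander"                  # steps 1-2: localized robot keeps searching
    if object_weight_kg <= CAPACITY_KG:
        return "transport alone"         # steps 3-5: center blob, estimate pose, carry
    return "request help"                # step 5: object too heavy, seek a helper

print(mission_step(False, 0.0))          # wander
print(mission_step(True, 3.0))           # transport alone
print(mission_step(True, 12.0))          # request help
```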
Figure 5.1: General scheme of object pose estimation for cooperative object transportation.

This chapter is organized as follows: Section 5.2 describes the test bed in the laboratory. Section 5.3 presents the method of robot global localization. Section 5.4 outlines color blob tracking for object recognition. Section 5.5 presents object pose estimation. An experimental evaluation of object pose estimation is given in Section 5.6. Section 5.7 concludes the chapter with a brief discussion of the main contributions.

5.2 Test Bed

The test bed utilized in the present research comprises ActivMedia Pioneer P3-DX robots and objects of different dimensions (see Appendix). The rugged P3-DX robot has a 44 cm × 38 cm × 22 cm aluminum body with two 16.5 cm drive wheels. It includes wireless Ethernet, eight sonar sensors, a gyro, a CCD camera, and a laser range finder. Figure 5.2 shows a P3-DX robot. The laser range finder in the present system is a SICK LMS 200 2D scanner, which has a horizontal range of 180° with a maximum resolution of 0.5°. The device produces the range estimate from the time required for the light to reach the target and return.

Figure 5.2: Pioneer P3-DX robot.

5.3 Global Localization of Robot

Global pose estimation is required since there is more than one robot in the environment of cooperative object transportation. An important means of estimating the pose of a robot is odometry. This method uses data from the shaft encoder mounted on a wheel of the robot. The encoder measures the speed and displacement of the drive wheel over a specific time step, and the result is added to the pose of the robot at the previous time step. With reference to Figure 5.3, the kinematic model of the mobile robot is expressed by the relations presented next.
The state vector representing the pose of the mobile robot in the global frame is given by q(k) = [x(k) y(k) θ(k)]ᵀ, where:

x(k) and y(k) are the coordinates of position P in mm, and θ(k) is the orientation in degrees;
D(k) is the distance travelled between time steps k and k+1;
v_t(k) is the robot translational speed in mm/s;
T is the sampling time in seconds;
θ(k) is the angle between the robot heading and the global x-axis;
Δθ(k) is the rotation angle between time steps k and k+1;
ω_L(k) and ω_R(k) are the angular velocities of the left and right wheels, respectively;
r is the radius of the two drive wheels; and
d is the distance between the two wheels.

Figure 5.3: Global pose estimation of a wheeled robot.

The kinematic model of the mobile robot is given by:

x(k+1) = x(k) + D(k) cos θ(k+1)    (5.1)

y(k+1) = y(k) + D(k) sin θ(k+1)    (5.2)

θ(k+1) = θ(k) + Δθ(k)    (5.3)

D(k) = v_t(k) T    (5.4)

Δθ(k) = ω(k) T    (5.5)

v_t(k) = (ω_L(k) r + ω_R(k) r) / 2    (5.6)

ω(k) = (ω_R(k) r − ω_L(k) r) / d    (5.7)

It follows that the updated pose state vector is q(k+1) = [x(k+1) y(k+1) θ(k+1)]ᵀ. By substituting equations (5.4) through (5.7) into equations (5.1) through (5.3), the global pose q(k+1) of the wheeled robot is computed as:

           | x(k) |   | (T r (ω_L(k) + ω_R(k)) / 2) cos( θ(k) + T r (ω_R(k) − ω_L(k)) / d ) |
q(k+1) =   | y(k) | + | (T r (ω_L(k) + ω_R(k)) / 2) sin( θ(k) + T r (ω_R(k) − ω_L(k)) / d ) |    (5.8)
           | θ(k) |   |  T r (ω_R(k) − ω_L(k)) / d                                           |

5.4 Color Blob Tracking

The next step of the approach, as presented in this section, entails detection of the object to be transported with the help of a color blob attached to its vertical surface. For this purpose, the Advanced Color Tracking System (ACTS) capability is employed.
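Before moving on, the odometry update of equations (5.1) through (5.8) in Section 5.3 can be sketched in code. This is a minimal illustration of our own; the wheel radius and track values below are assumptions for the example, not thesis parameters.

```python
import math

# One dead-reckoning step per equations (5.1)-(5.8): wheel angular speeds
# (rad/s) to the new global pose. Units follow the thesis: mm and seconds,
# with theta in radians internally.

def odometry_update(x, y, theta, omega_l, omega_r, r, d, T):
    v_t = (omega_l * r + omega_r * r) / 2.0    # eq (5.6): translational speed
    omega = (omega_r * r - omega_l * r) / d    # eq (5.7): rotational speed
    d_trav = v_t * T                           # eq (5.4): distance travelled
    d_theta = omega * T                        # eq (5.5): rotation increment
    theta_new = theta + d_theta                # eq (5.3)
    x_new = x + d_trav * math.cos(theta_new)   # eq (5.1)
    y_new = y + d_trav * math.sin(theta_new)   # eq (5.2)
    return x_new, y_new, theta_new

# Equal wheel speeds give straight-line motion along the current heading.
# r = 82.5 mm (16.5 cm wheel / 2); d = 330 mm is an assumed wheel separation.
x, y, th = odometry_update(0.0, 0.0, 0.0, omega_l=2.0, omega_r=2.0,
                           r=82.5, d=330.0, T=0.1)
print(round(x, 1), round(y, 1), round(th, 3))  # 16.5 0.0 0.0
```

With unequal wheel speeds the heading term Tr(ω_R − ω_L)/d becomes nonzero, reproducing the turning behavior of equation (5.8).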
ACTS is a software tool which, in combination with a color camera, allows the application to track up to 320 colored objects at a speed of 30 frames per second. The robots explore the surrounding environment within their sensory range, and a robot receives a stimulus if it locates an object carrying the color blob. Upon locating the object, the robot rearranges its pose so as to move the color blob of the object to the center of the camera frame. As illustrated in Figure 5.4, the camera frame is divided into four quadrants. Here, the size of the camera frame is 640 × 480 pixels, while the center of the camera frame is represented by a square of 40 × 40 pixels. If the color blob falls within this square, it is considered to be in the center; otherwise the robot rotates its base so that the detected color blob is approximately located on the center line of the camera frame. Centering the color blob in the camera frame provides a suitable position for the laser range finder to scan the object for pose estimation.

Figure 5.4: Division of camera frame into four quadrants.

5.5 Object Pose Estimation

As noted in the previous section, once the robot rearranges its pose to center the color blob within the camera frame, the laser range finder mounted on the robot is activated to estimate the pose of the object. Figure 5.5 (a) indicates how the laser range finder measures range. Specifically, the distance is determined by calculating the phase difference between the incident and reflected signals. If the transit time from A to B is T, the phase difference is φ, and the modulation frequency is f_m, then the distance L between A and B is given by L = cφ/(2πf_m), where c is the speed of light (Gao and Xiong, 2006). The pose of the object is estimated relative to the robot and is then transformed into a global pose estimate. This is needed because the robot may have to communicate the object pose to other robots in order to transport the object cooperatively.
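The phase-difference range formula quoted above can be illustrated with a toy calculation; the modulation frequency and phase values here are invented for the example.

```python
import math

# Toy evaluation of L = c * phi / (2 * pi * f_m) from the text.

C = 3.0e8  # speed of light, m/s

def range_from_phase(phi_rad, f_m_hz):
    """Distance corresponding to a measured phase shift phi (rad) at
    modulation frequency f_m (Hz), per the formula in the text."""
    return C * phi_rad / (2.0 * math.pi * f_m_hz)

# With an assumed f_m = 10 MHz, a phase shift of pi/2 corresponds to:
print(round(range_from_phase(math.pi / 2, 10e6), 3))  # 7.5 (metres)
```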
Figure 5.5 (a): Schematic drawing of the laser range sensor.

Figure 5.5 (b): A 180-degree laser range sensor.

5.5.1 Relative Pose Estimation

With reference to Figure 5.6, the distance and angle data acquired through the laser range finder are used to calculate the center location and the orientation of the object relative to the robot coordinate system. Note that α and β are the angles corresponding to the distances d_1 and d_2, respectively. Also, d_1 = AO_R and d_2 = BO_R; Y_R, O_R, and X_R define the robot coordinate frame; and d_1 and d_2 are the distances from the laser range finder to two distinct edges of the object. We have:

d_1 cos α = O_R X_2    (5.9)

d_2 cos β = O_R X_1    (5.10)

d_1 sin α = AX_2    (5.11)

d_2 sin β = BX_1    (5.12)

By using equations (5.9) through (5.12), the angle that represents the orientation of the object is given by:

θ′ = tan⁻¹[ (d_2 sin β − d_1 sin α) / (d_1 cos α − d_2 cos β) ]    (5.13)

The object orientation can be found by using equation (5.13). The center point O_B of the object is given by:

X_C = (1/2) [ d_1 cos α + d_2 cos β + (AD/AB)(d_2 sin β − d_1 sin α) ]    (5.14)

Y_C = (1/2) [ d_1 sin α + d_2 sin β + (AD/AB)(d_1 cos α − d_2 cos β) ]    (5.15)

where AB and AD denote the corresponding side lengths of the object (see Figure 5.6). Equations (5.13) through (5.15) give the object pose relative to the robot pose, which can be written as the vector:

O_P = [X_C  Y_C  θ′]    (5.16)

Equation (5.16) describes the object with respect to the mobile robot and is called the relative pose.

Figure 5.6: Relative object pose estimation.

5.5.2 Object Global Pose Estimation

A homogeneous transformation matrix, which represents the translational and rotational motion of one frame with respect to another, is applied to relate the three coordinate systems: the robot coordinate system, the object coordinate system, and the global coordinate system.
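Before composing the transforms, the relative-pose computation of equations (5.13) through (5.15) can be sketched numerically. This is an illustration of our own: the geometry values are invented, and the sign conventions follow the thesis equations verbatim.

```python
import math

# Numerical sketch of eqs (5.13)-(5.15) of Section 5.5.1: two laser returns
# (d1, alpha) and (d2, beta) to the edge points A and B, plus the object side
# AD, give the orientation and center of the object in the robot frame.
# Example geometry (a 500 mm face at 30 degrees, AD = 300 mm) is invented.

def relative_pose(d1, alpha, d2, beta, AD):
    ax, ay = d1 * math.cos(alpha), d1 * math.sin(alpha)  # edge A, eqs (5.9), (5.11)
    bx, by = d2 * math.cos(beta), d2 * math.sin(beta)    # edge B, eqs (5.10), (5.12)
    AB = math.hypot(bx - ax, by - ay)                    # scanned face length
    theta_p = math.atan((by - ay) / (ax - bx))           # eq (5.13)
    xc = 0.5 * (ax + bx + (AD / AB) * (by - ay))         # eq (5.14)
    yc = 0.5 * (ay + by + (AD / AB) * (ax - bx))         # eq (5.15)
    return xc, yc, theta_p

# Synthesize laser readings for a known layout: A = (200, 1000) mm and
# B = A + 500*(cos 30, sin 30) mm in the robot frame.
ax, ay = 200.0, 1000.0
bx = ax + 500.0 * math.cos(math.radians(30))
by = ay + 500.0 * math.sin(math.radians(30))
d1, alpha = math.hypot(ax, ay), math.atan2(ay, ax)
d2, beta = math.hypot(bx, by), math.atan2(by, bx)
xc, yc, th = relative_pose(d1, alpha, d2, beta, AD=300.0)
print(round(xc, 1), round(yc, 1), round(th, 3))  # 491.5 995.1 -0.524
```

Note that the recovered θ′ follows the sign convention of equation (5.13), and the AD/AB offset places the center on the side of the face dictated by Figure 5.6.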
The homogeneous transformation matrix between the object coordinate system and the robot coordinate system may be expressed using the result obtained in Section 5.5.1:

       | cos θ′  −sin θ′  X_C |
T′ =   | sin θ′   cos θ′  Y_C |    (5.17)
       |   0        0      1  |

The robot global pose is known from equation (5.8) in Section 5.3. The homogeneous transformation matrix between the robot coordinate system and the global coordinate system may be established as:

       | cos θ  −sin θ  X |
T″ =   | sin θ   cos θ  Y |    (5.18)
       |   0       0    1 |

By using equations (5.17) and (5.18), the homogeneous transformation matrix T between the object coordinate system and the global coordinate system is computed as:

             | cos(θ+θ′)  −sin(θ+θ′)  X_C cos θ − Y_C sin θ + X |
T = T″ T′ =  | sin(θ+θ′)   cos(θ+θ′)  X_C sin θ + Y_C cos θ + Y |    (5.19)
             |     0           0                  1             |

From equation (5.19), the pose of the object in the global coordinate system is determined as:

       | x |   | X_C cos θ − Y_C sin θ + X        |
O″ =   | y | = | X_C sin θ + Y_C cos θ + Y        |    (5.20)
       | θ |   | arctan( sin(θ+θ′) / cos(θ+θ′) )  |

5.6 Experimental Validation

In this section the experimental results are presented using the test bed described in Section 5.2 (also see the Appendix). Experiments are conducted under two different situations. In the first set of experiments, the robot is kept stationary and the objects are placed at different positions and orientations. In the second set, the robots move, and once an object is detected by a robot, the robot calculates the pose of the detected object. In both sets of experiments the objects are placed in different poses, as indicated in Figure 5.7.

Figure 5.7: Arbitrary layout of objects.

5.6.1 Experiments with the Stationary Robot

Two sets of experiments are conducted in this category.
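For reference, the global pose computation of equations (5.17) through (5.20) can be sketched as follows. This is a minimal illustration of our own; the numerical values are invented.

```python
import math

# Composing the robot->global transform with the object->robot transform,
# per eqs (5.17)-(5.20). Rather than multiplying full 3x3 matrices, the
# closed-form entries of eq (5.20) are evaluated directly.

def object_global_pose(robot_pose, rel_pose):
    """robot_pose = (X, Y, theta) in the global frame, from eq (5.8);
    rel_pose = (X_C, Y_C, theta_prime) in the robot frame, from eq (5.16)."""
    X, Y, theta = robot_pose
    XC, YC, theta_p = rel_pose
    x = XC * math.cos(theta) - YC * math.sin(theta) + X  # eq (5.20), first row
    y = XC * math.sin(theta) + YC * math.cos(theta) + Y  # eq (5.20), second row
    th = math.atan2(math.sin(theta + theta_p),
                    math.cos(theta + theta_p))           # wrapped orientation
    return x, y, th

# Robot at (1000, 500) mm heading 90 degrees; object 300 mm ahead of the
# robot in its own frame, aligned with the robot.
x, y, th = object_global_pose((1000.0, 500.0, math.pi / 2),
                              (300.0, 0.0, 0.0))
print(round(x, 1), round(y, 1), round(th, 3))  # 1000.0 800.0 1.571
```

The atan2 form of the third row wraps the summed angle into (−π, π], matching the arctan expression of equation (5.20) while avoiding the division by cos(θ+θ′).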
In the first set of experiments, the robot is placed at the global pose [0, 100, 0], which gives the x-coordinate, the y-coordinate, and the orientation θ, respectively. The seven test points in Table 5.1 represent an object of dimensions 525 mm × 257 mm × 446 mm placed in the workspace at seven different positions and orientations. First the actual center of each test point is measured, and it is then compared with the center estimated through the laser range finder mounted on the robot. Figures 5.8, 5.9, and 5.10 correspond to the information in Table 5.1. Specifically, Figure 5.8 gives the difference between the actual and estimated values of the x-coordinate of the center of the object, Figure 5.9 gives the corresponding difference for the y-coordinate, and Figure 5.10 gives the difference between the actual and estimated orientations of the object.

Table 5.1: The actual and estimated object pose results from the first set of experiments with a stationary robot (positions in mm, orientations in degrees).

Test | Xactual | Xestimated | error | % error | Yactual | Yestimated | error | % error | θactual | θestimated | error | % error
1 | 248 | 286.36 | 38.36 | 15.47 | 2095 | 1985.69 | 109.31 | 5.22 | 16 | 14.17 | 1.83 | 11.44
2 | -219 | -241.24 | 22.24 | 10.16 | 1184 | 1150.34 | 33.66 | 2.84 | -12 | -13.08 | 1.08 | 9
3 | 943 | 929.25 | 13.75 | 1.46 | 1460 | 1324.37 | 135.63 | 9.29 | 38 | 36.96 | 1.04 | 2.74
4 | 1471 | 1387.21 | 83.79 | 5.7 | 463 | 350.17 | 112.83 | 24.37 | 70 | 68.66 | 1.34 | 1.91
5 | -1364 | -1370.01 | 6.01 | 0.44 | 538 | 536.67 | 1.33 | 0.25 | -66 | -66.84 | 0.84 | 1.27
6 | -1368 | -1388.36 | 20.36 | 1.49 | 1354 | 1317.24 | 36.76 | 2.71 | -32 | -33.97 | 1.97 | 6.16
7 | -710 | -727.02 | 17.02 | 2.4 | 2298 | 2274.81 | 23.19 | 1.01 | -7 | -9.31 | 2.31 | 33

Figure 5.8: The x-axis error from Table 5.1.
Figure 5.9: The y-axis error from Table 5.1.

Figure 5.10: The orientation error from Table 5.1.

In the second set of experiments, the robot global pose is changed to [40, 105, 60]. The dimensions of the object in this experiment are 525 mm × 257 mm × 446 mm. Table 5.2 presents the results from the second experiment, and Figures 5.11, 5.12, and 5.13 show the error results corresponding to the information given in this table; the same parameters as in the previous experiment are used. In both sets of experiments, with a few exceptions, the position error remains well within the acceptable range, which is 100 mm in the considered application.

Table 5.2: The actual and estimated object pose results from the second set of experiments with a stationary robot (positions in mm, orientations in degrees).

Test | Xactual | Xestimated | error | % error | Yactual | Yestimated | error | % error | θactual | θestimated | error | % error
1 | 248 | 266.56 | 18.56 | 7.48 | 2095 | 1964.69 | 130.31 | 6.22 | 16 | 13.06 | 2.94 | 18.38
2 | -219 | -169.26 | 49.74 | 22.71 | 1184 | 1145.34 | 38.66 | 3.27 | -12 | -13.54 | 1.54 | 12.83
3 | 943 | 931.65 | 11.35 | 1.2 | 1460 | 1303.52 | 156.48 | 10.72 | 38 | 36.56 | 1.44 | 3.79
4 | 1471 | 1364.56 | 106.44 | 7.24 | 463 | 327.65 | 135.35 | 29.23 | 70 | 67.66 | 2.34 | 3.34
5 | -1364 | -1293.14 | 70.86 | 5.2 | 538 | 534.66 | 3.34 | 0.62 | -66 | -67.67 | 1.67 | 2.53
6 | -1368 | -1339.67 | 28.33 | 2.07 | 1354 | 1299.93 | 54.07 | 3.99 | -32 | -34.86 | 2.86 | 8.94
7 | -710 | -692.93 | 17.07 | 2.4 | 2298 | 2258.81 | 39.19 | 1.71 | -7 | -10.21 | 3.21 | 45.86

Figure 5.11: The x-axis error from Table 5.2.

Figure 5.12: The y-axis error from Table 5.2.

Figure 5.13: The orientation error from Table 5.2.

5.6.2 Experiments with a Moving Robot

Two sets of experiments are conducted in this category as well. Here, the robots explore the environment in search of an object to be transported. Once a robot finds the color-coded object, it rearranges its pose to center the color blob within the camera frame; the laser range finder is then activated to estimate the pose and the center of the object. In the first set of experiments, an object of dimensions 430 mm × 537 mm × 594 mm is used, and seven different experiments are carried out. Table 5.3 presents the results from the experiments. Figures 5.14, 5.15, and 5.16 correspond to this table, showing the difference between the actual and estimated x-coordinate, y-coordinate, and orientation, respectively, of the center of the object.

Table 5.3: The actual and estimated object pose results from the first set of experiments with a moving robot (positions in mm, orientations in degrees).
Test | Xactual | Xestimated | error | % error | Yactual | Yestimated | error | % error | θactual | θestimated | error | % error
1 | 1318 | 1384.52 | 66.52 | 5.05 | 3455 | 3514.78 | 59.78 | 1.73 | 30 | 28.57 | 1.43 | 4.77
2 | 2467 | 2543.12 | 76.12 | 3.09 | 4047 | 4149.39 | 102.39 | 2.53 | 15 | 12.95 | 2.05 | 13.67
3 | 1205 | 1237.39 | 32.39 | 2.69 | 6147 | 6219.28 | 72.28 | 1.18 | 45 | 44.21 | 0.79 | 1.76
4 | 2745 | 2790.32 | 45.32 | 1.65 | 6123 | 6183.43 | 60.43 | 0.99 | -15 | -16.43 | 1.43 | 9.53
5 | 1465 | 1487.45 | 22.45 | 1.53 | 7442 | 7543.27 | 101.27 | 1.36 | -60 | -61.25 | 1.25 | 2.08
6 | 2063 | 2092.14 | 29.14 | 1.41 | 8522 | 8632.02 | 110.02 | 1.29 | -30 | -31.37 | 1.37 | 4.57
7 | 2842 | 2914.42 | 72.42 | 2.55 | 9712 | 9828.21 | 116.21 | 1.2 | -75 | -76.65 | 1.65 | 2.2

Figure 5.14: The x-axis error from Table 5.3.

Figure 5.15: The y-axis error from Table 5.3.

Figure 5.16: The orientation error from Table 5.3.

In the second set of experiments, an object of dimensions 590 mm × 726 mm × 1715 mm is used. Table 5.4 presents the results from these experiments. Figures 5.17, 5.18, and 5.19 correspond to this table, showing the difference between the actual and estimated x-coordinate, y-coordinate, and orientation, respectively, of the center of the object. Here too the error readings are well within the acceptable range.

Table 5.4: The actual and estimated object pose results from the second set of experiments with a moving robot (positions in mm, orientations in degrees).
Test | Xactual | Xestimated | error | % error | Yactual | Yestimated | error | % error | θactual | θestimated | error | % error
1 | 1321 | 1356.43 | 35.43 | 2.68 | 3486 | 3509.52 | 23.52 | 0.67 | 30 | 28.27 | 1.73 | 5.77
2 | 2488 | 2502.32 | 14.32 | 0.58 | 4023 | 4112.23 | 89.23 | 2.22 | 15 | 13.95 | 1.05 | 7
3 | 1223 | 1235.43 | 12.43 | 1.02 | 6123 | 6179.72 | 56.72 | 0.93 | 45 | 43.77 | 1.23 | 2.73
4 | 2724 | 2787.21 | 63.21 | 2.32 | 6147 | 6184.17 | 37.17 | 0.6 | -15 | -16.43 | 1.43 | 9.53
5 | 1437 | 1462.76 | 25.76 | 1.79 | 7412 | 7513.23 | 101.23 | 1.37 | -60 | -62.24 | 2.24 | 3.73
6 | 2040 | 2072.34 | 32.34 | 1.59 | 8541 | 8612.98 | 71.98 | 0.84 | -30 | -32.06 | 2.06 | 6.87
7 | 2825 | 2903.59 | 78.59 | 2.78 | 9703 | 9812.73 | 109.73 | 1.13 | -75 | -77.06 | 2.06 | 2.75

Figure 5.17: The x-axis error from Table 5.4.

Figure 5.18: The y-axis error from Table 5.4.

Figure 5.19: The orientation error from Table 5.4.

Comparing the % error of the results obtained when the robots were stationary with those obtained when the robots were moving shows that the latter results are better. When a robot autonomously explores the surroundings and finds an object, it stops at a suitable distance from the object, rearranges its pose to center the color blob within the camera frame, and then estimates the pose of the object. In the case of a stationary robot, on the other hand, the object is placed in front of the robot by hand, and the associated distance may not be appropriate. Consequently, human error may have contributed to the larger % error in the corresponding pose estimates.
5.7 Summary

In this chapter a method for object pose estimation was developed for application in cooperative object transportation by mobile robots. A CCD camera, optical encoders, and a laser range finder were the sensors utilized by the robots. To transport an object, first the global localization of the robot was determined. Next, the CCD camera was used to find the object in the work environment. Once the object was identified using a color blob tracking approach, the robot rotated its base to move the color blob into the center of the camera frame. Finally, a laser range finder was used to scan the object and to determine the distance and the orientation angle. The developed approach was carefully tested through a series of physical experiments in the laboratory.

Chapter 6
Conclusions

There has been substantial growth in the field of multi-robot cooperation. However, designing a multi-robot system that can autonomously perform an assigned task in a real-life application remains a significant challenge, and a multi-robot system must undergo many improvements before it can be used in a real-time environment that is unfamiliar and inhospitable. In this thesis, a control framework based on an artificial immune system was developed that supports cooperative multi-robot task execution in an intricate and unstructured dynamic environment with unknown terrain. The primary design goal, as set out in Chapter 1, has been to develop a multi-robot framework that makes the robotic team distributed, flexible, robust, and fault-tolerant. Having developed the control framework, it is appropriate here to review the design requirements outlined in Chapter 1, in order to examine the extent to which the developed methodology meets these criteria.
6.1 Meeting Design Requirements

6.1.1 Flexibility

The term flexibility refers to the ability of the robots in a team to modify their actions appropriately in response to changes in the environment or in any entity of the system. The feasibility of the developed framework was first studied through simulation and then implemented on a physical team of heterogeneous robots performing object transportation tasks. The robots worked in an environment that was unknown and had a random distribution of dynamic and static obstacles. A robot, individually and as part of the team, adapted to environmental changes such as the failure of a team member or the addition of a new member to the team. The framework enhanced the flexibility of the robot team by providing mechanisms for the robots to work with any other robots that use the developed framework; the robots do not need to know the capabilities of the other robots in advance. The approaches developed in this thesis are rather general even though they were tested for a specific task. Though the developed methodology was validated with the proof-of-concept object transportation experiments, it can accommodate different tasks with ease (e.g., hazardous waste cleanup, human rescue, and so on).

6.1.2 Robustness and Fault Tolerance

Robustness refers to the ability of a robotic team to degrade its performance gracefully and maximize its efficiency in the presence of a malfunctioning team member. In the developed framework, no individual robot was responsible for the control of the other robots. Unlike in hierarchical architectures, the failure of an individual robot was not catastrophically damaging. Different types of failure were introduced in the robots, at different stages of the experiments, to verify the robustness of the approach. It is clear from the results presented in Chapters 3 and 4 that the robots in the team responded to the failure of a teammate and re-allocated the task for efficient completion of the mission.
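One simple way to realize the failure handling described above is for each robot to timestamp the periodic status messages of its peers and to return any task held by a silent peer to the open-task pool for re-allocation. The sketch below is a hypothetical illustration of that idea, not the thesis' exact protocol; all names and the timeout policy are assumptions:

```java
import java.util.*;

// Hypothetical teammate-failure monitor: a task whose owner's heartbeat has
// gone stale is released for re-allocation by the remaining robots.
public class FailureMonitor {
    private final Map<String, Long> lastHeartbeat = new HashMap<>();
    private final Map<String, String> taskOwner = new HashMap<>(); // task -> robot
    private final long timeoutMs;

    public FailureMonitor(long timeoutMs) { this.timeoutMs = timeoutMs; }

    public void heartbeat(String robot, long nowMs) { lastHeartbeat.put(robot, nowMs); }

    public void assign(String task, String robot) { taskOwner.put(task, robot); }

    // Returns the tasks released because their owner appears to have failed.
    public List<String> sweep(long nowMs) {
        List<String> released = new ArrayList<>();
        Iterator<Map.Entry<String, String>> it = taskOwner.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, String> e = it.next();
            Long seen = lastHeartbeat.get(e.getValue());
            if (seen == null || nowMs - seen > timeoutMs) {
                released.add(e.getKey()); // hand back to the open-task pool
                it.remove();
            }
        }
        return released;
    }

    public static void main(String[] args) {
        FailureMonitor fm = new FailureMonitor(1000);
        fm.heartbeat("robotA", 0);
        fm.assign("transport-object-1", "robotA");
        System.out.println(fm.sweep(500));  // [] : robotA is still alive
        System.out.println(fm.sweep(2000)); // [transport-object-1] : heartbeat stale
    }
}
```

In the AIS framework the re-allocation itself would be driven by the stimulation dynamics rather than an explicit pool, but the detection step is essentially of this form.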
6.1.3 Local Sensing Capabilities of the Robots

Local sensing refers to the local view of a robot. The developed framework is completely distributed. There was no centralized knowledge or leader that monitored the progress or the state of the environment. The robots in a team did not have full knowledge of the task and the environment. The robots could learn about them only once the task or any other entity in the environment came within a limited detection radius. This design criterion was helpful in making the framework flexible and fault-tolerant, which is evident from the results presented in Chapters 3 and 4.

6.2 Primary Contribution

This thesis has made contributions with respect to autonomous cooperation, fault tolerance and robustness, coherence, and real-time operation of a cooperating team of robots in an inhospitable environment. The foremost contribution is the development of a control framework: a novel, autonomous, fault-tolerant cooperative architecture for heterogeneous mobile robot teams as applied to independent tasks. The AIS-based control framework as developed in the present thesis has the following characteristics:

•	Autonomously determines the number of robots required for a task based on its properties
•	Fully distributed at both the individual robot level and the team level
•	Applicable to robot teams having any degree of heterogeneity
•	Allows for recovery from failure in the individual robots
•	Allows new robots to be added to the team at any time
•	Requires no two-way conversation; the robots communicate only when required
•	Scales easily to larger assignments

The communication and coordination strategies among the robots were based on Jerne's idiotypic network theory and the modified Farmer's computational model as developed in the present thesis. Methodologies for dynamic task allocation and assignment based on robot capabilities and an artificial immune system were also developed in the thesis.
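In Farmer's computational model of Jerne's idiotypic network, the concentration of each antibody (here, a candidate robot behavior) changes according to stimulation by the antibodies and antigens (tasks) it recognizes, minus suppression by antibodies that recognize it and a natural death term. The following is a simplified discrete-time sketch of that dynamic; the coefficient values, the forward-Euler step, and the clipping at zero are illustrative assumptions, not the exact terms of the modified model developed in the thesis:

```java
// A discrete-time sketch of a Farmer-style idiotypic network update.
// m[i][j]: affinity with which antibody i recognizes antibody j;
// g[i]: affinity between antibody i and the current antigen (task);
// c, k1, k2: stimulation, suppression, and death-rate coefficients.
public class IdiotypicNetwork {

    // One forward-Euler step of the concentration dynamics, clipped at zero.
    static double[] step(double[] a, double[][] m, double[] g,
                         double c, double k1, double k2, double dt) {
        int n = a.length;
        double[] next = new double[n];
        for (int i = 0; i < n; i++) {
            double stim = 0.0, supp = 0.0;
            for (int j = 0; j < n; j++) {
                stim += m[i][j] * a[j]; // stimulation by antibodies i recognizes
                supp += m[j][i] * a[j]; // suppression by antibodies recognizing i
            }
            double rate = c * (stim - k1 * supp + g[i]) * a[i] - k2 * a[i];
            next[i] = Math.max(0.0, a[i] + dt * rate);
        }
        return next;
    }

    public static void main(String[] args) {
        double[] a = {0.5, 0.5};                 // initial concentrations
        double[][] m = {{0.0, 0.8}, {0.2, 0.0}}; // mutual affinities
        double[] g = {1.0, 0.1};                 // antibody 0 suits the task better
        for (int t = 0; t < 100; t++) {
            a = step(a, m, g, 1.0, 0.5, 0.1, 0.01);
        }
        System.out.println(a[0] > a[1]); // the better-matched behavior dominates: true
    }
}
```

Selecting the behavior (or robot) with the highest resulting concentration yields the self-deterministic cooperation described above: the network, not a central arbiter, decides which robot responds to a task.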
The control framework has been implemented on both simulated and physical robot teams performing an object transportation task. The results achieved from these demonstrations have validated the framework and allowed the study of a number of important issues in cooperative control.

The next main contribution of the thesis is the object pose estimation technique developed in Chapter 5. In the developed methodology, the pose of an object was estimated for application in cooperative object transportation by mobile robots. A CCD camera, optical encoders, and laser range finders were the sensors utilized by the robot. Once the object was detected by a robot and identified using a color blob tracking approach, the robot rotated its base to move the color blob into the centre of the camera frame. Finally, a laser range finder was used to determine the Cartesian coordinates of the object. Using this information, the robot determined a suitable point of contact with the object for transporting it to a goal location. Compared to the other examined approaches, the approach developed in the thesis was computationally inexpensive, fast, and less susceptible to changes in lighting conditions.

The framework for multi-robot cooperation as developed in this thesis was evaluated by comparing it with the well-established market-based auction approach. The results presented in Chapter 4 showed the benefits and drawbacks of the developed approach when compared to the market-based approach.

Finally, a physical multi-robot transportation project was developed in the Industrial Automation Laboratory at the University of British Columbia. Naturally, the physical system faced more challenges than the computer simulation system; in particular, sensor noise, wheel slip, mechanical and/or electronic failure, motion constraints, and so on.
The experimental results presented in Chapter 3 showed that the developed physical system was able to operate effectively and robustly in a dynamic physical environment with randomly distributed obstacles.

6.3 Limitations and Suggested Future Research

Although the developed multi-robot cooperation system has demonstrated good performance in both computer simulation and physical experiments, there are some areas that need improvement. The major limitation of the developed framework is the restriction of the cooperative teams to missions involving loosely coupled tasks with no ordering dependencies. Cooperative robotic applications that involve planning, such as cooperative assembly or construction work, are difficult for the developed control framework to handle. Thus, an interesting area for future work will be to incorporate a planning unit in the framework to allow for tightly coupled tasks with tight constraints and ordering dependencies.

To improve the efficiency of a cooperative robot team, a strategy needs to be devised that allows an individual robot to learn about the quality of performance of its team members in certain tasks and then to use this learned knowledge to determine appropriate actions.

Finally, to make the developed framework adaptive, an individual robot in a team must have memory to remember encounters with the tasks it has completed earlier. The next time the same or a similar task is encountered, the robot should recall its memory in order to deal with the task more effectively. The learning, memory, and adaptive processing capabilities of the clonal selection theory of the immune system may be utilized to make the system adaptive.

Bibliography

Arai, T., Pagello, E., and Parker, L.E., "Editorial: Advances in Multi-Robot Systems," IEEE Trans. on Robotics and Automation, Vol. 18, No. 5, pp. 655-661, 2002.
Asama, H., Ozaki, K., Matsumoto, A., Ishida, Y., and Endo, I., "Development of task assignment system using communication for multiple autonomous robots," J. Robot. Mechatron., Vol. 4, No. 2, pp. 122-127, 1992.

Ballet, P., Tisseau, J., and Harrouet, F., "A multi-agent system to model a human humoral response," Proc. IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL, pp. 357-362, Oct 1997.

Bersini, H., and Calenbuhr, V., "Frustrated chaos in biological networks," Journal of Theoretical Biology, Vol. 188, No. 2, pp. 187-200, 1996.

Cao, Y.U., Fukunaga, A.S., and Kahng, A.B., "Cooperative Mobile Robotics: Antecedents and Directions," Autonomous Robots, Vol. 4, No. 1, pp. 7-27, 1997.

Castro, L., and Zuben, F., "The clonal selection algorithm with engineering applications," Proc. of GECCO'00, Workshop on Artificial Immune Systems and Their Applications, Las Vegas, USA, pp. 36-37, July 2000.

Castro, L.N., and Von Zuben, F.J., "Artificial Immune Systems: Part I, Basic Theory and Applications," Technical Report RT DCA 01/99, FEEC/UNICAMP, Brazil, p. 95, 1999.

Chaimowicz, L., Campos, M.F.M., and Kumar, V., "Dynamic role assignment for cooperative robots," Proc. IEEE International Conference on Robotics and Automation, Washington, DC, pp. 293-298, May 2002.

Christensen, A.L., O'Grady, R., and Dorigo, M., "From fireflies to fault-tolerant swarms of robots," IEEE Trans. Evolutionary Computation, Vol. 13, No. 4, pp. 754-766, 2009.

Dasgupta, D., and Attoh-Okine, N., "Immunity-based systems: A survey," Proc. IEEE International Conference on Systems, Man and Cybernetics, Orlando, FL, pp. 369-374, Oct 1997.

Dasgupta, D., and Forrest, S., "Tool breakage detection in milling operation using a negative-selection algorithm," Technical Report CS95-5, Department of Computer Science, University of New Mexico, Albuquerque, NM, 1995.

Dasgupta, D., Artificial Immune Systems and Their Applications, Springer-Verlag, Berlin, Germany, 1999.
de Castro, L.N., and Timmis, J.I., "Artificial immune systems as a novel soft computing paradigm," Soft Computing, Vol. 7, No. 8, pp. 526-544, 2003.

De Monvel, J.H.B., and Martin, O.C., "Memory capacity in large idiotypic networks," Bulletin of Mathematical Biology, Vol. 57, No. 1, pp. 109-136, 1995.

De Silva, C.W., Mechatronics—An Integrated Approach, Taylor & Francis/CRC Press, Boca Raton, FL, 2005.

Deaton, R., Garzon, M., Rose, J.A., Murphy, R.C., Stevens Jr., S.E., and Franceschetti, D.R., "DNA based artificial immune system for self-nonself discrimination," Proc. IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL, pp. 862-866, Oct 1997.

DeBoer, R.J., Hogeweg, P., and Perelson, A.S., "Growth and recruitment in the immune network," Editors: Perelson, A.S., and Weisbuch, G., Theoretical and Experimental Insight into Immunology, pp. 223-247, Springer-Verlag, Berlin, Germany, 1992a.

DeBoer, R.J., Segel, L.A., and Perelson, A.S., "Pattern formation in one and two dimensional shape space models of the immune system," Journal of Theoretical Biology, Vol. 155, pp. 295-333, 1992b.

Dias, M.B., and Stentz, A., "A Free Market Architecture for Distributed Control of a Multirobot System," Proc. 6th Intl. Conf. on Intelligent Autonomous Systems (IAS), Venice, Italy, pp. 115-122, July 2000.

Dias, M.B., and Stentz, A., "A Market Approach to Multirobot Coordination," Technical Report CMU-RI-TR-01-26, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, 2001.

Dias, M.B., Zinck, M., Zlot, R., and Stentz, A., "Robust multirobot coordination in dynamic environments," Proc. IEEE International Conference on Robotics and Automation, New Orleans, LA, pp. 3435-3442, May 2004.

Dias, M.B., Zlot, R., Kalra, N., and Stentz, A., "Market-based multi-robot coordination: A survey and analysis," Proc. IEEE, Vol. 94, No. 7, pp. 1257-1270, July 2006.

Dudek, G., Jenkin, M., and Milios, E., "A taxonomy of multirobot systems," (Chapter 1), Editors: Balch, T., and Parker, L.E., Robot Teams: From Diversity to Polymorphism, AK Peters, Ltd., Natick, MA, 2002.
Dudek, G., Jemkin, M., and Milios, E., “A taxonomy of multirobot systems,” (Chapter 1), Editor: Balch, T., and Parker, L.E., Robot Teams: From Diversity to Polymorphism, AK Peters, Ltd, Natick, MA, 2002.  155  Ekvall, S., Kragic, D., And Hoffmann, F., “Object recognition and Pose Estimation using Color Co-occurrence Histograms and Geometric Modeling,” Image and Vision Computing, Vol. 23 No. 11, pp. 943-955, 2005. Farinelli, A., Iocchi, L. and Nardi, D., “Multirobot systems: a classification focused on coordination”, IEEE Transactions on Systems, Man and Cybernetics Part B, Vol. 34, No 5, pp. 2015 – 2028, 2004. Farmer, J.D., Packard, N.H., Perelson, A. S., “The immune system, adaptation, and machine learning,” Physica D, Vol. 22, No. 1-3, pp. 187-204, 1986 Forrest, S., Perelson, A.S., Allen, L., and Cherukuri, R., “Self-Nonself discrimination in a computer,” Proc. of IEEE symposium on research in security and privacy, Oakland, CA, pp. 202-212, May 1994. Gao, S., and Xiong, M.D., “A New Approach to Improve the Performance of Phase Laser Range Finder,” Journal of Physics: Conference Series, Vol. 48, pp. 838-842, 2006. Gao, Y., and Luo, Z., “Dynamic task allocation method based on immune system for cooperative robots, ”Proc. of 7th world congress of Intelligent Control and Automation, Chongqing, China, pp. 1015 – 1020, Jun 2008. Gao, Y., and Wei, W., “A new multi-robot self-determination cooperation method based on immune agent network,” Proc. IEEE conf on Robotics and Automation, Barcelona, Spain, pp. 390 – 395, Apr 2005. Gerkey, B.P., and Mataric, M.J., “Pusher-watcher: an approach to fault-tolerant tightlycoupled robot coordination, ”Proc. IEEE Conference on Robotic and Automation, Washington, DC, pp. 464-469, May 2002a. Gerkey, B.P., and Mataric, M.J., “Sold!: Auction methods for multirobot coordination,” IEEE Trans. on Robotic and Automation, Vol. 18, No. 5, pp. 758-768, October 2002b. 
Huntsberger, T., Pirjanian, P., Trebi-Ollennu, A., Das Nayer, H., Ganino, A.J., Garrett, M., Joshi, S.S., and Schenker, P.S., "CAMPOUT: a control architecture for tightly coupled coordination of multirobot systems for planetary surface exploration," IEEE Trans. on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 33, No. 5, pp. 550-559, Sep 2003.

Ichikawa, S., Kuboshiki, S., Ishiguro, A., and Uchikawa, Y., "A method of gait coordination of hexapod robots using immune network," Artificial Life and Robotics, Vol. 2, No. 1, pp. 19-23, 2006.

Iocchi, L., Nardi, D., Piaggio, M., and Sgorbissa, A., "Distributed coordination in heterogeneous multi-robot systems," J. Autonomous Robots, Vol. 15, No. 2, pp. 155-168, Sep 2003.

Ishida, Y., Immunity-Based Systems: A Design Perspective, Springer-Verlag, Berlin, Germany, 2004.

Ishida, Y., "An immune network model and its applications to process diagnosis," Systems and Computers in Japan, Vol. 24, pp. 38-45, 1993.

Ishiguro, A., Kondo, T., Watanabe, Y., Shirai, Y., and Uchikawa, Y., "Emergent construction of artificial immune networks for autonomous mobile robots," Proc. IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL, pp. 1222-1228, Oct 1997.

Jerne, N.K., "Idiotypic Networks and Other Preconceived Ideas," Immunological Reviews, Vol. 79, pp. 5-24, 1984.

Jerne, N.K., "Towards a network theory of the immune system," Ann. Immunol. (Inst. Pasteur), Vol. 125C, No. 1/2, pp. 373-389, 1974.

Kalra, N., Ferguson, D., and Stentz, A., "Hoplites: A market-based framework for planned tight coordination in multirobot teams," Proc. IEEE International Conference on Robotics and Automation, Barcelona, Spain, pp. 1170-1177, May 2005.

Kay, Y., and Lee, S., "A Robust 3-D Motion Estimation with Stereo Cameras on a Robot Manipulator," Proc. IEEE Conference on Robotics and Automation, Sacramento, CA, pp. 1102-1107, Apr 1991.

Khan, M.T.
and de Silva, C.W., "Autonomous fault tolerant multi-robot cooperation using artificial immune system," Proc. IEEE International Conference on Automation and Logistics (ICAL 2008), Qingdao, China, pp. 623-628, Sep 2008.

Khan, M.T., and de Silva, C.W., "Autonomous Fault Tolerant Multi-Robot Coordination for Object Transportation Based on Artificial Immune System," Proc. 2nd International Conference on Robot Communication and Coordination, Odense, Denmark, pp. 1-6, March 2009a.

Khan, M.T., and de Silva, C.W., "Immune System-Inspired Dynamic Multi-Robot Coordination," Proc. 2009 ASME/IEEE International Conference on Mechatronics and Embedded Systems and Applications, San Diego, CA, Aug 2009b (in press).

Kose, H., Kaplan, K.K., Mericli, C., Tatlidede, U., and Akin, L., "Market driven multiagent collaboration in robot soccer domain," (Chapter V-3), Editors: Kordic, V., Lazinica, A., and Merdan, M., Cutting Edge Robotics, Pro Literatur Verlag, Berlin, Germany, 2005.

Kube, C.R., and Bonabeau, E., "Cooperative transport by ants and robots," Robotics and Autonomous Systems, Vol. 30, No. 1, pp. 85-101, 2000.

Lang, H., Wang, Y., and de Silva, C.W., "Mobile Localization and Object Pose Estimation Using Optical Encoder, Vision and Laser Sensors," Proc. IEEE International Conference on Automation and Logistics, Qingdao, China, pp. 617-622, Sep 2008.

Lau, H.Y.K., and Wong, V.W.K., "An immunity-based distributed multiagent-control framework," IEEE Trans. Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 36, No. 1, pp. 91-108, Jan 2006.

Lee, M., "Evolution of behaviors in autonomous robot using artificial neural network and genetic algorithm," Information Sciences, Vol. 155, No. 1-2, pp. 43-60, 2003.

Li, J., Xu, H., Wang, S., and Bai, L., "An Immunology-based cooperation approach for autonomous robots," Proc. International Conference on Intelligent Systems and Knowledge Engineering, Chengdu, China, Oct 2007.

Liu, Z., Ang, M.H. Jr.,
and Khoon Guan Seah, W., "Multi-robot concurrent learning of fuzzy rules for cooperation," Proc. IEEE International Symposium on Computational Intelligence in Robotics and Automation, Espoo, Finland, pp. 713-719, June 2005.

Luh, G., and Liu, W., "An immunological approach to mobile robot reactive navigation," Applied Soft Computing, Vol. 8, No. 1, pp. 30-45, 2008.

Mataric, M.J., Nilsson, M., and Simsarian, K.T., "Cooperative multi-robot box-pushing," Proc. IEEE/RSJ Int. Conf. on Human Robot Interaction and Cooperative Robots, Pittsburgh, PA, pp. 556-561, Aug 1995.

Matsumoto, A., Asama, H., Ishida, Y., Ozaki, K., and Endo, I., "Communication in the autonomous and decentralized robot system ACTRESS," Proc. IEEE Workshop on Intelligent Robots and Systems (IROS), Ibaraki, Japan, pp. 835-840, Jul 1990.

Mitsumoto, N., Fukuda, T., Arai, F., and Ishihara, H., "Control of the Distributed Autonomous Robotic System based on the Biologically Inspired Immunological Architecture," Proc. 1997 IEEE International Conference on Robotics and Automation, Albuquerque, NM, pp. 3551-3556, Apr 1997.

Miyata, N., Ota, J., Arai, T., and Asama, H., "Cooperative transport by multiple mobile robots in unknown static environments associated with real-time task assignment," IEEE Trans. Robot. Autom., Vol. 18, No. 5, pp. 769-780, Oct 2002.

Mobile Robots Inc., [online]. Available: http://robots.mobilerobots.com/wiki/ACTS

Murphy, R., Blitch, J.G., and Casper, J.L., "Robocup/AAAI urban search and rescue events: Reality and competition," AI Magazine, Vol. 1, No. 23, pp. 37-42, 2002.

Musilek, P., Lau, A., Reformat, M., and Wyard-Scott, L., "Immune programming," Information Sciences, Vol. 176, No. 8, pp. 972-1002, 2006.

Park, S., Kim, K., Park, S., and Park, M., "Object Entity-based Global Localization in Indoor Environment with Stereo Camera," SICE-ICASE International Joint Conference, Bexco, Busan, Korea, pp. 2681-2686, Oct 2006.

Parker, L.
E., "ALLIANCE: An Architecture for Fault Tolerant, Cooperative Control of Heterogeneous Mobile Robots," Proc. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Munich, Germany, pp. 776-783, Sep 1994a.

Parker, L.E., "L-ALLIANCE: Task-Oriented Multi-Robot Learning in Behavior Based Systems," Advanced Robotics, Vol. 11, No. 4, pp. 305-322, 1996.

Parker, L.E., "Lifelong adaptation in heterogeneous multi-robot teams: Response to continual variations in individual robot performance," Autonomous Robots, Vol. 8, No. 3, pp. 239-267, 2000a.

Parker, L.E., "Adaptive heterogeneous multi-robot teams," Neurocomputing, special issue of NEURAP '98: Neural Networks and Their Application, pp. 75-92, 1998a.

Parker, L.E., "ALLIANCE: An Architecture for Fault Tolerant Multirobot Cooperation," IEEE Trans. Robotics and Automation, Vol. 14, No. 2, pp. 220-240, April 1998b.

Parker, L.E., "Current state of the art in distributed autonomous robotics," Distributed Autonomous Robotic Systems, pp. 3-12, Springer-Verlag, Tokyo, Japan, 2000b.

Parker, L.E., Heterogeneous Multi-Robot Cooperation, Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, 1994b.

Sathyanath, S., and Sahin, F., "AISIMAM - An AIS based intelligent multiagent model and its application to a mine detection problem," Proc. 1st ICARIS, Canterbury, U.K., pp. 22-31, Sep 2002.

Sathyanath, S., and Sahin, F., "Application of artificial immune system based intelligent multi agent model to a mine detection problem," Proc. IEEE Int. Conf. Systems, Man, Cybernetics, Hammamet, Tunisia, Oct 2002.

Siegwart, R., and Nourbakhsh, I., Introduction to Autonomous Mobile Robots, The MIT Press, Cambridge, MA, 2004.

Simon, D.A., Hebert, M., and Kanade, T., "Real-time 3-D Pose Estimation Using a High-Speed Range Sensor," Proc. International Conference on Robotics and Automation, San Diego, CA, pp. 2235-2241, May 1994.
Siriwardana, P.G.D., Khan, M.T., and de Silva, C.W., "Object Pose Estimation for Multi Robot Cooperative Object Transportation," Proc. 2009 ASME/IEEE International Conference on Mechatronics and Embedded Systems and Applications, San Diego, CA, Aug 2009 (in press).

Spaan, M.T.J., and Groen, F.C.C., "Team coordination among robotic soccer players," Lecture Notes in Computer Science, Vol. 2752, pp. 409-416, Springer Berlin/Heidelberg, Germany, 2003.

Spong, M.W., Hutchinson, S., and Vidyasagar, M., Robot Modeling and Control, John Wiley & Sons, Inc., New York, NY, 2006.

Stone, P., and Veloso, M., "Multiagent systems: a survey from machine learning perspective," Autonomous Robots, Vol. 8, No. 3, pp. 345-383, 2000.

Stroupe, A., Huntsberger, T., Okon, A., Aghazarian, H., and Robinson, M., "Behavior-based multi-robot collaboration for autonomous construction tasks," Proc. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, pp. 1989-1994, Aug 2005.

Stroupe, A., Okon, A., Robinson, M., Huntsberger, T., Aghazarian, H., and Baumgartner, E., "Sustainable cooperative robotic technologies for human and robotic outpost infrastructure construction and maintenance," Autonomous Robots, Vol. 20, No. 2, pp. 113-123, 2006.

Tomono, M., "Environment Modeling by a Mobile Robot with Laser Range Finder and a Monocular Camera," Proc. 2005 IEEE Workshop on Advanced Robotics and its Social Impacts, Nagoya, Japan, pp. 133-138, June 2005.

Vail, D., and Veloso, M., "Dynamic Multi-robot coordination," Multi-Robot Systems: From Swarm to Intelligent Automata, Vol. II, pp. 87-98, 2003.

Vargas, P.A., de Castro, L.N., and Michelan, R., "An immune learning classifier network for autonomous navigation," Lecture Notes in Computer Science, Vol. 2787, pp. 69-80, Springer Berlin/Heidelberg, Germany, 2003.

Wang, Y., Cooperative and Intelligent Control of Multi-robot Systems Using Machine Learning, Ph.D.
Thesis, Department of Mechanical Engineering, The University of British Columbia, Vancouver, BC, Canada, 2007.

Wang, Y., and de Silva, C.W., "An object transportation system with multiple robots and machine learning," Proc. American Control Conference, Portland, OR, Vol. 2, pp. 1371-1376, June 2005.

Wang, Y., and de Silva, C.W., "Sequential Q-learning with Kalman filtering for multirobot cooperative transportation," IEEE/ASME Trans. on Mechatronics (in press).

Wang, Y., You, Z., and Chen, C., "AIN-based action selection mechanism for soccer robot systems," Journal of Control Science and Engineering, Vol. 2009, Article ID 896310, 10 pp., 2009.

Wang, Z., Nakano, E., and Takahashi, T., "Solving function distribution and behavior design problem for cooperative object handling by multiple mobile robots," IEEE Trans. on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 33, No. 5, pp. 537-549, Sep 2003.

Werger, B.B., and Mataric, M.J., "Broadcast of local eligibility for multi-target observation," Proc. 5th International Symposium on DARS, Knoxville, TN, pp. 347-356, Oct 2000.

Whitbrook, A.M., An Idiotypic Immune Network for Mobile Robot Control, M.Sc. Thesis, University of Nottingham, England, 2005.

Whitbrook, A.M., Aickelin, U., and Garibaldi, J., "Genetic algorithm seeding of idiotypic networks for mobile-robot navigation," Proc. 5th International Conference on Informatics in Control, Automation and Robotics, Madeira, Portugal, pp. 5-13, May 2008.

Whitbrook, A.M., Aickelin, U., and Garibaldi, J., "Idiotypic immune networks in mobile-robot control," IEEE Trans. Systems, Man, Cybernetics, Part B, Vol. 37, No. 37, pp. 1581-1598, 2007.

Yen-Nien, W., Tsai-Sheng, L., and Teng-Fa, T., "Plan on obstacle-avoiding path for mobile robots based on artificial immune algorithm," Advances in Neural Networks, pp. 694-703, Springer Berlin/Heidelberg, New York, NY, 2007.

Zhao, K., and Wang, J., "Multi-robot cooperation and competition with genetic programming," Proc.
of the European Conference on Genetic Programming, Scotland, UK, pp. 349-360, April 2000.

Zlot, R., and Stentz, A., "Market-based multirobot coordination for complex tasks," The International Journal of Robotics Research, Vol. 25, No. 1, pp. 73-101, 2006.

Appendix

Experimental Test Bed

To demonstrate the feasibility of the approach developed in the present thesis, two experimental environments are utilized: a cooperative robot simulation and a team of physical mobile robots.

Simulation provides predictive capability with a high degree of flexibility. The simulation platform is used to study and debug the developed framework and to test alternative strategies in its design. The use of simulation also gives the freedom to develop different cooperative multi-robot scenarios with ease, by allowing the construction of a wide variety of robots with different capabilities that would not be available in the laboratory. The speed of simulation is quite helpful for statistical data collection in many types of experiments.

These flexibilities are not possible with physical experiments. Debugging the approach when physical robots are used can be very difficult, due to the time required to download and re-download the code into the robots, to set up the experimental environment, and to recharge the batteries. Collection of meaningful statistical data for analysis is also very time-consuming, as it requires running the physical experiments a large number of times.

However, a simulation platform cannot substitute for real-time experiments with physical robots, as a simulation platform is not subjected to the real-time dynamic, unpredictable, and inhospitable environments that physical robots have to face. This is the reason that an approach that works in simulation may not work when tested in physical experiments. It is therefore important to validate the approaches developed in the thesis on physical robots.
With this in mind, the cooperative task has been implemented on the laboratory version of a team of mobile robots.

A.1 A Cooperative Multi-Robot Simulation Platform

The simulation platform developed in the present work to demonstrate multi-robot cooperation is shown in Figure A.1. A task of object transportation is used as an example in the simulation environment. The environment consists of scattered objects to be transported, a predefined goal location, randomly scattered obstacles, and multiple robots. This simulation platform was developed in the Java language using the Eclipse environment. Java was chosen to reduce graphical programming time and effort by using the components in the Java Swing libraries.

Figure A.1: Simulation platform for multi-robot cooperation (showing robots, objects, obstacles, and the goal location).

A.1.1 Design Requirements

The following are the intended requirements and capabilities in designing the simulator:

•	Configurable platform
o	Easily configurable environment
o	Easy to add or remove objects, robots, and obstacles
o	Able to assign different characteristics to environment entities
•	Easy to switch between different strategies, e.g., AIS and market-based
•	Able to log data such as time steps, messages, failures, etc.
•	Simulation must terminate when all the conditions are met, e.g., when all the objects have been transported
•	All robots must have sensors, limited visibility, and communication abilities

A.1.2 Sensors and Obstacle Avoidance

Three sensors are employed for obstacle avoidance in every robot. If the left and the middle sensors detect an obstacle, the robot turns right. If the right and the middle sensors detect an obstacle, the robot turns left. If all the sensors detect an obstacle, the robot tries to turn around. Obstacle avoidance may also be required during the transportation of an object. For this, a recursive solution using a polygon method is adopted. A line is drawn from the object to the goal location.
If this line is intersected by an obstacle, a value is added to the x or y coordinate to create a branch from the original line, steering the object around the obstacle.

A.2 Physical Test Bed

A physical experimental system has been developed in the Industrial Automation Laboratory to implement multi-robot cooperation by transporting objects to a goal location. An overview of this system is presented in Figure A.2.

Figure A.2: The multi-robot object transportation system (one P3-AT robot, two P3-DX robots, an object with a color blob, and obstacles).

In the developed system, three autonomous mobile robots are employed to transport an object to a goal location. When the multi-robot system begins to operate, each robot is informed about its initial position and orientation in the global coordinate system. The robot estimates its latest position and orientation by recording and analyzing the data of the encoders mounted on the wheels and the data of the compass sensors while it moves in the environment. The objects to be transported and the various obstacles are randomly scattered in the environment. The robots have to search for the object while avoiding obstacles, and estimate the pose of the object using sensory data from sonar, a laser range finder, and a CCD camera. In essence, this is a typical local sensing system, and the robots only know a local segment of the overall environment. The object to be transported has a color blob on its vertical surface so that a robot can estimate the position and orientation of the object by identifying the color blob with its own CCD camera, using the approach developed in Chapter 5. If an object without a color blob is detected, it is regarded as an obstacle in the environment.

A.2.1 The Pool of Robots

Three mobile robots manufactured by MobileRobots Inc. are utilized in the present research.
In the specific project, two two-wheel-drive Pioneer 3-DX robots and one four-wheel-drive Pioneer 3-AT are used. The P3-DX has a 44 cm x 38 cm x 22 cm aluminum body with 16.5 cm diameter drive wheels. The wheels are supported by a rear caster, and the robot is capable of both translational and rotational motion. The mobile platforms are versatile, agile, and intelligent, and offer an embedded computer option and Ethernet-based communication. The P3-DX stores up to 252 watt-hours of hot-swappable batteries.

The P3-AT has a 50 cm x 49 cm x 26 cm aluminum body with 21.5 cm diameter drive wheels and is an outdoor robot. The skid-steer platform can rotate in place by moving both wheels, or can move the wheels on one side only to turn in a circle of 40 cm radius. The P3-AT can climb a 45% grade and can move at a speed of 0.7 m/s. At slower speeds on flat terrain, it has a payload capacity of up to 30 kg.

Built on a core client-server model, the P3-DX/AT contains an on-board microcontroller, server software, an integrated on-board PC, Ethernet-based communication, and other autonomous functions. The mobile platforms are the servers in the client-server architecture. The client can be either an on-board laptop or an off-board PC connected through a wireless router. The appearance of the P3-DX and P3-AT robots is shown in Figures A.3 and A.4.

Figure A.3: P3-DX robot (with pan-tilt-zoom camera, laser range finder, and front sonar sensors).

Figure A.4: P3-AT robot.
The P3-DX/AT robots, with the allied software, have the ability to:

•  Wander randomly
•  Be driven under manual control by keyboard or joystick
•  Plan paths with gradient navigation
•  Display maps of their sonar and/or laser readings
•  Localize using the sonar and laser range finder
•  Communicate sensor and control information, including sonar, motor encoder, motor control, user I/O, and battery charge data
•  Test activities quickly with the ARIA API from C++ programs
•  Simulate behavior off-line with the simulator that accompanies each development environment

A.2.2 Sensors

The P3-DX/AT contains sixteen sonar sensors, arranged in a front ring and a rear ring as shown in Figure A.5. Each ring comprises six sensors spaced at 20-degree intervals across the front (or rear) of the robot, plus one sensor on each side. The sonar firing pattern may be controlled through software.

Figure A.5: Sonar arrangement on the P3-DX/AT.

The laser range finder used in the present system is a SICK LMS 200 2D scanner, which covers a horizontal field of 180° with a maximum angular resolution of 0.5°. The device estimates range from the time required for the emitted light to reach the target and return. A pan-tilt-zoom color camera attaches to the robot and extends the capabilities of a variety of video and vision systems and applications. To maintain accurate dead-reckoning data, the Pioneer robots use 500-tick wheel encoders. Additional sensing options include bumpers, grippers, a compass, and a suite of other devices.

The heterogeneous robot team in the present thesis project consists of two P3-DX robots and one P3-AT robot. It is important to note that, even though our laboratory (the Industrial Automation Laboratory) has several P3-DX robots and the P3-DX and P3-AT share some similarities, significant variations in the sensitivity and accuracy of their sensors give the robots quite different capabilities.
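The dead-reckoning role of the encoders and compass mentioned above can be made concrete with a small sketch. The tick count and wheel diameter are the figures quoted for the Pioneer robots; the use of the compass for absolute heading and all names are assumptions of this sketch, not the thesis code:

```python
import math

TICKS_PER_REV = 500        # encoder resolution quoted above
WHEEL_DIAMETER = 0.165     # P3-DX drive wheel diameter, metres

def update_position(x, y, left_ticks, right_ticks, compass_heading_rad):
    """Advance the planar position estimate by one encoder sample.

    The distance travelled is taken as the average of the two wheel
    arcs; the absolute heading comes from the compass rather than
    being integrated from the encoders.
    """
    metres_per_tick = math.pi * WHEEL_DIAMETER / TICKS_PER_REV
    distance = 0.5 * (left_ticks + right_ticks) * metres_per_tick
    return (x + distance * math.cos(compass_heading_rad),
            y + distance * math.sin(compass_heading_rad))
```

For example, one full revolution of both wheels (500 ticks each) while heading along the x-axis advances the estimate by one wheel circumference, pi times the wheel diameter.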
Heterogeneity can also be introduced deliberately, by assigning different soft and hard capabilities to the robots through software, or by modifying the configuration of an individual robot through the installation of different sensors and attachments.

A.3 Summary

In this appendix, the two primary test beds used in the present thesis research were described. Both the simulation and the physical robot test beds are vital to the research. The simulation platform provides the ability to investigate the control framework while varying the robot capabilities and the environment, and to collect the large amounts of data needed to test the performance, flexibility, robustness, and fault tolerance of the control framework. The physical test bed prevents the use of unrealistic assumptions and has enabled testing of the developed methodologies in the real world to the fullest extent possible, as the real environment is unknown, unpredictable, and dynamic. It is important to note that neither test bed on its own provides all of these features.
