UBC Theses and Dissertations


A behavioral approach to open robot systems : design and programming Zhang, Ying 1989



A BEHAVIORAL APPROACH TO OPEN ROBOT SYSTEMS: DESIGN AND PROGRAMMING

By YING ZHANG
B.Sc. Zhejiang University, China, 1984
M.Sc. Zhejiang University, China, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (DEPARTMENT OF COMPUTER SCIENCE)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
October 1989
© Ying Zhang, 1989

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

Current robots have very limited abilities to adapt to the environment, to survive in unstructured situations, to deal with incomplete and inconsistent information, and to make real-time decisions. One reason for this is the limited power of computation; another is an inappropriate decomposition of the robot architecture, together with unsuitable representations and a rigid programming style. The core of this thesis is a new design and programming methodology for building more robust, flexible and intelligent robot systems, called open robot systems. We have developed a general framework for open robot systems and a programming methodology for building them.
Our approach to robot design is: decompose a robot system into a set of hierarchical behavioral modules according to the logical or physical structure; each of the modules is possibly further decomposed into a set of concurrent processes representing sensors, motors and their coordination according to task-achieving behaviors. Our approach to robot programming is: build the behavioral modules from parallel objects. Programming is then a matter of designing each individual module and composing the modules into a complex system.

We have built a software system on a real robot arm to simulate the tasks of forest harvesting, based on our approach to robot system design and programming. The system is written in Parallel C++ in a transputer-based parallel environment.

Contents

Abstract
Contents
List of Figures
Acknowledgement
1 Introduction
2 Related Work
3 A Behavioral Approach to Robot Design and Programming
3.1 Building an Open Robot System via Behavioral Modules
3.1.1 Behavioral Decomposition
3.1.2 Structural Decomposition
3.1.3 Two-dimensional Decomposition
3.2 Building Behavioral Modules via Parallel Objects
3.2.1 Characteristics of Parallel Distributed Object Oriented Languages
3.2.2 Programming on Parallel Distributed Object Oriented Languages
4 Design of an Autonomous Robot for Forest Harvesting
4.1 Behaviors in Forest Harvesting
4.2 The Robot
4.3 Robot Arm Excalibur and its Task Design
4.4 Decomposition of Transputer-based Robot Systems
4.4.1 The Structural Decomposition
4.4.2 The Interface of Each Module
4.4.3 Constructing the System from Behavioral Modules
4.5 An Alternative Configuration
5 Programming a Real-Time Multi-Sensory Robot System in Parallel C++
5.1 Brief Introduction to Parallel C++
5.2 Constructs in Parallel C++
5.3 Module Design
5.3.1 SERVO Module
5.3.2 JOINT Module
5.3.3 ARM Module
5.3.4 ROBOT Module
5.4 Current Performance and Further Improvement
6 Conclusions and Further Research
6.1 Conclusions
6.2 Contributions
6.3 Further Research
A FCP++: An Object Oriented Flat Concurrent Prolog
A.1 Brief Introduction to FCP++
A.2 FCP++ Program for Figure 3.4
A.3 FCP++ Program for Figure 3.6
B Parallel C++ for AFSM
C Functional Joint Control
C.1 Projection Technique for Joints Controlling Orientation
C.2 FJC for Joints Controlling Length
Bibliography

List of Figures

3.1 Structural Decomposition of the Robot System
3.2 System Decomposition
3.3 Traditional Approach to Programming Using Sensors and Plans
3.4 The Correct Use of Sensor and Plan
3.5 Traditional Approach to Motion Planning
3.6 Motion Planning under a Communicative Process Structure
4.1 Functions of Excalibur (copied from the Excalibur manual)
4.2 Structural Decomposition of Our Robot System
4.3 Current Configuration of Transputer and Controller
4.4 Transputer-based Robot System
4.5 Interface of ROBOT Module
4.6 Interface of ARM Module
4.7 Interface of JOINT Module
4.8 Interface of SERVO Module
4.9 Connection between Different Modules
4.10 Alternative Configuration of Transputer and Controller
4.11 Alternative Configuration of Behavioral Modules
5.1 The Header Declaration of Class frame
5.2 The Use of Class frame
5.3 Active Object
5.4 Communicating Processes Producer and Consumer
5.5 Active Objects Producer and Consumer
5.6 A Behavioral Chart
5.7 SERVO Module
5.8 JOINT Module
5.9 The Parallel C++ Code for the Two Processes of SHOULDER
5.10 SHOULDER Module
5.11 The Parallel C++ Code for Module SHOULDER
5.12 The Parallel C++ Code for Module JOINT
5.13 Position Commands
5.14 Orientation Commands
5.15 Behavioral Chart for findObject
5.16 Behavioral Chart for releaseObject
5.17 Phase 1: Robot Begins to Search for Trees
5.18 Phase 2: No Object Found Along that Direction
5.19 Phase 3: Robot Begins to Search in Another Direction
5.20 Phase 4: Robot Grasps a Tree and Puts it Down
A.1 FCP++ Program for Figure 3.4
A.2 FCP++ Program for Figure 3.6
B.1 The Header File for AFSM
B.2 Finite State Machine in C
B.3 Processes for Resetting the Machine and Setting Registers
B.4 Initialization of AFSM
B.5 Suppress Function in Parallel C++
B.6 Multiplex Function in Parallel C++
B.7 AFSM System in Parallel C++
C.1 FJC Projection Technique
C.2 FJC for Joints Controlling Length

Acknowledgement

It is my luck to have Professor Alan Mackworth as my supervisor. It was Alan who first introduced me to the areas of Robotics and Logic Programming, which have become and will remain my major research interests. His perceptive views and extremely wide knowledge have inspired my work during these years. Alan's generous financial support is also greatly acknowledged.

Most of the work in this thesis was done under the guidance of Professor Peter Lawrence. It was always pleasant to work with Peter. In fact, the most fruitful results of the thesis come from discussions with him.

Many people have contributed to this thesis. Don Acton helped me to integrate Parallel C with C++.
Real Frenette built the hardware for interfacing the arm with the transputers and the touch sensor on the gripper. Joe Poon told me how to use the transputer system of E.E. Alex Kean, Jane Mulligan, Felix Jaggi and Herbert Kreyszig read the draft of this thesis and made valuable comments.

Last but not least, I would like to express my thanks to Runping for his friendship, love and academic cooperation during these years. He was always the first reader of any part of the thesis. Most of the pages embody his contributions, from the contents to the style.

This thesis is dedicated to my parents and brother. Without their constant support and encouragement, I could never have fulfilled my goal.

Chapter 1

Introduction

A new form of computation is emerging. Propelled by advances in software design and increasing connectivity, networks of enormous complexity known as distributed computational systems are spreading throughout offices and laboratories, across countries and continents.
— B.A. Huberman, "The Ecology of Computation", The Scientist, May 16, 1988, Page 18.

The ongoing research in Open Systems explores a new form of computation [Hub88], which has inspired and will increasingly influence other areas such as AI and Robotics. We call a robot system an Open Robot System if the robot can

• work in unstructured and unpredictable environments. By unstructured environment, we mean that there is no complete description of the world in which the robot is working, since the real world is too complicated to be described completely and accurately. By unpredictable environment, we mean that there are events, such as obstacles moving around and dynamic changes in the environment, which are impossible to predict in advance.

• deal with incomplete and inconsistent information. In the real world, one can receive information from a variety of sources.
Information from each of the sources is partial or incomplete, and also changes with time. Besides, there may be inconsistency or redundancy in the information flowing from different sources.

• accomplish complex tasks and respond in real-time. The robot is able to deal with complex situations (reasoning and planning if necessary) and to react to stimuli from various sources in real-time.

We call this kind of robot an open system because it works in a non-closed world. An open robot should be able to coordinate with other open robots working in the same environment or communicate with human beings. It has incomplete knowledge about its environment but can gather various information through its multiple sensors. It has no accurate dynamic model nor fine trajectory planner, but can chase moving objects, avoid unpredictable obstacles, and achieve its goal. It should also have the capability of protecting itself from being damaged and of avoiding hurting human beings. This kind of robot is extremely useful for space exploration, undersea applications, and forest harvesting.

Most existing robot systems are not open robot systems. Current robots have very limited abilities to adapt to the environment, to survive in unstructured situations, to deal with incomplete and inconsistent information, and to make real-time decisions. One reason for this is the limited power of computation; another reason is an inappropriate decomposition of the robot architecture and an unsuitable representation with a rigid programming style.

In previous approaches to robot system design, a robot system is decomposed into a set of sub-systems according to external functions: sensors (obtaining information from the environment), a central brain (reasoning on the knowledge base), and manipulators (acting on the environment). There are two disadvantages to this kind of decomposition.
Firstly, such a decomposition is not suitable for large and complex systems: the interfaces between the sub-systems become extremely complicated as each of the sub-systems grows. Secondly, there is no tight coordination between sensors and manipulators, so the system cannot deal quickly with unstructured and unpredictable situations.

In previous approaches to robot programming, the action sequence of the robot is precisely determined by the program, and the position of the actuator has to be explicitly represented. There are two serious practical problems and limitations. Firstly, there is no explicit structure for sensor-motor coordination, and the use of sensors to handle uncertainties is ad hoc. Secondly, there is no explicit structure for multiple layers of control and decision making.

The core of this thesis is a new kind of design and programming methodology needed for open robot systems. We have developed a general framework for open robot systems and a programming methodology for building them.

Our approach to robot design is: decompose a robot system into a set of hierarchical behavioral modules according to the logical or physical structure; each of the modules is possibly further decomposed into a set of concurrent processes representing sensors, motors and their coordination according to task-achieving behaviors.

A behavioral module can be incrementally constructed by adding more processes and communication links. Each behavioral module is independent, but communicates with the rest of the system via message passing through channels. Multiple behavioral modules can be assembled into a complex system, whose overall behavior is achieved by the communication among the behavioral modules.

This approach takes the coordination between sensor, motor and memory as the first principle of any "living" system, rather than building sensor, motor and memory independently.
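The idea of an independent behavioral module that talks to the rest of the system only through channels can be sketched in modern standard C++ (the thesis's actual medium is Parallel C++ on transputers; `Channel`, `avoidModule` and the stop/go rule below are invented stand-ins, not code from the thesis):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A message channel standing in for the transputer-style channels that
// behavioral modules communicate over: non-blocking send, blocking receive.
template <typename T>
class Channel {
  std::queue<T> q_;
  std::mutex m_;
  std::condition_variable cv_;
 public:
  void send(T v) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
    cv_.notify_one();
  }
  T receive() {  // blocks until a message arrives
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return !q_.empty(); });
    T v = std::move(q_.front());
    q_.pop();
    return v;
  }
};

// A behavioral module as an independent process: it couples a range-sensor
// channel directly to a motor channel, with no central controller in between.
void avoidModule(Channel<double>& range, Channel<int>& motor, int nReadings) {
  for (int i = 0; i < nReadings; ++i) {
    double d = range.receive();      // sense
    motor.send(d < 0.5 ? 0 : 1);     // act: stop (0) near an obstacle, else go (1)
  }
}
```

Such a module runs as its own thread; composing several of them by wiring output channels to input channels gives the overall system behavior described above.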
The hierarchical architecture provides a way of describing a complex behavior at many different levels of detail, and also provides a way of implementing a robot system with complex behaviors.

Our approach to robot programming is: build the behavioral modules from parallel objects. Programming is then a matter of designing each individual module and composing the modules into a complex system.

Parallel distributed object oriented programming provides both effectiveness and expressiveness for building real-time, multi-sensory robot systems. Object oriented methodology supports modularity and information hiding for large and complex systems. Parallel distributed processing supports parallel processes and communication for constructing motor-sensor coordination and various control strategies for real-time systems. As a whole, parallel distributed object oriented facilities provide the basic constructs for programming open robot systems.

We have built a software system for a real robot arm — Excalibur — to simulate the tasks in forest harvesting, based on our approach to open robot system design and programming. The system is written in Parallel C++, and implemented in a transputer-based parallel environment. The goal for building such a system is two-fold:

• Investigate and explore a suitable framework and programming methodology for open robot systems. The forest is a good environment for our research and experiments because it is a typical unstructured and unpredictable environment.

• Build forest robots which are intelligent, flexible and robust. The forest industry is a
This thesis is only the beginning of our long-term research, whose goal is to build open robot systems with intelligence, flexibility and robustness, which can work in unstructured and unpredictable environments. Outline of This Thesis This thesis is organized as follows: Chapter 2 gives a survey of related work on robot system design and programming, which is most influential to ours. Chapter 3 discusses our approach towards the problem — building the robot system via behavioral modules and building behav-ioral modules via parallel objects. Chapter 4 describes how to build an autonoumous robot for forest harvesting in the transputer-based parallel environment, which demonstrates our ap-proach to design an open robot system. Chapter 5 illustrates how to use parallel C-H a parallel distributed object oriented language, to model the robot system based on our design. Chapter 6 summarizes our current work and point out the directions for the further research. Chapter 2 Related Work After all, a robot is generally reckoned to be a machine 'made in man's image'. — Geoff Simons "Is Man a Robot?" John Wiley & Sons, Page xv. The problem of developing a kind of open robot system has been recognized as a critical area in robotics. The solution will be challenging to the fundamental theory of Artificial Intelligence. While little work on representing such a robot system has been done, some notable contributions have recently been made along this line. Brooks and his colleagues did very interesting work on building artificial creatures. Brooks [Bro86] [BC86] proposed a robust layered control system, called subsumption architecture, for mobile robots. Layers in this architecture are made up of asynchronous modules that communi-cate over low-bandwidth channels. Each module is an instance of a fairly simple computational machine. Higher-level layers can subsume the roles of lower levels by suppressing their out-puts. 
Unlike the traditional decomposition of a mobile robot control system into functional modules, Brooks decomposed a mobile robot control system into task-achieving behaviors. Such a decomposition meets the requirements of multiple goals, multiple sensors and robustness. With this idea, Brooks [Bro87a] further proposed a hardware retargetable distributed layered architecture for mobile robot control, and successfully built several systems based on this kind of architecture. Herbert, a second generation mobile robot [BCN88], is a completely autonomous mobile robot with an on-board parallel processor and special hardware support for the subsumption architecture, as well as an on-board manipulator and a laser range scanner. The robot is capable of real-time three-dimensional vision, while simultaneously carrying out manipulation and navigation tasks. Connell [Con88] described a behavior-based arm controller composed of a collection of fifteen independent behaviors which run, in real-time, on a set of eight loosely-coupled on-board 8-bit microprocessors. Brooks [Bro88b] further suggested a possible mechanism for analogous robot evolution by describing a carefully designed series of networks, each one a strict augmentation of the previous one. The six-legged walking machine built under this framework is capable of walking over rough terrain and following a person who is passively sensed in the infrared spectrum. Brooks [Bro87b] also pointed out that robots with insect-level intelligence can be very practical for a large class of jobs in unstructured environments such as the home, agriculture and space. The philosophy behind the approach is:

• The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representation.
Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action. The notions of central and peripheral systems evaporate — everything is both central and peripheral [Bro88a].

• The separation of planning and plan execution is an inappropriate approach to behavior design. We must move from a state-based way of reasoning to a process-based way of acting, i.e., set up appropriate, well-conditioned, tight feedback loops between sensing and acting [Bro87c].

However, Brooks did not explore general purpose manipulators. A robot from Brooks' company can only perform the particular tasks it was built for. This is very similar to the biological case (e.g. a rabbit is born to be a rabbit), but is not practical for a wide range of uses. Besides, Brooks did not pay attention to parallel distributed techniques: his augmented finite state machines are simulated in conventional Lisp [Bro89], without an explicit notion of parallel processes and communications.

Rosenschein [Ros85] proposed the situated-automata approach, which seeks to analyze knowledge in terms of relations between the states of a machine and the states of its environment over time, using logic as a metalanguage in which the analysis is carried out. This approach, in contrast to the interpreted-symbolic-structure approach which has prevailed in Artificial Intelligence research over the decades, provides a way of reconciling the power of representation with real-time execution of AI systems. In this approach, a specification written in modal logic is first compiled into a circuit composed of parallel, serial or feedback components [RK87]. The compiled circuit, which has direct links to the environment, can run efficiently.
Kaelbling further developed this idea [Kae88] and pointed out that classical planning is inappropriate for generating actions in a dynamic world. He presented a formalism, called Gapps, that allows a programmer to specify an agent's behavior using symbolic goal-reduction rules which are compiled into an efficient parallel program. Using the relationship between logic and situated automata, Rosenschein [Ros89] proposed an approach to synthesizing information-tracking automata from environment descriptions. In this approach, a description of the automaton can be derived automatically from a high-level declarative specification of the automaton's environment and the conditions to be tracked. The output of the synthesis process is the description of a sequential circuit which at each clock cycle updates the automaton's internal state in constant time, preserving the correspondence between the states of the machine and conditions in the environment. The proposed approach retains much of the expressive power of declarative programming while ensuring the reactivity of the run-time system. This approach is interesting in the sense that an inefficient but expressive program can be compiled into an efficient program which can be executed in real-time and react to the environment. However, the syntax of the specification language is based on modal logic, which is not easily comprehended. Furthermore, there is no methodology for constructing programs in that language. Therefore, the applicability of the approach is limited. Rosenschein and Kaelbling further discussed an architecture for robot systems in which planning and reactive control are combined [RK89]. They showed that the distinction between planned and reactive behaviors is largely in the eye of the beholder: systems that seem to compute explicit plans can be redescribed in situation-action terms and vice versa.
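The shape of such an information-tracking automaton can be illustrated with a toy example (entirely our invention; Rosenschein's synthesis would derive such a circuit automatically from a declarative specification): one counter of internal state, updated in constant time per clock cycle, maintaining the correspondence "output is true iff the door has been observed open for at least two consecutive cycles".

```cpp
// A toy information-tracking automaton in the situated-automata style.
// The tracked condition and all names are invented for illustration.
struct DoorTracker {
  int openTicks = 0;                // internal state of the "circuit"

  bool step(bool doorOpenNow) {     // one clock cycle, O(1) work
    openTicks = doorOpenNow ? openTicks + 1 : 0;
    return openTicks >= 2;          // tracked environmental condition
  }
};
```

The point is that the run-time object is this small and this fast, even though the condition it tracks was stated declaratively.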
Kaelbling [Kae87] proposed an architecture for intelligent reactive systems which is motivated by the desires for modularity, awareness, and robustness: the system should be built incrementally from small components that are easy to implement and understand; at no time should the system be unaware of what is happening; it should always be able to react to unexpected sensory data; and the system should continue to behave plausibly in novel situations and when some of its sensors are inoperative or impaired. Kaelbling discussed the relationship between planning and perception and proposed a framework for adaptive hierarchical control. Thus, if the more competent levels fail or have insufficient information to act, the robot will be controlled by a less competent level that can work with weak information until the more competent components recover and resume control.

Brooks and Rosenschein did not give a clear view or methodology of programming a robot system, and therefore these ideas have not been used widely in Robotics. Up to now, a lot of work has been done on robot programming and real-time operating systems for robot control. Hayward [HH88] investigated an approach to programming multiple arms using Kali — an environment for programming cooperative control systems. However, being based on very loosely coupled computers, Kali has limited ability to support multi-processing and communication. Narasimhan et al. [NSH88] described a standard architecture for controlling robots, but this architecture is more "device-host" oriented than parallel distributed objects. Salkind [Sal88] from the Courant Institute developed an operating system, SAGE, which runs on a Motorola 68020 processor board and provides a variety of services, including process and memory management, protocol support, and precise timing facilities. The overall communication is, however, based on a shared-memory approach.
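Kaelbling's adaptive fallback between competence levels, described above, can be sketched as a simple arbiter (our simplification, with invented names and placeholder commands): on each cycle control goes to the first, most competent level that can produce a command from the available information.

```cpp
#include <functional>
#include <optional>
#include <vector>

// A sketch of adaptive hierarchical control: levels are ordered from most to
// least competent; a level returns a command, or nothing if it lacks the
// information to act.
using Level = std::function<std::optional<int>(double sensor)>;

int selectCommand(const std::vector<Level>& levels, double sensor) {
  for (const auto& level : levels)
    if (auto cmd = level(sensor)) return *cmd;  // most competent level acts
  return 0;  // last-resort reflex: stop
}
```

When the planner's sensor data is unusable, control drops to the reflex level automatically, and returns to the planner as soon as it can act again.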
Object oriented methodology provides both modularity and data abstraction, which can be very useful for programming a large and complex system. Kafura [Kaf88] proposed a way of building real-time systems with a concurrent object oriented programming language. The language's underlying model of concurrency reflects the distributed and concurrent nature of real-time systems, while the language's object orientation and the reusability properties of class inheritance address the embedded and evolutionary aspects of real-time systems. Languages, however, only cover half of the story, i.e., the basic constructs which can be used. The other half, programming methodology in the language, which usually reflects the theme of the architecture of the system, is equally important. Smithers and Malcolm [SM87] proposed a behavioral approach to robot task planning and off-line programming. This approach, corresponding to Brooks' decomposition, is believed to be robust and suitable for handling uncertainty. However, they neither described the underlying languages, nor gave any examples to illustrate the programming methodology.

Here in the departments of Computer Science and Electrical Engineering, some important contributions have been made to robot vision and overall architectures. Mulligan et al. [MML89] described a vision system for visual control of the manipulator arm that achieves a tight coordination between perception and action, illustrating a typical module that could be part of an open robot system. Poon [Poo88] proposed a kinematics control algorithm — Functional Joint Control (FJC) — in which each joint iteratively computes its own movement. FJC has two advantages over other inverse kinematics algorithms. Firstly, it is possible to implement the algorithm in a parallel computing environment, which speeds up the computation.
Secondly, it is possible to incorporate the tight coordination between sensors and actuators into this algorithm. These advantages will become obvious in Chapter 5, where we present a distributed control system for a robot arm, using a method derived from FJC.

Chapter 3

A Behavioral Approach to Robot Design and Programming

Whichever model we take as primary is often largely a psychological matter — a reflection of taste, hobbies, and habits.
— Geoff Simons, "Is Man a Robot?", John Wiley & Sons, Page 30.

In this chapter, we propose our general framework for robot system design and programming. Section 1 presents our approach, a two-dimensional hierarchical architecture for robot system decomposition. We show that a robot system is in general a typical distributed system and that robot control should be considered as a communicative process structure. Furthermore, we argue that such a structure should have a tight coordination between sensors and manipulators, as well as multiple layers of description of behaviors. Section 2 discusses the characteristics of a parallel distributed object oriented language for robot programming. We illustrate, through two typical examples, the advantages of a parallel distributed object oriented framework over the traditional approach to robot programming.

3.1 Building an Open Robot System via Behavioral Modules

An open robot system would be equipped with multiple sensors for effectively reacting to changes in the environment and communicating with other agents (robots or human beings). Robot systems of this kind are inherently distributed, since multiple sensors and manipulators are often distributed over different processors. Unlike previous approaches, in which the computer was taken as a central computing machine while sensors and manipulators were considered input-output peripherals, we represent an open robot system as a distributed system.
The processes of sensors and manipulators are designed to be independent, but communicating with each other. There are several differences between a sequential and a distributed version of robot control. In the sequential programming paradigm, control is programmed as an algorithm with suitable data structures for solving the algebraic and differential equations of robot kinematics and dynamics:

    control = data structures + algorithm

In the distributed programming paradigm, control is accomplished by a set of processes and their communication:

    control = process structure + communication protocol

Feedback is considered the central theme in conventional control theory. However, we consider coordination among multiple processes in a distributed control system as the general mechanism; feedback, inherent in processes, is a special case of coordination.

3.1.1 Behavioral Decomposition

Unlike most previous approaches, which decompose the system into sub-systems according to external functions — sensors, effectors, knowledge base, reasoning and planning mechanisms, etc. — Brooks [Bro86] proposed an approach of behavioral decomposition, which decomposes the system into task-achieving behaviors. A behavior, modeled as an augmented finite state machine (AFSM), is an integration of perception, action and cognition. An AFSM is composed of a set of input registers, a set of output channels, and a finite state machine. Several behaviors can be connected directly, or via a suppress or inhibit mechanism. All the behaviors are distributed in layers of a hierarchical structure. Primitive behaviors such as avoiding obstacles sit in lower layers, while advanced behaviors such as building maps sit in higher layers. A system can be built incrementally, by building the most primitive behaviors first and adding more and more advanced behaviors.
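The AFSM shape just described — input registers, a finite state machine, output channels — can be sketched in standard C++ (a two-state go/stop machine of our own invention, not one of Brooks' machines; a vector stands in for the output channel):

```cpp
#include <vector>

// A minimal AFSM: input registers latch the most recent message, the finite
// state machine inspects them on each step, and results go to an output
// channel.
enum class State { Idle, Moving };

struct Afsm {
  double rangeReg = 1e9;      // input register: latest range reading
  State state = State::Idle;  // finite state machine state
  std::vector<int> out;       // output channel (recorded for inspection)

  void latch(double range) { rangeReg = range; }  // new input overwrites register

  void step() {  // one transition: read registers, maybe emit an output
    switch (state) {
      case State::Idle:
        if (rangeReg > 0.5) { state = State::Moving; out.push_back(1); }
        break;
      case State::Moving:
        if (rangeReg <= 0.5) { state = State::Idle; out.push_back(0); }
        break;
    }
  }
};
```

Note that the register is overwritten rather than queued: an AFSM always reacts to the most recent sensor value, never to a backlog.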
There are several advantages of this approach over previous ones:

• Under behavioral decomposition, we can incrementally build complex behaviors and connect them in an appropriate way. This architecture is much clearer than an approach that adds unrelated facts and rules to a single database and builds more and more complicated sensors and effectors.

• Behavioral decomposition can represent tight coordination between sensors and motors, which is essential for dealing with unstructured and unpredictable environments. Besides, various control strategies can be used to make the robot fulfill certain kinds of tasks, while retaining the basic survival mechanisms. Multiple layers of decision making for complex behaviors can also be constructed.

• Systems with behavioral decomposition can be robust. Since each behavior is independent, any behavior only affects the system locally. Even if one behavior stops functioning, the rest of the system will continue working.

We propose a more general approach to behavioral decomposition. Unlike Brooks, we decompose the system into a set of cooperative processes, instead of AFSMs and suppress or inhibit nodes, which are just special kinds of processes. Each process consists of a set of input and output channels, as well as a body, which can be any set of functions, including channel communication, creating new processes, etc. Processes can communicate with one another through input-output channels or via shared memories. A process can function in sensing, in acting, or in coordinating. A group of processes can be taken as an integrated module, which can be used for constructing more complex behaviors.

Basically, there are two kinds of processes. One is the permanent process, which is created whenever the system is instantiated. The other is the dynamically generated temporal process, which is created for a certain task and dies when the task finishes.
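The distinction between permanent and temporal processes can be sketched with threads (a stand-in for Parallel C++ processes; the reflex and grasp tasks are invented placeholders):

```cpp
#include <atomic>
#include <thread>

// Permanent vs. temporal processes: the permanent reflex process is created
// when the system is instantiated and runs for its whole life, while a
// temporal process is spawned for one task and dies when the task finishes.
std::atomic<long> reflexTicks{0};
std::atomic<bool> systemUp{true};

void reflexProcess() {  // permanent: reactive, task-independent
  while (systemUp) {
    ++reflexTicks;      // stands in for a survival behavior running forever
    std::this_thread::yield();
  }
}

bool graspTask() { return true; }  // temporal: purposive, exists per task
```

A temporal process is joined (it dies) when its task completes; the permanent process outlives any number of such tasks and is only stopped when the whole system shuts down.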
On the other hand, there are two kinds of action and perception. One is called purposive action or perception, which is task-dependent. The other is called reactive action or perception, which is task-independent and serves primitive survival or adaptive mechanisms. In general, a dynamic process corresponds to a purposive action or perception, and a permanent process to a reactive action or perception. The overall behavior is described by the functionality of each process and the coordination among processes, i.e., how the various processes communicate through channels. The advantages of our generalization are:

• the existing theory and methodology on parallel and distributed systems can be applied to robot systems;

• a robot system can be built on top of a general computing environment, instead of on special-purpose hardware;

• part of the system can be dynamically configured via dynamic processes, while Brooks' systems can only have a fixed configuration;

• the approach of decomposing behavior into sensorimotor-coordinated processes has a strong background, with results from psychology and physiology [Bra84].¹ Building systems based on this approach can lead us to a more general (probably more fundamental) understanding of questions such as "What is a living system?", and finally "What is intelligence?".

3.1.2 Structural Decomposition

An open robot system often consists of many physical or logical parts. For example, a robot can have one or more arms, each of which has a hand at its end; besides, it can have a mobile base and one or more visual sensors. Furthermore, an arm is composed of several joints, each of which has an angle position sensor; the hand can be composed of several fingers, each of which is composed of several joints and touch sensors; the mobile base can be composed of wheels or legs, and so forth.
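The nesting of arms, hands, joints and fingers just described can be mirrored directly as nested behavioral modules, each refining an abstract command for the parts below it. The Python sketch below is an illustrative assumption: the module names and command vocabulary are hypothetical, chosen only to show the refinement pattern.

```python
# Sketch of structural decomposition: each behavioral module refines an
# abstract command into commands for its sub-modules. The module names
# and command strings are illustrative assumptions, not thesis code.

class Module:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def handle(self, command, trace):
        trace.append((self.name, command))
        for child in self.children:
            child.handle(self.refine(command, child), trace)

    def refine(self, command, child):
        # Higher levels describe more abstract operations; each level
        # rewrites a command for the sub-module that carries it out.
        return f"{command}/{child.name.lower()}"

arm = Module("ARM", [Module(f"JOINT-{i}", [Module(f"SERVO-{i}")])
                     for i in (1, 2)])
robot = Module("ROBOT", [arm, Module("BASE"), Module("HAND"), Module("EYE")])

trace = []
robot.handle("avoid-obstacles", trace)
# The abstract command fans out: the arm, base and hand each avoid in
# their own terms, down to the servo level.
```

The trace records how one abstract behavior becomes a family of part-specific behaviors, which is the multi-level "avoid obstacles" situation discussed in the following paragraphs.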
A behavior usually involves operations on these parts and coordination between these operations. When the structure of the system becomes complicated, it is difficult to represent a behavior at a single level. For example, the "avoid obstacles" behavior is easily modeled as one process if the structure of the robot is abstracted as a point. However, if we try to represent the same behavior for a more complex robot, which has a base, an arm and a hand, "avoid obstacles" means "base avoid", "arm avoid" as well as "hand avoid" obstacles. Finally, such a behavior is carried out via the servo motor control. Since an open robot system can be composed of hierarchically distributed logical or physical devices, and exhibits various layers of behaviors, we decompose the system into a hierarchical structure of behavioral modules. From higher to lower levels, for instance, we can have ROBOT; ARMs, HANDs, BASE and EYEs; and JOINTs, FINGERs, LEGs, etc. as the types of behavioral modules over the layers (see Figure 3.1²).

¹ Currently, there are some interesting discussions in the newsgroup "Cybernetics and Systems" on the relationship between cognitive structures and communicative process structures, which can be considered as further support for our approach.
² Here we only represent the structural decomposition; a line does not mean a link between two modules.

[Figure 3.1: Structural Decomposition of the Robot System, showing layers of HAND, EYE, BASE, JOINT, FINGER, LEG and SERVO modules]

This particular decomposition, however, is only one example of how the system may look. Different systems with different configurations can have different decompositions. For example, an arm can be one part of a base, a hand can be one part of an arm, and so forth. But the principle is the same. Each module has a set of inputs and outputs for communication. All the modules are
working in parallel. The communication between any two modules is achieved by message passing through channels, so that the control of the whole system is totally distributed. Generally, the characteristics of structural decomposition are:

• abstraction: the higher the level a module is on, the more abstract the operations it describes. Usually the abstract operations can be carried out by several lower-level modules. In our example, the behavior of ROBOT could be carried out by the lower-level behaviors of BASE, ARM, HAND and EYE, and the behavior of ARM could be further carried out by the behaviors of the JOINTs.

• coordination: the higher the level a module is on, the more coordination between different parts takes place in it. In our example, ROBOT is the coordinator of BASE, ARM, HAND and EYE, and ARM is the coordinator of the JOINTs. The lowest level, SERVO, only considers the control of DC motors.

The advantages of structural decomposition are:

• even though the system is designed to be totally distributed, i.e., modules can be physically on different processors and each module starts functioning independently, there are still multiple levels of coordination between different parts.

• we can explain the overall behaviors in different layers, from coarse to fine; on the other hand, we can implement a complex behavior through many layers of abstraction. As an example, the behavior of "grasp" can be implemented by a dextrous hand or by a simple gripper; however, this makes no difference to a higher level.

3.1.3 Two-dimensional Decomposition

We extend behavioral decomposition to a two-dimensional hierarchical structure by adding another kind of decomposition, structural decomposition (see Figure 3.2).
[Figure 3.2: System Decomposition]

The overall system consists of two orthogonal hierarchies: one is structural decomposition (vertical), which represents the abstraction or coordination hierarchy; the other is behavioral decomposition (horizontal), which represents the evolution hierarchy: one can incrementally add more behaviors to each behavioral module. An open robot system is developed along both dimensions. Structural decomposition decomposes the system into a hierarchy of behavioral modules according to the system's physical or logical structure. A system can be built with a few simple devices at the beginning, and then be elaborated by adding more devices such as sensors and manipulators, or by replacing the simple devices with complex ones. Behavioral decomposition decomposes the behavioral modules into a set of behaviors which are integrations of perception, cognition and action. A behavioral module can be built with simple primitive behaviors at the beginning, and elaborated by incrementally adding more advanced behaviors. For the modules on the lower levels of our structural hierarchy, the behaviors are mostly independent of any particular task; we refer to them as reactive behaviors. These behavioral modules have the basic mechanisms for reacting to the environment and for surviving under different situations, as well as for following instructions from higher levels. Higher-level behavioral modules usually consist of both task-related behaviors and reactive behaviors. Brooks' system corresponds to the top-level module in our structural hierarchy. The two-dimensional decomposition has the advantages of both structural decomposition and behavioral decomposition. The system built in this structure is a totally distributed system with multiple layers of coordination. Besides, such an architecture is much more robust than either of the one-dimensional decompositions.
Even though a module or a behavior may stop functioning, the rest of the system will continue working.

3.2 Building Behavioral Modules via Parallel Objects

3.2.1 Characteristics of Parallel Distributed Object Oriented Languages

The distinction between parallel and distributed languages is vague. With parallel, we emphasize parallel computation; with distributed, we emphasize the independence of each computation and the communication between computations. We call a language parallel and distributed if we emphasize both aspects. Furthermore, we call an object in a parallel distributed object oriented language a parallel object if the object is defined as a combination of processes and communications. A parallel distributed object oriented language, in general, provides both concurrency and modularity. Usually, it should have the following basic primitives:

• primitives for defining classes, which support data abstraction and modularity;

• primitives for process definition, creation and synchronization, which support concurrency;

• primitives for communication between different processes via channels, which support locality.

With these primitives, the language has the following characteristics, which are very important for behavioral modeling since they correspond to our principles of system decomposition:

1. supporting a concurrent multi-agent architecture

An object in a parallel distributed object oriented paradigm can model an active agent with a set of communication channels which support message passing between objects. An object can be taken as an instance of a device or a service which is independent, but communicates with the other parts of the system through the designed interfaces. Such a concurrent multi-agent architecture provides a kind of organization with a large collection of computational services and interactions.
The resultant system based on this model can be robust, able to deal with uncertainty and local failures, and makes it possible to accomplish tasks in situations with inconsistency and incompleteness. The role of a parallel object in a parallel program is similar to that of an integrated circuit in a large electronic system. This kind of modularity is critical for both the system's design and its maintenance. From a design point of view, system programming is simply a way of choosing suitable objects and assembling them to achieve some specified behaviors. From the maintenance point of view, changes can be made locally: just "pull out" the "wrong" component, repair or replace it, and "plug in" the "right" one.

2. supporting communication via channels and a decentralized control mechanism

For a real-time system, it is often inefficient to keep a global database for communication. Centralized control would become a bottleneck in an open robot system with multiple sensors and manipulators. One of the most important advantages of using message passing via channels and decentralized control is locality. This ensures that each component can be kept relatively independent. The control and communication interface will be simple and clear. Furthermore, asynchronous control and channel communication provide the basic constructs for building sensorimotor coordination and multiple layers of decision making.

3. supporting information hiding and abstraction

Object oriented programming provides techniques for information hiding [PCW87]. Using these techniques we can easily represent the hierarchical behaviors of the system. On the other hand, parallel distributed techniques provide concurrency, so that each of the behavioral modules can run in parallel, distributed over different processors.
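The three kinds of primitives listed above can be combined into a "parallel object". The sketch below is an assumption written in Python rather than the FCP dialect used by the thesis: a class provides the data abstraction, each instance runs its own process body, and all communication goes through channels.

```python
import queue
import threading

# Sketch of a 'parallel object': a class (data abstraction) whose
# instances each run a process body and communicate only through
# channels. The Doubler example is an illustrative assumption.

class ParallelObject:
    def __init__(self):
        self.inbox = queue.Queue()                 # input channel
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()                       # process creation

    def send(self, message):
        self.inbox.put(message)                    # channel communication

    def _run(self):
        while True:
            message = self.inbox.get()
            if message is None:
                break
            self.body(message)

    def stop(self):
        self.inbox.put(None)
        self._thread.join()                        # synchronization

class Doubler(ParallelObject):
    """An agent that doubles each request onto an output channel."""
    def __init__(self, out):
        self.out = out
        super().__init__()

    def body(self, message):
        self.out.put(message * 2)

out = queue.Queue()
agent = Doubler(out)
for n in (1, 2, 3):
    agent.send(n)
agent.stop()
results = [out.get() for _ in range(3)]
```

Replacing `Doubler` with another subclass changes the component without touching the rest of the system, which is the "pull out / plug in" maintenance property described under point 1.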
3.2.2 Programming in Parallel Distributed Object Oriented Languages

Programming a robot system in the parallel distributed object oriented framework is totally different from programming in a conventional language. The former provides explicit structures for parallel execution of commands and for sensorimotor coordination.

• The overall program is not a sequence of commands; rather, the program is a structure which represents the processes and their communication. Under this framework, off-line programming for uncertainty or incompleteness can be explicitly represented. We can have many processes to deal with various kinds of sensory information, and the resultant sequence of actions for the manipulators partly depends on the sensory inputs. On the other hand, a suitable mapping of processes to processors will make it possible to react to any sensory input in real time.

To state the characteristics clearly, let us consider a typical example, a robot manipulating the blocks world, and compare the differences between the traditional program and the parallel distributed object oriented program. If the robot is asked to fulfill certain goals in the blocks world, the previous approach to programming would be illustrated by the program shown in Figure 3.3.³ Here, the sensory information is only derived from the first function call. This program generates a sequence of commands: using the sensors to find out the initial condition of the block

³ We use Prolog to give the example. The functionality would be the same in any other sequential language.

    block_world(GoalState) :-
        see_initial_condition(InitialState),
        planner(InitialState, GoalState, Plan),
        execute(Plan).
Figure 3.3: Traditional Approach to Programming Using Sensors and Plans

world, generating a plan based on that condition and the desired goal, and executing this plan, which is a sequence of action commands such as "move to A", "pick up B", etc. The problem with this program is that the use of sensor information is once and for all. The robot is unaware of the environment while it concentrates on generating and executing the plan. A robot of this kind cannot be used in unstructured, time-varying or unpredictable environments, since sudden changes can cause an accident or even damage the robot. In the parallel and distributed paradigm, we can construct a program which uses sensors to get continuous information, uses a planner to continuously generate actions, and uses manipulators to perform the actions. These three components work in parallel, communicating with one another, so that any change of conditions affects the plan generated and thus the action sequences. Our program generates a communicative process structure as shown in Figure 3.4, where each box is a parallel object (agent), or a process, and the lines between boxes are communication channels.

• Since sensors and manipulators can have explicit coordination under the new programming paradigm, the program for motion becomes easier and clearer.

[Figure 3.4: The Correct Use of Sensor and Plan, with boxes SENSOR0, SENSOR1, GOAL, PLANNER and EXECUTOR connected by channels]

    task :-
        see_object(F1),
        moveto(F1),
        grasp,
        see_central_place(F2),
        moveto(F2),
        release.

Figure 3.5: Traditional Approach to Motion Planning

As an example, consider the execution of the task "pick up a block, put it on the central place". In the conventional approach, the program has to specify the complete trajectory of motion (see Figure 3.5).
This program first sees the position and orientation of the object, which is represented as a kinematic frame F1, then issues a command to the robot to move to that frame, followed by another command to grasp the object, etc. The same problem occurs as in the previous example: if, during the execution, the object is moved (by other robots or human beings) or an obstacle suddenly appears on the path of the motion, the task will simply fail. Besides, this kind of representation for motion makes the vision system extremely difficult to build, since the visual processing is asked to give a precise position and orientation with six degrees of freedom. In the parallel and distributed framework, the program represents a communication structure between the processes move, see, etc. The function of vision is to give continuous information (probably not so precise at times) to guide the motion. The "move-to" task is coordinated with the built-in permanent process "avoid obstacles" as shown in Figure 3.6. SENSOR1 continuously obtains information about the object, such as direction

[Figure 3.6: Motion Planning under Communicative Process Structure, with SENSOR1 sending an object description and direction to MOVETO, and SENSOR2 sending obstacle signals to AVOID]

and distance; SENSOR2 continuously outputs signals about obstacles, which affect the direction of the movement. The corresponding programs for Figure 3.4 and Figure 3.6, written in FCP-H (Object Oriented Flat Concurrent Prolog), are described in Appendix A.

Robot System Design and Programming: In the next two chapters, we present an application of the framework we proposed here.
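The coordination in Figure 3.6 can be reduced to a small steering rule: MOVETO follows the direction suggested by the object sensor unless the obstacle sensor vetoes it. The Python sketch below is an illustrative assumption; the direction encoding and the deflection rule are hypothetical, not the thesis design.

```python
# Sketch of Figure 3.6: MOVETO blends the direction suggested by the
# object sensor (SENSOR1) with an obstacle signal (SENSOR2). The
# steering rule here is an illustrative assumption.

def moveto_step(object_direction, obstacle_signal):
    """Return the heading for one control step: follow the object
    unless the 'avoid obstacles' process vetoes that direction."""
    if obstacle_signal == object_direction:
        # Obstacle on the desired path: deflect instead of stopping.
        return "left" if object_direction != "left" else "right"
    return object_direction

path = [moveto_step(d, o) for d, o in [
    ("forward", None),       # clear path: head toward the object
    ("forward", "forward"),  # obstacle ahead: AVOID deflects the motion
    ("right",   "forward"),  # obstacle no longer on the desired heading
]]
```

Because both sensors feed every step, a moved object or a new obstacle changes the very next heading, which is precisely what the one-shot program of Figure 3.5 cannot do.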
Robot system design and programming involves the following steps, which can be interleaved during the whole procedure:

• decomposing the robot system into a hierarchical structure of behavioral modules, based on the configuration of the robot hardware and the parallel distributed computing environment;

• designing the interface of each module and the communication protocols between different modules;

• modeling the behavioral modules using parallel distributed object oriented languages, i.e., implementing behavioral modules via concurrent processes and communication.

Chapter 4

Design of an Autonomous Robot for Forest Harvesting

General system theory is a set of related rules, definitions, assumptions, propositions, etc. — all of which relate to the behavior of organizations, of whatever type.
— Geoff Simons, "Is Man a Robot?", John Wiley and Sons, page 93

In this chapter, we illustrate our approach, in particular the structural decomposition, by going through the design procedure for an autonomous robot in a transputer-based multicomputer environment. Section 1 describes the desired behavior of an autonomous robot in a forest environment; Section 2 discusses the installation of robots for such an environment in general; Section 3 concentrates on a particular robot arm in our lab, Excalibur¹, and the tasks we want the robot to perform to simulate forest harvesting work; Section 4 discusses the structural decomposition of the current system and the interface design for each behavioral module; Section 5 proposes an alternative structural decomposition under another configuration of the transputer system and the robot controller.

¹ Robotics Systems International, Sydney, B.C., Canada.

4.1 Behaviors in Forest Harvesting

Much research has been done on the application of tele-robotics (tele-operation) to forest harvesting by a group led by P. D. Lawrence in Electrical Engineering at UBC.
We propose here an approach to building totally autonomous robots which can take over dangerous and tedious work without the assistance of human beings. Some tasks or behaviors for such robots are:

• moving logs: the robot walks around and puts scattered logs into a pile, or loads the logs onto a truck;

• cutting down trees: the robot looks for the trees which should be harvested, and cuts them down;

• planting new trees: the robot digs a hole at the right place, and puts in a new seedling.

An autonomous robot, unlike a tele-robot, performs these tasks independently. It is a sensor-guided agent with the essential mechanisms for surviving and for avoiding harm to human beings. On the other hand, an autonomous robot for the forest environment, unlike one for the assembly industry, performs tasks which do not require high accuracy or speed, but do require real-time reaction to changes in the environment. There are no well-defined world models for the forest environment, and there are no pre-planned action sequences for the robot manipulator. Each autonomous robot is programmed for performing a particular task or combination of tasks. We can have many such autonomous robots working together in a large forest: some moving logs, some cutting down trees, and some planting new seedlings. These autonomous robots may replace the tele-robots some day.

4.2 The Robot

An autonomous robot for forest harvesting should be an open robot which may be equipped with multiple end-effectors, multiple manipulators, as well as flexible and robust sensors. Physically, we prefer to use multiple simple sensors, each of which needs very little computation, rather than a complex one which requires a great amount of computation. Logically, we need multiple meaningful sensory signals and distributed memory, instead of a complicated description of the whole world in a large database.
Currently, there is a variety of sensors which are very useful for our autonomous robot [FGL87]:

• range sensor: a range sensor measures the distance from a reference point (usually on the sensor itself) to the objects within the operational range of the sensor. Range sensors are mostly used for robot navigation and obstacle avoidance.

• proximity sensor: a proximity sensor outputs a binary signal which indicates the presence of an object within a specified distance interval. Typically, proximity sensors are used for near-field work in connection with object grasping or avoidance.

• touch sensor: touch sensors can generally be classified into two categories: binary and analog. A binary sensor is like a switch which responds to the presence or absence of an object. An analog sensor, on the other hand, outputs a signal in proportion to a local force. Touch information can be used to locate and recognize objects, as well as to control the force exerted by a manipulator on a given object.

• force and torque sensor: a force and torque sensor measures the Cartesian components of force and torque acting on the joints of a robot arm, or measures the deflection of the mechanical structure of a robot due to external forces. Force and torque sensors are used primarily to feed back the reaction forces to the robot controller for adaptive force control.

• joint position sensor: a joint position sensor measures the current joint angles, which are mainly used in the kinematics loop for position control. A system using visual sensing for measuring joint angles is presented in [MML89].

• vision sensor: the advantage of using vision sensors, in most cases cameras, is that a vision sensor can capture various kinds of information. It is generally believed that for human beings ninety percent of sensory information comes from visual input. However, visual perception needs much more computation than other sensing.
There are two ways to alleviate the problem: one is to restrict the visual task to recognizing only the simple features related to a particular task; the other is to use an independent computational unit, a processor in a parallel environment, for performing the major computation and sending out the important features in real time. The reason for using more than one of these sensors on an autonomous robot is twofold:

• Robustness and fault tolerance are two of the major concerns for a robot working in a hazardous environment. With multiple sensors and the appropriate interconnections, a robot can continue functioning even though some sensors are broken.

• By using multiple sensors and processing the sensory information in parallel, we can improve the real-time response of the robot to some extent.

In addition to multiple sensors, an autonomous robot may need multiple manipulators and multiple end-effectors which are coordinated to fulfill a particular task. For the job of "cut down trees", the robot needs a mobile base (a legged robot or a wheeled vehicle) to "walk" around looking for the trees, and an arm with a gripper to grasp the tree and a saw to cut it down. It is not necessary to have closed-form inverse kinematics for the arm of such a robot. There are two drawbacks of using closed-form inverse kinematics for open robot systems:

• the computations are heavy and cannot be distributed over parallel processors;

• the calculation is once and for all; there is no sensory feedback during the execution.

Instead, a functionally decomposable [Poo88] arm would be very useful, since each of its joints can independently calculate its movement and react to the related sensory information.
For example, the joints mainly responsible for position can react to the range sensor to track an object and to avoid obstacles, while the joints mainly responsible for orientation can react to the proximity sensor to guide the end-effector in grasping objects. Besides, an arm should preferably have six degrees of freedom so that it can reach any position in space and its end-effector can take any orientation. A robot may have various kinds of tools (end-effectors), including grippers and hands, for fulfilling various kinds of tasks. For example, a robot with a gripper which can open and close can perform useful tasks such as "pick up" or "release" objects. A hand may be used for more flexible grasping of objects of irregular shape. A mobile base, usually a wheeled vehicle or a legged robot, is needed for an autonomous robot working in the forest. A wheeled vehicle would be easier to design but not suitable for rough terrain. A legged robot, however, is more expensive to build.

4.3 Robot Arm Excalibur and its Task Design

Even though our goal is to build open robot systems for forest environments, at the first stage we would like to build an autonomous robot system in our lab which can simulate tasks for forest harvesting. Currently, we have a six-joint PUMA-like arm, Excalibur, with a simple one-switch touch sensor on its gripper (see Figure 4.1).
The functions of the joints [RSI88] are:

[Figure 4.1: Functions of Excalibur (copied from the Excalibur manual)]

• swing: this joint connects the base to the turntable, and rotates the entire robot clockwise or counterclockwise on the base;

• shoulder: this joint connects the turntable to the upper arm, and provides the main pitching motion of the arm;

• elbow: this joint connects the upper arm to the upper forearm, and provides a pitching motion too;

• upper-rotate: this joint connects the upper forearm to the lower forearm, and provides a rotational motion about the longitudinal axis of the entire forearm;

• wrist-flex: this joint connects the lower forearm to the wrist. It may provide a pitching or a yawing motion, depending on the orientation of the upper-rotate;

• tool-rotate: this joint connects the wrist to the end-effector, and provides a continuous rotate mode;

• end-effector: the current Excalibur is equipped with a parallel-jaw gripper. However, this can be replaced by the user with any number of tools designed to do particular jobs. We installed a simple one-switch touch sensor on the gripper to obtain information about the presence of an object.

From the above description, we see that this arm is functionally decomposable and can be divided roughly into two independent parts: the first three joints, swing, shoulder and elbow, position the arm, and the last three joints, upper-rotate, wrist-flex and tool-rotate, mainly determine the orientation of the end-effector. Excalibur has a master arm, a scale model of the manipulator, used to control the manipulator manually: the manipulator is designed to follow the master arm in real time. We did not use it for the current system, but considered its use in an alternative configuration of the system, which we describe in Section 4.5 of this chapter.
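The functional decomposability just described means the two joint groups can be commanded independently. The Python sketch below is an illustrative assumption (the joint-increment encoding is hypothetical): each group of joints reacts only to its own sensory-driven command, without disturbing the other group.

```python
# Sketch of Excalibur's functional decomposability: the first three
# joints position the arm, the last three orient the end-effector.
# Joint names follow the text; the increment encoding is an assumption.

POSITION_JOINTS = ("swing", "shoulder", "elbow")
ORIENTATION_JOINTS = ("upper-rotate", "wrist-flex", "tool-rotate")

def joint_deltas(position_cmd=None, orientation_cmd=None):
    """Build the six joint increments for one movement command; each
    joint group reacts only to its own sensory-driven command."""
    deltas = dict.fromkeys(POSITION_JOINTS + ORIENTATION_JOINTS, 0.0)
    if position_cmd:          # e.g. driven by a range sensor
        deltas.update(position_cmd)
    if orientation_cmd:       # e.g. driven by a proximity sensor
        deltas.update(orientation_cmd)
    return [deltas[j] for j in POSITION_JOINTS + ORIENTATION_JOINTS]

# Track an object (position group) while leaving orientation unchanged:
tracking = joint_deltas(position_cmd={"swing": 5.0})
# Adjust the gripper's orientation without disturbing the position:
orienting = joint_deltas(orientation_cmd={"wrist-flex": -10.0})
```

Because the two groups never overlap, a movement computed for one group leaves the other group's increments at zero, which is why each joint module can compute its movement independently.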
The controller of Excalibur consists of a digital computer and several analog circuits for controlling each joint and the gripper. The controller accepts a set of commands which can be issued directly from a host computer or an ASCII terminal. On the other hand, the controller can bypass the digital computer by connecting the analog circuits to the manual port. The cable from the master arm can be plugged into this port, so that analog position signals from the master arm can be used to control the joint movements. Two of the important commands for kinematic control are:

• "moveby j1 j2 j3 j4 j5 j6", causing the joints to move, where ji is the amount of the i-th joint's movement, and

• "manip", returning the current angles of the six joints from the controller. These angles are used in the kinematics loop.

The commands for controlling the gripper are "open step" and "close step", where step is the number of steps the gripper will open or close. Through the parallel digital input/output port, we can get the current state of the touch sensor using the command "inp". One of our research goals is to program robots like Excalibur to perform various tasks, such as the touch-sensor guided search described below. When the robot begins to work, it starts to search for the "trees" within a certain range. The arm goes forward with the gripper opening and closing constantly. If the touch sensor detects an object while the gripper is closing, the gripper keeps holding the object. In this situation the hand turns downward and the arm moves down towards the ground until an outside signal (a simulated range sensing from the terminal) comes, which indicates that the arm is almost touching the ground. This same signal also triggers the gripper to open, so that the object is released on the ground.
If the touch sensor detects no object ("tree") before the arm reaches its limit position, the arm moves back and turns through an angle (say about 15 degrees), and the robot starts to search along the new direction. There are two ways to program such robots. In the traditional way, the programmer has to know exactly where the object is, and the commands issued to the robot controller cause the robot to move to the positions specified in the commands. This kind of programming paradigm is obviously not appropriate for our purpose. In programming a robot to perform the task described previously, we have no prior information on the locations of the objects, and do not know when the gripper should grasp or release objects. The action sequence for achieving the task depends on the sensory information (here the signals come from the touch sensor and the simulated range sensor). However, with the current installation Excalibur can only perform some simple tasks (like the one described previously). We have no range sensor to measure the distance to an object, no proximity sensor to detect the possibility of the presence of an object, no vision sensor to recognize the shape of an object, and no force and torque sensor to measure the external force acting on the end-effector. The robot is blind and numb. We are planning to install more sensors, such as a range sensor, a proximity sensor, a force or torque sensor, or a vision sensor, on Excalibur in the future so that it can perform more interesting tasks. In programming a robot with multiple sensors we can follow the same methodology as the one discussed in Chapter 3, which will be further illustrated through the case study presented in the remaining sections of this chapter and in the next chapter.
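The touch-sensor guided search described above can be written as a small state machine whose next action depends only on the current state and the sensor signals. The Python sketch below is an illustrative assumption: the state names, sensor encodings and action strings are hypothetical; only the overall behavior (search, grasp on touch, lower, release on the range signal, turn about 15 degrees at the limit) follows the text.

```python
# Sketch of the touch-sensor guided search as a small state machine.
# The behavior follows the text; state names, sensor encodings and
# action strings are illustrative assumptions.

def search_step(state, touch, range_signal, at_limit):
    """One control step: (state, sensor readings) -> (state, action)."""
    if state == "search":
        if touch:                     # gripper closed on an object
            return "carry", "hold"
        if at_limit:                  # no tree within reach
            return "turn", "move-back"
        return "search", "forward-open-close"
    if state == "turn":
        return "search", "turn-15-degrees"
    if state == "carry":
        if range_signal:              # simulated range sensor: near ground
            return "search", "open-gripper"
        return "carry", "move-down"
    raise ValueError(state)

state, actions = "search", []
for touch, rng, lim in [(False, False, False), (True, False, False),
                        (False, False, False), (False, True, False)]:
    state, action = search_step(state, touch, rng, lim)
    actions.append(action)
```

The action sequence is driven entirely by the sensor signals, not by pre-stored object positions, which is the point of contrast with the traditional programming style.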
As an example, if we have a range sensor and a proximity sensor, then in order to carry out the same task, the robot can use the range sensor to detect the position of the tree and thus guide the motion of the arm (forward and turn); at the same time, the robot can use the proximity sensor to detect the existence of the object and guide the operation of the gripper. When the gripper grasps an object, the robot can use the range sensor to detect the distance from the ground and guide the release operation (gripper opening). Further, if we add a vision sensor or an array of analog touch sensors to detect the orientation of the trunk of a tree, we can design the robot to perform tasks such as cutting off the branches of trees. This is accomplished by the robot arm moving along the trunk of the tree and performing the cutting operation. A force and torque sensor is needed if the load (a log) may be heavy enough to change the dynamics of the arm; in this case force control becomes necessary.

4.4 Decomposition of Transputer-based Robot Systems

In this section we present the design of our robot control system, which is to run on a transputer-based multiprocessor computing environment. A transputer-based multicomputer is a general purpose parallel and distributed computing environment which has become popular in recent years. A transputer is a VLSI building block for concurrent processing systems, comprising a processor, on-chip memory and four serial communication links. Two transputers can be connected by connecting a link of one transputer to a link of the other. The flexibility of reconfiguring the transputer network makes it possible to support various kinds of parallel and distributed applications.
For a real-time application, we must consider not only the logical decomposition of the system, but also the mapping of the logical decomposition onto the physical transputer network, so that the overall computation and communication can be optimized.

4.4.1 The structural decomposition

We decompose our robot system into a hierarchy of behavioral modules, according to the principles of structural decomposition discussed in Chapter 3 (see Figure 4.2).

Figure 4.2: Structural Decomposition of Our Robot System

SERVOs, at the lowest level in the hierarchy shown in Figure 4.2, are responsible for sending signals to and getting signals from Excalibur. ROBOT, at the highest level, is designed to perform particular tasks. ARM, at the intermediate level, coordinates the six joints. Each joint module, from SWING to TOOL, calculates its joint movement. This decomposition is based on the fact that the joints in Excalibur are functionally decomposable: each joint plays a particular role in the whole movement of the arm, as described in the previous section.

This structural decomposition is highly flexible for further extension and modification. We can install Excalibur on a mobile BASE without changing any modules except ROBOT. Or we can use a dextrous hand instead of the gripper without modifying any modules of the arm and the joints.

The mapping of the structural decomposition onto the physical network of transputers, however, depends on the interface between the robot devices and the transputers. We could make the mapping the final step in the design process of a robot control system (by using multiplexing, etc.), but for efficiency we need to take this problem into account at the decomposition stage. There is no general approach to mapping modules onto transputers.
However, there are some basic heuristics:

• It is better to map modules which communicate through shared memory onto one transputer;

• It is better to map modules which share communication channels onto transputers which have direct hardware links;

• It is better to distribute computations with loose dependence over several transputers;

• Modules on one transputer should not have more than four links connecting to modules distributed on other transputers, because each transputer has only four hardware links.

One of the easiest ways of connecting the robot to the transputer system is to use one link of a transputer to communicate with the robot controller through an RS232 board (see Figure 4.3).

Figure 4.3: Current Configuration of Transputer and Controller

Based on this configuration, we designed the physical connection of the transputers and the mapping of the modules onto the transputers (see Figure 4.4). In Figure 4.4, we have four transputers connected as a ring, on each of which there is one integrated module corresponding to one or several logical modules in Figure 4.2. The transputer with SERVO on it has a link directly connected to (the computer of) the robot controller. SERVO is responsible for issuing commands to the robot controller and receiving the current joint angles and the touch state from it. JOINT, consisting of the six joint modules from SWING to TOOL, is mainly for computing the joint movements according to the desired position from ARM and the current joint angles received from SERVO. ARM receives commands from ROBOT, determines the current goal position or orientation and sends them to JOINT. ROBOT, at the highest level, is the module for performing particular tasks. All the inverse kinematics is done in JOINT, as will be described in the module design stage.
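The last of the mapping heuristics above is the easiest to check mechanically. The sketch below (modern C++ with names of our own choosing, not the thesis code) tests whether an assignment of modules to transputers respects the four-link limit, conservatively counting one hardware link per inter-transputer channel; in practice several channels can be multiplexed over one link.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Feasibility check for a module-to-transputer mapping: no transputer
// may need more than four external links. A channel between modules on
// the same transputer costs nothing (it becomes shared-memory
// communication); each inter-transputer channel is conservatively
// counted as one hardware link.
bool mapping_feasible(
        const std::map<std::string, int>& placement,
        const std::vector<std::pair<std::string, std::string>>& channels) {
    std::map<int, int> external_links;   // transputer -> links used
    for (const auto& ch : channels) {
        int a = placement.at(ch.first);
        int b = placement.at(ch.second);
        if (a != b) { ++external_links[a]; ++external_links[b]; }
    }
    for (const auto& e : external_links)
        if (e.second > 4) return false;
    return true;
}
```

For the ring of four transputers used here, each transputer needs only two external links, so the configuration passes comfortably.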
Figure 4.4: Transputer-based Robot System

4.4.2 The interface of each module

The interface of a module consists of its port descriptions and communication protocols. Only modules with consistent protocols can be connected together. In the rest of this section, we describe the interface of each module in Figure 4.4, and the connections between those modules.

ROBOT module

ROBOT has four ports (see Figure 4.5), whose descriptions are as follows:

• port1 is an output port, for sending out commands such as "move forward", "turn left", etc., which can be handled by the ARM module;

• port2 is an input port, for getting the current position state of the arm's end-effector;

• port3 is an output port, for sending out the desired gripper operations such as "open" and "close";

• port4 is an input port, for getting the current state of the touch sensor.

ROBOT has two task-independent processes, permanent processes which are created once ROBOT is initialized. These processes are considered as information processors of the environment, which continuously send out the state or the change of the environment. Such processes are critical for real-time reactive mechanisms. In the current design, we have one permanent process for getting the current end-effector position state from port2, and another for getting the touch state from port4.

The commands sent out through port1 or port3 are determined by the particular tasks. Currently, we have designed two sequential tasks for this robot: "find and grasp an object" and "release the object". During the process of "find and grasp an object", ROBOT sends out the commands "hand forward", "arm forward", "gripper open" and "gripper close" constantly.
If the gripper does not touch anything by the time the arm reaches its limit position, ROBOT sends out the commands "arm backward" and "arm turn", then repeats the "find and grasp an object" process; otherwise, this process stops, and the process for "release the object" takes control. During the "release the object" phase, ROBOT sends out the commands "hand down" and "arm down", and finally sends the command "gripper open" when a signal comes from the terminal (indicating the gripper is touching the ground).

Figure 4.5: Interface of ROBOT Module

ARM module

ARM has four ports (see Figure 4.6), whose descriptions are as follows:

• port1 is an input port, for getting commands such as "arm forward", "arm turn", "hand down", etc., which can be processed by the ARM module;

• port2 is an output port, for sending out the arm's current end-effector position state;

• port3 is an output port, for sending out the desired end-effector position or orientation;

• port4 is an input port, for getting the current end-effector position and orientation.

ARM gets commands from port1, calculates the desired end-effector goal position or orientation, and sends the results out through port3. At the same time, ARM continuously reads the current end-effector position and orientation from port4 and sends out the current end-effector position state through port2.

Figure 4.6: Interface of ARM Module

JOINT module

JOINT has four ports (see Figure 4.7), whose descriptions are as follows:

• port1 is an input port, for getting the desired end-effector position or orientation;

• port2 is an output port, for sending out the current end-effector position and orientation;

• port3 is an output port, for sending out the desired movements of each joint;

• port4 is an input port, for getting the current joint angles.
JOINT constantly reads the current joint angles from port4, calculates the arm's current end-effector position and orientation, and sends the result out through port2. At the same time, JOINT gets the desired end-effector goal position or orientation from port1, calculates each joint's desired movement, and sends the result out through port3.

Figure 4.7: Interface of JOINT Module

SERVO module

SERVO has four ports (see Figure 4.8), whose descriptions are as follows:

• port1 is an input port, for getting the desired movements of each joint;

• port2 is an output port, for sending out the current joint angles;

• port3 is an input port, for getting the commands for gripper operations;

• port4 is an output port, for sending out the current state of the touch sensor.

SERVO has a process for serializing the commands to the controller and getting the results back from the controller. Besides this, SERVO has several processes for organizing the commands for motion, the commands for getting the current joint angles, the commands for operating the gripper, as well as the commands for getting the current touch state.

Figure 4.8: Interface of SERVO Module

4.4.3 Constructing the System from Behavioral Modules

The connection between different modules is in fact straightforward (see Figure 4.9): we just connect the output ports of one module to the input ports of another module, provided the port descriptions of both are consistent.

Figure 4.9: Connection between Different Modules

The resulting system is totally distributed. There is no central control over these four transputers. Each of the transputers starts working regardless of the functioning of the others.
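The rule that only ports with consistent descriptions may be wired together can be made concrete as a small compatibility check. The `Port` and `connectable` names below are illustrative sketches of ours, not part of the thesis code.

```cpp
#include <cassert>
#include <string>

// A port description: a direction plus the message type it carries.
// Two ports can be wired together only if one is an output and the
// other an input, and both carry the same message type (the
// "consistent protocol" requirement). Illustrative names only.
enum class Dir { In, Out };

struct Port {
    Dir dir;
    std::string msg_type;
};

bool connectable(const Port& a, const Port& b) {
    return a.dir != b.dir && a.msg_type == b.msg_type;
}
```

Under this view, building the system of Figure 4.9 amounts to pairing output ports with input ports that pass this check.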
If a behavioral module (on a transputer) stops working, the rest of the system will still continue functioning. The principle behind this is that any individual module affects the whole system only locally.

There is a hierarchical coordination between different parts of the system. At the highest level ROBOT coordinates the operations carried out by the lower level modules HAND and ARM through sensors (whenever the gripper touches something, the arm stops moving forward; instead, the hand starts turning downward, then the arm starts moving down toward the ground); besides, the gripper is coordinated with the vision sensor or the range sensor (currently a signal from the terminal) to fulfill the task of releasing objects.

ARM would be the coordinator of the six joints if we put these joints onto different transputers. We put them onto one transputer because they communicate with one another through shared memory. We did not put ARM and JOINT onto one transputer, because the computations of these two modules are relatively independent; thus we can exploit parallelism with little communication overhead.

The operation of the gripper in the current system is very simple. In order to accomplish the gripper operation ROBOT can issue two commands, "open" and "close", to the SERVO module, which passes them on to the robot controller.

There are two disadvantages of the current configuration:

• the communication with the robot controller becomes a bottleneck, since all the commands have to be serialized by the computer within the controller, and executed one by one;

• the controller provides only one kind of point-to-point movement, which causes the arm to stop between segments of the movement if we decompose the whole motion into a sequence of "moveby" commands.
4.5 An Alternative Configuration

The disadvantages of our current system mentioned in the previous section are due to the fact that all the commands issued from a transputer to the controller are serialized by an interfacing computer within the controller (see Figure 4.3). This connection guarantees that the robot carries out the previous command before accepting the next one. However, we want to get rid of this property for two reasons. First, we want the gripper to open and close continuously while the arm is moving. Second, we prefer that the arm move continuously.

To solve this problem, we can use the control cable of the master arm, which was originally designed for sending out the analog signals of the master arm's joint positions during manual control. We can design a D/A converter to convert the digital position signals from the transputer into analog signals, and an A/D converter to convert the analog signals from the cable into digital signals. With the help of these two converters, the transputer can directly communicate with the physical control part of the arm (see Figure 4.10, where the box in the middle is the A/D and D/A converters; the cable from that box is plugged into the manual slot of the controller). In this configuration, the interfacing transputer has two permanent processes: one constantly sends out the desired joint angles and gripper operations at some rate to the D/A converter, and the other constantly gets the current joint angles and touch state from the A/D converter.
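The first of these permanent processes has latest-value semantics: on every tick it ships the most recent desired joint angles to the D/A converter, whether or not a new setpoint has arrived since the last tick. The sketch below simulates that behavior deterministically with a tick counter in place of real time; `Setpoint` and `run_ticks` are names of ours, not from the thesis.

```cpp
#include <array>
#include <cassert>
#include <utility>
#include <vector>

// Latest-value semantics of the permanent output process: on every tick
// it ships the most recent desired joint angles to the D/A converter,
// whether or not a new setpoint arrived since the last tick.
struct Setpoint {
    std::array<double, 6> joints;        // desired angle per joint
};

std::vector<Setpoint> run_ticks(
        const std::vector<std::pair<int, Setpoint>>& updates,  // (tick, new goal)
        int n_ticks) {
    std::vector<Setpoint> sent;          // what the D/A converter receives
    Setpoint current{};                  // start at all-zero angles
    std::size_t u = 0;
    for (int t = 0; t < n_ticks; ++t) {
        while (u < updates.size() && updates[u].first <= t)
            current = updates[u++].second;   // absorb any new setpoints
        sent.push_back(current);             // ship the latest value
    }
    return sent;
}
```

This decoupling is what lets the arm keep moving smoothly even when higher-level modules update the goal irregularly.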
The advantages of this configuration are that we can get rid of the digital part of the controller, which forces the commands to be executed sequentially, and that we can design our own distributed position control algorithm to achieve better performance of motion.

Figure 4.10: Alternative Configuration of Transputer and Controller

Considering the efficiency of the line transmission between the arm and the transputer network, we can distribute the joint modules onto two transputers, since the amount of computation in a joint module increases when we bypass the controller to design our own position control. The behavioral modules and their connection for this configuration are shown in Figure 4.11.

Figure 4.11: Alternative Configuration of Behavioral Modules

Chapter 5 Programming a Real-Time Multi-Sensory Robot System in Parallel C++

Programs, of one sort or another, underlie everything that happens.
— Geoff Simons, "Is Man A Robot?", John Wiley & Sons, page 133

In this chapter, we first present Parallel C++, a parallel distributed object-oriented language implemented on transputers, then illustrate how to use the language to program a real-time, multi-sensory robot system. Section 1 gives a brief introduction to Parallel C++; Section 2 discusses some useful constructs of Parallel C++ for programming distributed real-time systems; Section 3 gives the design and implementation of the modules of our robot control system discussed in the previous chapter; Section 4 discusses the current performance and approaches for further improvement.

5.1 Brief Introduction to Parallel C++

C++ [Str86] is an upward compatible superset of C that provides object-oriented facilities.
Parallel C [Moc88] from Logical Systems is an extension of C which supports parallel and distributed processing on transputer networks. We have implemented Parallel C++ [ZQ89], a combination of Parallel C and C++, on the transputer network. Parallel C++ not only supports both object-oriented methodology and parallel distributed programming techniques, but also introduces new concepts for modeling and constructing complex systems. It is not only an expressive and effective language, but also a practical one for applications in a parallel and distributed environment [ZQ89]. In this section we give a brief overview of the main features of C++ and Parallel C, then introduce the programming language Parallel C++.

In addition to the facilities of C, C++ provides a flexible and efficient mechanism for defining new types — classes. A class is a type whose objects are composed of two parts: private components and public components. An object is an instance of some class. The public components of an object are the only interface through which the object is manipulated. The class mechanism encourages data hiding and enhances modularity. The form of a class specification is:

class name {
    private components
public:
    public components
}

where the private components can be data items and functions that are local to the objects of the class, and the public components can be data items, various constructors for initializing an instance (object) of the class, and destructors for disposing of an object of that class. In addition, there may be a number of functions or operators defined on the data items.

Parallel C from Logical Systems provides facilities to support the concurrency features of the transputer [Moc88]. These facilities are implemented as a set of library functions that allow the user to specify the definition, communication and synchronization of concurrent processes.
Parallel C provides three new data types: Process, Channel, and Semaphore, as well as library functions for allocating processes (ProcAlloc), for executing parallel processes (ProcRun, ProcPar, etc.), for process communication through channels (ChanIn, ChanOut, etc.) and for process synchronization through semaphores. The syntax for a process definition is the same as that for a function, except that the first parameter of a process function must be specified as a dummy variable of Process type (see the following examples). A process is created by invoking the function ProcAlloc. Communication among processes through channels is performed by invoking functions like ChanIn or ChanOut.

The implementation of Parallel C++ is quite simple. We compile the source files of the C++ library to transputer assembly code, which can be linked to user programs as a part of the concurrency library of Parallel C. We accomplish this by translating the C++ code into C code via the C++ preprocessor and compiling the resultant C code with the Parallel C compiler. We have changed some include files of Parallel C to make them compatible with the C++ preprocessor. A user's program written in Parallel C++ is first preprocessed by the C++ preprocessor and then compiled by the Parallel C compiler. Finally the compiled code is linked with both the Parallel C library and the C++ library (which has already been compiled into code compatible with Parallel C).

This integration is clear and powerful. There are no syntactic conflicts or semantic confusions. It is an upward compatible superset of both C++ and Parallel C. The following are some examples of the use of this combination:

• Class variables can be used as the parameters of a (process) function.

• Class variables, as well as the member functions, operators and constructors of classes, can be used in the body of a process.
• Any function supplied by Parallel C, such as those for channel and process allocation, process creation and communication, can be called in the definitions of the member functions, constructors, and operators of a class.

• The data types supported by Parallel C, such as Process and Channel, can be used in the same way as any other data types of C++ in a class specification.

Note however that processes cannot be defined as member functions of a class. This restriction results from an inconsistency between the current version of the C++ preprocessor and the Parallel C compiler.1

1Parallel C requires that the first parameter of a process definition be a dummy variable of Process type, but the C++ preprocessor translates the member functions of a class into functions with a special variable this as the first parameter. This inconsistency results in the awkward way of defining active objects that we will see later. However, a slight change to the Parallel C compiler or the C++ preprocessor would get rid of this problem, so the problem is not considered a syntax conflict.

5.2 Constructs in Parallel C++

Two kinds of objects can be defined in Parallel C++: passive objects and active objects. The former model data structures; the latter model devices.

A passive object is composed of private data and public operations defined on the data. It is called a passive object because it is initialized as a chunk of memory without control threads. The interface of a passive object is its public functions or operators, whose implementational details are transparent to the user of the object. As an example of a passive object, the rotation matrix in robot arm kinematics can be represented as a class, frame (see Figure 5.1). Here class frame is composed of three vectors representing each axis of the frame.
There are also two ways of initializing instances of that class: we can initialize an instance with three vectors, or simply declare an object of the frame class without initializing any data. Besides, we define one operator for multiplying a frame by another, which corresponds to the transformation of one frame into another. We can incrementally add more functions of these kinds to that class. Class frame is easy to use (see Figure 5.2): the use is the same as the use of any other data type in the program. Here we just need to declare the frame objects and to apply the defined functions or operations on them.

An active object, also called a concurrent object or a parallel object, consists of private data items including channels. It is called an active object because it is initialized as a set of processes which actively communicate with each other, or with other objects, through channels or via shared memory. The interface or external behavior of an active object is defined by its communication convention, i.e., its port description, communication protocol, etc. An active object can be considered as a module with a set of connections (see Figure 5.3). Any two active objects can be connected together as long as they have a consistent communication convention.

As an example, consider the simple producer and consumer problem. The definitions of the two communicating processes are shown in Figure 5.4. Process producer continuously produces data, and process consumer continuously consumes data.
/********************************************/
class frame {
    vector3 x;
    vector3 y;
    vector3 z;          /* matrix is composed of three vectors */
public:
    frame(vector3, vector3, vector3);
    frame();
    friend frame operator*(frame, frame);
};

frame::frame(vector3 v1, vector3 v2, vector3 v3)
{
}

frame::frame()
{
}

frame operator*(frame A, frame B)
{
    frame C;
    /* define the matrix multiplication */
    return(C);
}
/********************************************/

Figure 5.1: The Head Declaration of Class frame

/********************************************/
frame A, B, C;
C = A*B;
/********************************************/

Figure 5.2: The Use of Class frame

Figure 5.3: Active Object

The two processes communicate through a channel. These two processes can also be defined as two classes of active objects, Proc and Cons, as shown in Figure 5.5, corresponding to the processes producer and consumer respectively. From the programs (see Figures 5.4 and 5.5) we can see that the latter is easier and clearer to use than the former.

Active objects can be used to model any logical or physical devices which are independent but communicate with other devices through designed ports. A system consisting of multiple devices can be modeled as a program with a set of active objects modeling the devices; the communications between active objects correspond to the communications between the devices. Active objects are used here as the major constructs for programming a robot system with multiple sensors and manipulators. The language augmented with these constructs is powerful enough to represent complex and distributed systems. In fact, it is straightforward to map Brooks' AFSM (Augmented Finite State Machine) [Bro89] into our language, which will be
In fact, it is straightforward to map Brooks' AFSM [Bro89], Augmented Finite State Machine, into our language, which will be A Behavioral Approach for Open Robot Systems /*****The definition of processes************/ producer(Process *p, Channel *ch) •C int x; for(;;H x = proc(); /* produce data x */ ChanOutlnt(ch.x); /* send out the data through the channel */ } } consumer(Process *p, Channel *ch) •C int x; for (;;) { x = Chanlnlnt(ch); /* get data from the channel cons(x); /* use data x } } / * * * * * * * * * T h 6 use of processes***************/ initO •C Process pl , p 2 ; Channel ch; ch = ChanAllocO; /* initializing the soft channel */ pi = ProcAlloc(producer, stack, 1, ch); p2 = ProcAlloc(consumer, stack, 1, ch); /* create the two processes */ ProcRun(pi); ProcRun(p2); /* running these processes in parallel */ } /************************^^ Figure 5.4: Communicated Processes Producer and Consumer */ */ A Behavioral Approach for Open Robot Systems I************###c]_ags Proc*******************/ class Proc { Channel *ch; public: Proc(Channel * ) ; } Proc::Proc(Channel *ch) { Process *p; p = ProcAlloc(producer, stack, 1, ch); ProcRun(p); } / * * * : ( c * * * * * * * ! ( c*ciass Cons*********************/ class Cons { Channel *ch; public: Cons(Channel * ) ; } Cons::Cons(Channel *ch) •C Process *p; p = ProcAlloc(consumer, stack, 1, ch); ProcRun(p); } /**********mijse of Proc and Cons************/ i n i t ( ) { Channel *ch; ch = ChanAllocO; Proc(ch);Cons(ch); } / ^ ^ l l t i l l ^ ^ i l c i l l l l c i l c i l l * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * / Figure 5.5: Active Objects Producer and Consumer A Behavioral Approach for Open Robot Systems 60 described Appendix B . 5.3 Module Design Module Design is the stage of implementing the desired behavioral specification of each module. Module design involves the details of the logical or physical devices related to individual module. A module can consist of a set of sub-modules. 
An atomic module is an active object with a set of cooperative processes, or simply one process (as in the producer and consumer case). For real-time multi-sensory robot systems, there are two kinds of processes: permanent processes and task-related processes. Permanent processes deal with task-independent sensory processing and updating, which are very important for the reactive mechanisms. Task-related processes are usually created to perform a particular task. The coordination between different processes makes it possible to achieve several goals in parallel.

The basic procedure for module design is to decompose the behavioral module into a set of active objects (or simply processes), and to design the coordination between these objects (processes). A behavioral chart (see Figure 5.6) helps in building this communicating process structure. The coding is straightforward — simply implement each box in the behavioral chart as an active object or a process; the wires between them are communication channels.

Figure 5.6: A Behavioral Chart

In the rest of this section, we describe the module design for each of the modules in Figure 4.4.

5.3.1 SERVO Module

The SERVO module has five ports, one of them connected to the robot controller through RS232 (see Figure 5.7). The SERVO module performs the low-level coordination of the different commands through the robot controller. There are two kinds of commands: one for organizing actions, such as "moveby", "open", "close", etc., and the other for getting sensory inputs, such as "manip", "inp", etc. Alternatively, we can classify the commands into two categories: one concerning the arm and the other concerning the gripper.

The SERVO module is decomposed into three communicating processes (see Figure 5.7). Process servo_process serializes all the commands through the hard link, using a set of multiplexing channels.
Process gripper_process coordinates the commands between the gripper operations and the touch sensor. Process arm_process coordinates the commands: it gets the current joint movements from port1 and issues them to servo_process, and gets the current joint angles from servo_process and sends them out through port2. The principle for setting up the coordination is to guarantee that an action is always based on the current sensory information. All three processes are permanent processes, as they are created once the SERVO module is created. The code of this module follows straightforwardly from this chart and is omitted here.

Figure 5.7: SERVO Module

5.3.2 JOINT Module

The behavioral chart of the JOINT module is shown in Figure 5.8. JOINT has six active objects as its sub-modules: SWING, SHOULDER, ELBOW, UPPER, WRIST and TOOL, modeling the six physical joints respectively; two shared data memories for the current and the desired position and orientation; and three parallel processes whose functions are as follows:

• cur_pos is a process repeatedly reading the current joint angles from port4, calculating the current position and orientation, storing them in a shared data memory, and sending the result out through port2;

• cur_mov is a process repeatedly receiving the joint movements from the six active objects and sending them out through port3;

• cur_goal is a process repeatedly receiving the desired goal position or orientation from port1, and updating them in a shared data memory.
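One cycle of this coordination (cur_pos signals the joints that the shared position is fresh, each joint computes its increment, and cur_mov collects the six results in joint order) can be sketched with modern C++ threads; `std::promise`/`std::future` stand in for the transputer channels, and the per-joint computation is stubbed out. This is an illustrative analogy of ours, not the thesis code.

```cpp
#include <cassert>
#include <future>
#include <thread>
#include <vector>

// One JOINT cycle: cur_pos signals the joints that the shared current
// position is fresh; each joint computes its incremental movement; and
// cur_mov collects the results in joint order. `increments` stubs out
// each joint's cal_Dif computation.
std::vector<double> joint_cycle(const std::vector<double>& increments) {
    std::vector<std::promise<double>> out(increments.size());
    std::vector<std::thread> joints;
    for (std::size_t i = 0; i < increments.size(); ++i)
        joints.emplace_back([&out, &increments, i] {
            // ...a real joint would call cal_F() and cal_Dif() here...
            out[i].set_value(increments[i]);   // send the movement out
        });
    std::vector<double> collected;             // cur_mov's role
    for (auto& p : out)
        collected.push_back(p.get_future().get());
    for (auto& t : joints) t.join();
    return collected;
}
```

As in the real module, the collector blocks on each joint's channel in turn, so the six movements always arrive in a fixed order even though the joints run concurrently.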
Figure 5.8: JOINT Module

Each joint has two ports: an input port for synchronization signals, and an output port for sending out the incremental amount of the joint movement. Each joint repeatedly performs the following tasks: it receives the synchronization signal from cur_pos (which means that the current position has been updated), calculates the desired movement according to the current and the desired position and orientation in the shared memory, and sends the result out.

The JOINT module involves two ways of process communication — message passing through channels and communication via shared memory. Besides the explicit message passing shown in Figure 5.8, the current position and goal position are shared by the six joints and two processes: cur_pos and cur_goal.

Each of the active objects for the joints has its private data storing the current orientation frame of the joint, and two functions for updating the current orientation frame and for calculating the desired joint movement respectively. Each joint consists of two cooperative processes for the updating and the calculating respectively. For example, class SHOULDER is defined as:

class SHOULDER {
    frame F;
public:
    SHOULDER(Channel *, Channel *);
    void cal_Dif(double *);
    void cal_F();
    void ini_F();
};

where F is a kinematic frame of this joint, representing the current orientation of the joint; cal_F and ini_F are the functions for calculating and initializing the current frame respectively2; cal_Dif is the function for computing the incremental amount of the joint movement based

2The reason we need to initialize the joint frame is that the JOINT module begins functioning independently.
We should guarantee that the frame is set before the first set of joint angles arrives, so that the parallel processes will not get into trouble with undefined data.

on the current goal, the current position and the current frame. We use iterative inverse kinematics, Functional Joint Control (FJC) [Poo88], rather than closed-form inverse kinematics for this calculation because:

• by using FJC we can exploit the potential parallelism: in this approach each joint calculates its joint angle independently, and by mapping the joint modules onto several transputers we can speed up the computation (for example, see Figure 4.11);

• FJC encourages modularity, for we can model each joint independently;

• many arms have no closed-form inverse kinematics solution, or have infinitely many (for a redundant arm), while iterative inverse kinematics still works for these arms.

The algorithm for calculating the movement of the joint angles is similar to Poon's [Poo88], except that we limit every step of the motion to less than a certain amount (for example, fifteen degrees). We did this for two purposes:

• making the motion more uniform: in Poon's algorithm the motion at the beginning is fast and slows down with time for small adjustments, whereas we prefer a continuous, uniform motion;

• making better use of sensory information during the movement: because each step is small and the sensory information can affect each step, a change in the environment can be recognized within a small step of motion.

We note that we could still use Poon's algorithm for each small segment along the trajectory. However, in that case the higher level modules would have to set the goal position and orientation too frequently, which would in fact slow down the system. Our algorithm for the calculation
is described in Appendix C.

We have no fine trajectory for this arm, and no path planner or explicit calls to inverse kinematics. What we have is an infinite loop, similar to the servo loop, for end-effector position and orientation control. The joints simply adjust themselves constantly to achieve the current goal.

The definitions of the two cooperative processes of SHOULDER, shoulder_process and process_cal_F_Sh, are shown in Figure 5.9. After receiving a synchronization signal, process_cal_F_Sh updates its current frame (by calling cal_F) and then sends the ready signal out. After receiving the ready signal, shoulder_process calculates its desired movement (by calling cal_Dif, which uses the algorithm described in Appendix C) and then sends it out. The initialization of the SHOULDER module is done by creating and executing these two processes. The SHOULDER module and the corresponding code are shown in Figure 5.10 and Figure 5.11 respectively.

Other joints are designed and implemented similarly. The initialization of the JOINT module is done by initializing the six joints and three processes. The corresponding Parallel C++ code is shown in Figure 5.12.

JOINT is also a task-independent module. Even though setting the current goal is task-related, the incremental movement of each joint (possibly 0 if the goal has been achieved) is continuously calculated and sent out through the respective port once the module is initialized.

The resultant behavior of these six joints is as follows (refer to Figure 5.8). Each time after receiving the current joint angles, process cur_pos calculates the current position and orientation of the end-effector using forward kinematics, updates the shared memory, sends the result to port2 and sends a synchronization signal to each joint module. This signal triggers each joint to calculate its current frame, then its incremental movement.
Process cur_mov

    shoulder_process(Process *p, SHOULDER *sh, Channel *in, Channel *out)
    {
        double an;
        for (;;) {
            ChanInInt(in);              /* waiting for ready signal */
            sh->cal_Dif(&an);           /* calculating the desired movement */
            ChanOutDouble(out, an);
        }
    }

    process_cal_F_Sh(Process *p, SHOULDER *sh, Channel *in, Channel *out)
    {
        for (;;) {
            ChanInInt(in);              /* waiting for signal */
            sh->cal_F();                /* calculating the current frame */
            ChanOutInt(out, 1);         /* sending out ready signal */
        }
    }

Figure 5.9: The Parallel C++ Codes for the Two Processes of SHOULDER

collects these movements from the six joint modules and sends them to port3 to drive each joint, so that the end-effector moves one step closer to the desired goal.

The setting of the goal position is asynchronous with the movement calculation. However, when the desired goal is reset by ARM, the movement is automatically retargeted to the new goal. At any time, each joint calculates its movement according to the current end-effector position and orientation, as well as the currently desired end-effector position and orientation.
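The synchronization pattern described above (sync signal, frame update, ready signal, movement out) can be sketched in modern C++. Here std::thread and a hypothetical one-slot Channel class stand in for the transputer process and channel primitives, and the frame update and movement calculation are stubs; none of the names below come from the thesis code.

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <thread>

// One-slot synchronous channel: a stand-in for the transputer
// ChanInInt/ChanOutInt primitives, not the original API.
template <typename T>
struct Channel {
    std::mutex m;
    std::condition_variable cv;
    std::optional<T> slot;
    void send(T v) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !slot.has_value(); });
        slot = v;
        cv.notify_all();
    }
    T recv() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return slot.has_value(); });
        T v = *slot;
        slot.reset();
        cv.notify_all();
        return v;
    }
};

// Run `steps` rounds of the sync -> cal_F -> ready -> cal_Dif -> out
// cycle and return the number of movements collected.
int run_shoulder_demo(int steps) {
    Channel<int> sync, ready;
    Channel<double> out;
    double frame = 0.0;  // stands in for the joint's kinematic frame F

    std::thread updater([&] {          // plays process_cal_F_Sh
        for (int i = 0; i < steps; i++) {
            sync.recv();               // wait for "position updated" signal
            frame += 1.0;              // cal_F(): update current frame (stub)
            ready.send(1);             // announce that the frame is ready
        }
    });
    std::thread mover([&] {            // plays shoulder_process
        for (int i = 0; i < steps; i++) {
            ready.recv();              // wait for the ready signal
            out.send(frame * 0.5);     // cal_Dif(): desired movement (stub)
        }
    });

    int produced = 0;
    for (int i = 0; i < steps; i++) {
        sync.send(1);                  // cur_pos: broadcast the sync signal
        out.recv();                    // cur_mov: collect the movement
        produced++;
    }
    updater.join();
    mover.join();
    return produced;
}
```

The channel handshake guarantees that the movement calculation never reads a frame that the update step is still writing, which is the point of the ready signal in the thesis design.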
Figure 5.10: SHOULDER Module

    SHOULDER::SHOULDER(Channel *in, Channel *sh_rot)
    {
        Process *p1, *p2;
        Channel *chan;

        this->ini_F();
        chan = ChanAlloc();
        p1 = ProcAlloc(shoulder_process, 1000, 3, this, chan, sh_rot);
        p2 = ProcAlloc(process_cal_F_Sh, 1000, 3, this, in, chan);
        ProcRun(p1);
        ProcRun(p2);
    }

Figure 5.11: The Parallel C++ Codes for Module SHOULDER

    JOINT::JOINT(Channel *port1, Channel *port2, Channel *port3, Channel *port4)
    {
        Channel *in[6], *out[6];
        Process *p1, *p2, *p3;
        int i;

        for (i = 0; i < 6; i++) {
            in[i] = ChanAlloc();
            out[i] = ChanAlloc();
        }   /* initializing internal channels */
        p1 = ProcAlloc(cur_pos, stack_size, 3, in, port2, port4);
        p2 = ProcAlloc(cur_mov, stack_size, 2, out, port3);
        p3 = ProcAlloc(cur_goal, stack_size, 1, port1);
        SWING(in[0], out[0]);
        SHOULDER(in[1], out[1]);
        ELBOW(in[2], out[2]);
        UPPER(in[3], out[3]);
        WRISTFLEX(in[4], out[4]);
        TOOLS(in[5], out[5]);
        ProcRun(p1);
        ProcRun(p2);
        ProcRun(p3);
    }

Figure 5.12: The Parallel C++ Codes for Module JOINT

5.3.3 ARM Module

ARM is an intermediate layer whose main function is to interpret commands from the ROBOT module and to send the desired goal position or orientation of the end-effector to the JOINT module. There are two kinds of commands from ROBOT: one related to position and the other related to orientation. All these commands describe continuous motion (e.g. the move_forward command does not specify an absolute position; its semantics is to cause the arm to move forward), which can be stopped by the stop command.

Position Commands

Position commands include arm_forward, arm_backward, arm_up, arm_down and arm_turn (see Figure 5.13).
Figure 5.13: Position Commands

• arm_forward (arm_backward): the arm continuously moves forward (backward) without changing the orientation of the hand. ARM responds to this command by setting the desired position to the forward (backward) position on the current plane of the shoulder and elbow:

    desired position = (x_new, y_new, z_new),
    x_new = (x / √(x² + y²)) · len_forward (len_backward),
    y_new = (y / √(x² + y²)) · len_forward (len_backward),
    z_new = z,

where x and y are the current position coordinates.

• arm_up (arm_down): the arm continuously moves up (down) without changing the orientation of the hand. ARM responds to this command by setting the desired position to the highest (lowest) position on the current plane of the shoulder and elbow.

• arm_turn: the swing continuously turns through an angle α. ARM responds to this command by setting both the desired position and orientation according to the forward kinematics³.

³ Readers may ask: why did we not simply send a command that tells the swing to move through an angle, instead of doing so much computation here? The answer is: we have to set the goal to the new position and orientation; otherwise the stupid JOINT module will try to move back to achieve its old "goal", so that the arm can't go anywhere.

Orientation Commands

Orientation commands include hand_forward, hand_up, hand_down, hand_right, hand_left and hand_turn. The first five commands are related to the approach vector of the hand, a; the last command is related to the normal vector of the hand, n (see Figure 5.14).

• hand_forward: the hand continuously turns to the forward position. ARM responds to this command by setting a to (x, y, 0), where x and y are the current position coordinates.

• hand_up (hand_down): the hand continuously turns to the up (down) position. ARM responds to this command by setting a to (0, 0, 1) ((0, 0, −1)).
• hand_left (hand_right): the hand continuously turns left (right). This command is carried out by setting a to (−y, x, 0) ((y, −x, 0)), where x and y are the current position coordinates.

• hand_turn: the tool continuously turns through an angle α. ARM responds to this command by setting n to the new goal (n'x, n'y, n'z):

    n'x = nx cos(α) + ox sin(α)
    n'y = ny cos(α) + oy sin(α)
    n'z = nz cos(α) + oz sin(α)

where o = a × n is the orientation vector of the hand frame.

Figure 5.14: Orientation Commands

Stop Command

All the commands for the arm describe continuous movement, as described above. However, we can stop the current movement by setting a new goal before the current goal is fully achieved. The stop command is just one example of this idea. ARM responds to this command by setting the goal to the current position and orientation of the end-effector. It is obvious that the arm stops moving, since the "goal" has been "achieved".

ARM is implemented as an active object composed of two permanent processes: command and cur_pos. Process command constantly gets commands from ROBOT, calculates the desired goal, and sends the goal to JOINT. Process cur_pos continuously gets the current position and orientation from the JOINT module and updates the old values in the current data memory. These two processes run in parallel and communicate through shared memory: cur_pos updates the current position and orientation, which command uses.

5.3.4 ROBOT Module

In the design of the ROBOT module we face the problem of how to encode the task into a set of cooperating processes. We described the task and ports of the ROBOT module in Section 4 of the last chapter.

ROBOT is implemented as an active object consisting of two permanent processes, cur_touch and cur_position, which constantly get the current touch state and the current position state respectively.
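The idea that stopping is just goal-resetting combines naturally with the clamped step of the joint loop: once the goal equals the current position, every computed step is zero. A minimal single-joint sketch (the names step_toward, stop_goal and the 15-degree clamp value are illustrative, not taken from the thesis code):

```cpp
#include <algorithm>
#include <cmath>

// One servo-rate step of the "always chase the current goal" scheme:
// the step toward the goal is clamped to max_step (fifteen degrees in
// the thesis' joint loop), so a joint only ever inches toward the goal.
double step_toward(double current, double goal, double max_step) {
    double diff = goal - current;
    return std::copysign(std::min(std::fabs(diff), max_step), diff);
}

// The stop command: reset the goal to the current position, after which
// every subsequent step computes to zero and the arm stands still.
double stop_goal(double current) { return current; }
```

Because the joint recomputes the step on every cycle, resetting the goal mid-motion retargets the arm immediately, which is exactly the retargeting behavior described for the JOINT module.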
Besides these, there are some task-related procedures that run in parallel with, and communicate with, the permanent processes to get information about the environment. Currently we have two procedures, findObject and releaseObject, for detecting objects and releasing objects respectively.

Figure 5.15: Behavioral Chart for findObject

The findObject task can be decomposed into a set of sequential processes (see Figure 5.15, where bold arrows represent the control flow): hand_forward, search_process, backward_process and turning_process. At the beginning the hand points forward, which sets the right direction for the search. search_process is composed of two parallel actions: the arm moves forward while the gripper opens and closes constantly. This process stops when the arm has arrived at the forward position, or when the gripper touches an object as it closes. If the gripper touches an object, the arm stops moving and the gripper stops opening, and the findObject process finishes; otherwise the arm moves back, turns through a small angle, and begins to search along the new direction.

The releaseObject task (shown in Figure 5.16, where the bold arrows represent the control flow) begins with the hand pointing downward. Afterwards two processes run in parallel: the arm moves down while checking whether there is a signal from the terminal indicating that it is the right time to release the object. When such a signal is received, the gripper opens and the object is released.
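The findObject cycle just described (search forward, on touch stop, otherwise back off and turn) can be sketched as plain sequential code; the touch predicate and the numeric limits below are hypothetical stand-ins for the gripper's touch sensor and the joint limits:

```cpp
#include <functional>

// Search for an object by sweeping directions: along each direction the
// arm moves forward step by step with the gripper opening and closing;
// on a touch the search stops, otherwise the arm backs off and turns.
// `touched(dir, reach)` stands in for the gripper's touch sensor.
int find_object(std::function<bool(int, int)> touched,
                int turn_step, int max_dir) {
    for (int dir = 0; dir < max_dir; dir += turn_step) {
        for (int reach = 0; reach < 10; reach++) {  // arm_forward, step by step
            // gripper opens and closes at every step (implicit here)
            if (touched(dir, reach))   // gripper closed on something: stop
                return dir;            // object found along this direction
        }
        // nothing found: arm moves back and turns turn_step degrees
    }
    return -1;  // swept the whole range without touching anything
}
```

The nested-loop shape mirrors the behavioral chart: the outer loop is turning_process plus backward_process, the inner loop is search_process with its two parallel actions collapsed into one step.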
Figure 5.16: Behavioral Chart for releaseObject

5.4 Current Performance and Further Improvement

After we download each module to the transputer network connected as shown in Figure 4.4, the modules start working independently and communicating with each other through the channels. The initial position of each joint is set to zero; then the hand starts turning forward, and the arm moves forward with the gripper opening and closing at intervals (see Figure 5.17). If there is no object along this line, the arm comes back, turns through a small angle, and repeats the task (see Figure 5.18). If the gripper touches something, the arm stops moving forward immediately and the hand starts turning downward (see Figure 5.19). Afterwards the arm moves down until a signal of touching the ground comes from the terminal, which immediately triggers the gripper to open (see Figure 5.20).

From the experiment we see that the sensory feedback works extremely well. The system is quite successful in its reaction to the environment. We believe that we can obtain this reactivity for robots with more sensors, as long as we distribute the computation appropriately and arrange the communications along relatively short paths.

The problem with the current system is that the motion is not smooth. Since we use FJC to get the iterative movement of each joint, the controller cannot make the motion continuous (without stops) between two segments of the movement. One way to solve the problem is to rewrite the controller. However, this still would not solve the problem of the sequential execution of commands (the operation of the gripper and the motion of the joints cannot be executed in parallel). We will take the second approach: using the master arm's control cable to send continuous signals to the manipulator, as discussed in the previous chapter.
Under this new configuration we hope to get better performance.

Even though our system is designed for multiple sensors, we currently have only joint angle sensors and a one-switch touch sensor. We are planning to install more sensors so that the robot can perform more interesting and realistic work. Tasks like "cut off branches" will be designed for robots with suitable proximity sensors or more complex touch sensors.

Figure 5.17: Phase 1: Robot Begins to Search

Figure 5.18: Phase 2: No Object Found Along that Direction

Figure 5.19: Phase 3: Robot Begins to Search in Another Direction

Figure 5.20: Phase 4: Robot Grasps a Tree and Puts it Down

Chapter 6

Conclusions and Further Research

Only experiments with real Creatures in real worlds can answer the natural doubts about our approach. Time will tell.
— Rodney Brooks, "Intelligence Without Representation"

6.1 Conclusions

It has been shown to be possible to control a robot system by means of a parallel distributed object oriented language, and to implement that control in a transputer-based distributed computing environment.

We proposed a structural decomposition, which decomposes a system into a structural hierarchy according to the physical or logical components of the system, in addition to a behavioral hierarchy (as described by Brooks). The advantages of two-dimensional, rather than one-dimensional, system design and development are:

• Similar to the idea of structured programming, with structural decomposition one can design a system with a description of behavior from coarse to fine. Such a decomposition is extremely important from a system design and development point of view, especially when the structure of the system becomes complicated.

• With structural decomposition, one can represent different layers of coordination. A system with this design can be more robust: any module affects the system only locally. Even if higher level coordination stops working, lower level coordination can keep functioning.

We implemented this two-dimensional structure using a parallel distributed object oriented language, Parallel C++. It has been shown that an active object (parallel object) can be used for modeling various kinds of logical and physical devices. There is no distinction between software and hardware design at this point. The function of an active object in a complex system is like that of a VLSI chip in a large electronic circuit.

We further implemented a prototype of such a system in a transputer-based computing environment. It has been shown that such an environment is suitable for the development of real-time, multi-sensory and multi-manipulator robot systems.

We used iterative inverse kinematics in the form of Functional Joint Control.
The behavior of the arm has these properties:

• The motion of the arm is not described as point-to-point motion but rather as a continuous movement which can be stopped by sensory information at any time.

• There is no fine trajectory planner. The arm is set to always achieve the current goal as far as it can. For example, arm_forward does not necessarily move along a straight line, but moves as far as it can and can be stopped by a sensory input at any time (for example, on seeing an obstacle). Functional Joint Control can describe such behaviors directly and clearly.

the coordination between the task decomposition and the built-in reaction to unpredicted situations can be represented and formalized. This formalization is intended for the exploration of the underlying mechanisms of behaviors in various "living" systems. It provides a theoretical framework for synthetic behaviors.

The languages most closely related to the framework of Concurrent Constraint Logic are FCP and STRAND. FCP, Flat Concurrent Prolog, developed by Shapiro [Sha87], is an expressive language for parallel processing. However, up to now there has been no implementation in a transputer-based environment. STRAND is a parallel logic programming language with new concepts in parallel programming [FT89], supported on various parallel environments including transputers and hypercube machines. We would like to use STRAND as the basic language for implementing and testing our theoretical framework.

There is still a long way to go to achieve our goal. However, the dream of yesterday is the hope of today, and the reality of tomorrow.
Appendix A

FCP++: An Object Oriented Flat Concurrent Prolog

A.1 Brief Introduction to FCP++

FCP, Flat Concurrent Prolog, is a logic programming language which replaces the backtracking search mechanism of Prolog with communicating concurrent processes [KM88]. It can represent incomplete messages, unification, direct broadcasting and concurrency synchronization, with declarative semantics given by a sound theorem prover [Sha87]. It is a good candidate language for Open Systems [KM88].

FCP++, a concurrent logic object oriented language based on FCP, was designed and implemented [Zha88]. FCP++ inherits all the capabilities of the concurrent logic programming techniques supported by FCP and provides a set of new and powerful facilities for building object-oriented systems. The facilities include:

1) class definition
2) function for making an instance
3) function for halting an instance
4) method definition
5) message passing by commands
6) message passing by data through channels

Programs in FCP++ can be translated into FCP by a preprocessor we implemented in Quintus Prolog.
Following is the syntax of FCP++. Italics indicate a part of a template which users are to fill in. Items in upper case italics are to be filled in with variables, items in lower case italics are to be filled in with constants, and identifiers starting with a capital letter followed by some lower case letters can be filled in with any kind of term.

class definition
    class(name, [var1, var2, ..., varn])

function for making an instance
    className::make([Value1, Value2, ..., Valuen], INSTANCE<:>state(Id))

function for halting an instance
    className::halt([Value1, Value2, ..., Valuen])
    className::halt([Value1, Value2, ..., Valuen]) -> Body

method definition
    className::MessagePattern
    className::MessagePattern -> Body

message passing through an instance
    INSTANCE::MessagePattern

message passing through a communication channel
    var:>Message
    var<:Message

getting the object identifier
    INSTANCE<:>ID

A.2 FCP++ Program for Figure 3.4

Figure A.1 is the skeleton of the program for Figure 3.4. We can see that in this program SENSOR0 continuously outputs the current conditions, PLANNER continuously generates partial plans, and EXECUTOR continuously executes these plans, causing the current conditions to approach the goal conditions. All these processes run in parallel. We have not written down the clauses for see_cond, planning and execute, which depend on the details of the system.

A.3 FCP++ Program for Figure 3.6

The program in Figure A.2 is an implementation of the process structure shown in Figure 3.6. avoidObstacles is one of the permanent processes which are created whenever the object robot is created; moveto is the task designed for this robot. avoidObstacles and moveto run in parallel. If there is no task running, this robot avoids any objects. If task
If task A Behavioral Approach for Open Robot Systems achieve([Goal|NextGoal]) :-sensorO(Goal, Conditions), planner(Conditions, Goal, Plans), executor(Plans), achieve(NextGoal). sensorO(Goal, [done]):- see_cond(Goal, done). '/, current condition i s the goal i t s e l f sensorO(Goal, [Condi I Next]) :-see_cond(Goal, Condi), sensorO(Next). planner([done], Goal, [ ] ) . '/, i f the goal i s achieved, no plan i s generated planner([CondiNextCond], Goal, [PlanlNextPlan]) :-planning(Cond, Goal, Plan), planner(NextCond, Goal, NextPlan). '/, generate a p a r t i a l plan according to the current condition 7. and the Goal condition executor([]) . '/, i f there i s no plan, nothing i s executed. executor([Plan|Next]) :-execute(Plan), executor(Next). 7, execute the p a r t i a l plan. 7. during the execution, other sensory information 7. can be also used. Figure A.1: FCP++ Program for Figure 3.4 A Behavioral Approach for Open Robot Systems class(robot, [obstacleSignals, ] ) . '/, robot i s a class with one of the internal states as channel for '/, signals of detecting the current obstacles rob o t : : i n i t :-avoidObstacles(obstacleSignals). 7. the b u i l t - i n mechanism which makes the robot always '/, avoids the obstacles and sends out the signal about '/, the obstacle robot::moveto(Object) :-see(Object, Directions), move(Directions, obstacleSignals). 7, one of the task for this robot i s to move to the 7. described object. Certainly i t also should 7, notice the obstacle signals when f u l f i l l i n g this task see(Object, []) :- sensorSee(Object, done). 7, i f already moved to the object, no direction sent out see(Object, [Direl INextDires]) :-sensorSee(Object, D i r e l ) , see(Object, NextDires). 7. otherwise one direction sent out move([], _ ) . move([Dire|NextDires], [ObsINextObs]):-moveby(Dire, Obs), move(NextDires, NextObs). moveby(Dire, Obs) :- move_avoid(Dire, Obs). 7. make one step movement, of course shouldn't 7. 
        % move to the obstacle

Figure A.2: FCP++ Program for Figure 3.6

moveto is running, the robot moves to the specified object, and avoids the others.

Appendix B

Parallel C++ for AFSM

Brooks [Bro89] described semantics for the AFSM, the Augmented Finite State Machine. However, his representation only generates a simulated parallel program: the user has to explicitly write down the stop points of each process (using event-dispatch). We now show that we can write a clearer and truly parallel program for AFSM using Parallel C++.

The definition of a network of AFSMs consists of two parts: the definitions of the AFSMs and the definitions of the wires which connect different AFSMs into a system. An AFSM mainly consists of a finite state machine and a set of input registers, output channels and internal variables. The machine can be reset by an outside signal at any time. It is obvious that an active object fits an AFSM perfectly. We define the head file of an AFSM as in Figure B.1. The finite state machine corresponds to the switch statement in C (see Figure B.2). There are two permanent processes, one for resetting the machine and one for storing input signals into registers (see Figure B.3). Finally, the initialization of an AFSM is accomplished by initializing an active object of class AFSM (see Figure B.4).

    class AFSM {
        int State;          /* the state of the finite state machine */
        int R[N];           /* registers */
        Channel *out[M];    /* output channels */
    public:
        AFSM(Channel *[N], Channel *[M], Channel *);
        void resetState() { State = -1; }
        void initState() { State = 0; }
        void setState(int NewState) {
            if (State == -1) State = 0;
            else State = NewState;
        }
        void setR(int i, int val) { R[i] = val; }
        void machine();
    };

Figure B.1: The Head File for AFSM

    void AFSM::machine()
    {
        State = 0;
        for (;;) {
            switch (State) {
            case 0:
                ...
                /* any expressions, including ChanOutInt */
                /* any variables in the private part can be used */
                setState(NewState);
                break;
            case 1:
                ...
                setState(NewState);
                break;
            ...
            }
        }
    }

Figure B.2: Finite State Machine in C

    int reset_process(Process *p, AFSM *A, Channel *reset)
    {
        for (;;) {
            ChanInInt(reset);
            A->resetState();
        }
    }

    int input_process(Process *p, AFSM *A, Channel *input[N])
    {
        int index, value;
        for (;;) {
            index = ProcAltList(input);
            value = ChanInInt(input[index]);
            A->setR(index, value);
        }
    }

Figure B.3: Processes for Resetting the Machine and Setting Registers

    AFSM::AFSM(Channel *input[N], Channel *output[M], Channel *reset)
    {
        Process *p1, *p2;
        int i;

        for (i = 0; i < M; i++) {
            out[i] = ChanAlloc();
            out[i] = output[i];
        }   /* initialize out channels */
        p1 = ProcAlloc(reset_process, stack_size, 2, this, reset);
        p2 = ProcAlloc(input_process, stack_size, 2, this, input);
        ProcRun(p1);
        ProcRun(p2);
        this->machine();
    }

Figure B.4: Initialization of AFSM

    suppress(Process *p, Channel *in1, Channel *in2, Channel *out)
    {
        int index, value;
        for (;;) {
            index = ProcAlt(in1, in2, 0);
            if (index == 0) {
                value = ChanInInt(in1);
                ChanOutInt(out, value);
                ChanInIntTime(in2, time);
            } else {
                value = ChanInInt(in2);
                ChanOutInt(out, value);
            }
        }
    }

Figure B.5: Suppress Function in Parallel C++

The AFSM consists of two permanent processes and a finite state machine, which is also permanent. Similarly, we can have process definitions corresponding to the definitions of the wires. There are three kinds of wire processes: suppress, inhibit and multiplex. The parameters of the suppress process are two input channels and one output channel; one input channel can suppress the other for a period of time (see Figure B.5). The parameters of inhibit are the same as those of suppress.
The parameters of the multiplex process are one input channel and a set of output channels. The input from the input channel is sent out to all the output channels (see Figure B.6). The function of inhibit is similar to that of suppress, except that no output comes out during the suppressing period. Each of the wire processes is a permanent process. The whole system is constructed by initializing all the channels, active objects (AFSMs) and processes (the wire connections) (see Figure B.7). Each box is an AFSM; each circle is a connection of wires. S stands for suppress; M stands for multiplex.

    multiplex(Process *p, Channel *in, Channel *out[N])
    {
        int i, val;
        for (;;) {
            val = ChanInInt(in);
            for (i = 0; i < N; i++)
                ChanOutInt(out[i], val);
        }
    }

Figure B.6: Multiplex Function in Parallel C++

Figure B.7: AFSM System in Parallel C++

Appendix C

Functional Joint Control

Functional Joint Control, proposed by Poon [Poo88], is very powerful for the large class of manipulators which are functionally decomposable. In this thesis we use the idea of FJC for the kinematics loop of the joint movement. We make the steps of the movement almost equal throughout the motion, instead of using the adaptive gain factors of Poon's algorithm. The reason for this change was stated in Chapter 5.

C.1 Projection Technique for Joints Controlling Orientation

Suppose the function of joint i is to control the orientation of a vector v. The target orientation is vd and the current orientation is v (see Figure C.1). For joint i, the current kinematic frame is Ti, with axes Xi, Yi, Zi. vd and v can be projected onto the X-Y plane of Ti. Let the projections be vpd and vp:

    vp_xi = v · Xi
    vp_yi = v · Yi

Figure C.1: FJC Projection Technique

All that joint i can do to help align v with vd is to align vp with vpd.
The angle between vp and vpd, Δθi, is given by

    Δθi = arctan(vpd_yi / vpd_xi) − arctan(vp_yi / vp_xi),

where vp_xi stands for the x-component of vp in the frame of joint i, and so on. We set a step of movement of at most fifteen degrees, so that

    if Δθi > 15, αi = 15; otherwise αi = Δθi,

where αi is the actual step movement. In Excalibur, swing and shoulder control the orientation of p; upper and wrist control the orientation of a; tool controls the orientation of n.

C.2 FJC for Joints Controlling Length

The function of some joints is to alter a certain length L (see Figure C.2).

Figure C.2: FJC for Joints Controlling Length

    L² = L(i−1)² + Li² − 2 L(i−1) Li cos(θi), and
    Ld² = L(i−1)² + Li² − 2 L(i−1) Li cos(θdi),

where Li is the length of link i, L is the current length, Ld is the desired length, and θi is the angle between links i and i−1.

    Δθi = θdi − θi

Similarly, if Δθi > 15, αi = 15; otherwise αi = Δθi, where αi is the actual step movement. In Excalibur, elbow is the joint for controlling the length.

Bibliography

[BC86] Rodney A. Brooks and Jonathan H. Connell. Asynchronous distributed control system for a mobile robot. SPIE Mobile Robots, 727, 1986.

[BCN88] Rodney A. Brooks, Jonathan H. Connell, and Peter Ning. Herbert: A second generation mobile robot. Technical report, MIT AI Lab, January 1988. A.I. Memo 1016.

[Bra84] Valentino Braitenberg. Vehicles: Experiments in Synthetic Psychology. The MIT Press, 1984.

[Bro86] Rodney A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2(1), March 1986.

[Bro87a] Rodney A. Brooks. A hardware retargetable distributed layered architecture for mobile robot control. In Proceedings IEEE Robotics and Automation, 1987.

[Bro87b] Rodney A. Brooks. Micro-brains for micro-brawn; autonomous microbots. IEEE Conference on Mobile Robots, 1987.

[Bro87c] Rodney A. Brooks. Planning is just a way of avoiding figuring out what to do next, September 1987.
Working Paper 303.

[Bro88a] Rodney A. Brooks. Intelligence without representation, 1988.

[Bro88b] Rodney A. Brooks. A robot that walks; emergent behaviors from a carefully evolved network, September 1988.

[Bro89] Rodney Brooks. Software development system: 6811 assembler and subsumption compiler. Technical report, MIT AI Lab, 1989.

[Con88] Jonathan H. Connell. A behavior-based arm controller, 1988. Draft.

[FGL87] K. S. Fu, R. C. Gonzalez, and C. S. G. Lee. Robotics: Control, Sensing, Vision, and Intelligence. McGraw-Hill Book Company, 1987.

[FT89] Ian Foster and Stephen Taylor. Strand: New Concepts in Parallel Programming. Prentice Hall, 1989.

[HH88] Vincent Hayward and Samad Hayati. Kali: An environment for the programming and control of cooperative manipulators. Technical report, McGill University, 1988. TR-CIM-88-8.

[Hub88] B. A. Huberman. The ecology of computation. In B. A. Huberman, editor, The Ecology of Computation. Elsevier Science Publishers B.V. (North-Holland), 1988.

[Kae87] Leslie P. Kaelbling. An architecture for intelligent reactive systems. In Reasoning about Actions and Plans: Proceedings of the 1986 Workshop. Morgan Kaufmann, Los Altos, California, 1987.

[Kae88] Leslie Pack Kaelbling. Goals as parallel program specifications. In AAAI 1988, 1988.

[Kaf88] Dennis Kafura. Concurrent object-oriented real-time systems research. Technical Report TR 88-47, Virginia Polytechnic Institute and State University, 1988.

[KM88] Kenneth M. Kahn and Mark S. Miller. Language design and open systems. In B. A. Huberman, editor, The Ecology of Computation. Elsevier Science Publishers B.V. (North-Holland), 1988.

[MML89] I. Jane Mulligan, Alan K. Mackworth, and Peter D. Lawrence. A model-based vision system for manipulator position sensing. Technical Report 89-13, Computer Science Department, UBC, 1989.

[Moc88] Jeffrey Mock. Process, channels and semaphores (version 2), 1988.
Manual for Parallel C from Logical System. [NSH88] Sundar Narasimhan, David M. Siegel, and John M. Hollerbach. A standard archi-tecture for controlling robots. Technical Report A.L Memo No. 977, MIT AI Lab, 1988. [PCW87] David Lorge Parnas, Paul C. Clements, and David M. Weiss. The modular struc-ture of complex systems. In Gerald E. Peterson, editor, Object-Oriented Computing. Computer Society Press, 1987. [P0088] Joseph Kin-Shing Poon. Kinematic control of robots with multiprocessors. Technical report, E.E. Department UBC, 1988. Ph.D. thesis. [RK87] Stanley J. Rosenschein and Leslie Pack Kaelbling. The synthesis of digital machines with provable epistemic properties. Technical report, SRI International, April 1987. Technical Note 412. [RK89] Stanley J. Rosenschein and Leslie Pack Kaelbling. Integrating planning and reactive control, 1989. A Behavioral Approach for Open Robot Systems 103 [Ros85] Stanley J. Rosenschein. Formal theories of knowledge in ai and robotics. New Gen-eration Computing, 3, 1985. [Ros89] Stanley J. Rosenschein. Synthesizing information-tacking automata from environ-ment description. In 1st International Conference on Knowledge Representation and Reasoning, May 1989. [RSI88] RSI. Excalibur : User's manual, April 1988. [Sal88] Lou Salkind. The sage operating system. Technical Report Technical Report No. 401, Courant Institute of Mathematical Sciences, September 1988. [Sar89] Vijay Anand Saraswat. Concurrent constraint programming languages. Technical report, Computer Science Department, Carnegie-Mellon University, 1989. Ph. D. thesis. [Sha87] Ehud Shapiro, editor. Concurrent Prolog. MIT press, 1987. [SM87] T. Smither and C. Malcolm. A behavioral approach to robot task planning and off-line programming. Technical report, DAI University of Edinburgh, 1987. DAI Reasearch Paper No. 306. [Str86] Bjarne Stroustrup. The C++ Programming Language. Addison-Wesley Publishing Company, 1986. [Zha88] Ying Zhang. 
Fcp+H a concurrent logic object oriented language based on fcp, 1988. Project Report. [ZQ89] Ying Zhang and Runping Qi. Programming in parallel C++, September 1989. 
