Open Collections

UBC Theses and Dissertations
Situated observation and participation in multiple-agent systems Montgomery, Jefferson D. 2003

Full Text

Situated Observation and Participation in Multiple-Agent Systems

by

Jefferson D. Montgomery

B.A.Sc., University of British Columbia, 2000

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia

July 2003

© Jefferson D. Montgomery, 2003

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver, Canada

September 30, 2003

Abstract

A situated agent is not only embedded within its environmental system, but forms an integral and active component of the system as a whole. Accordingly, there are numerous requirements that a situated agent must meet, including synchronization with and responsiveness to the system dynamics as well as appropriate and proactive operation — a non-trivial challenge made difficult primarily by varying environment states and dynamics, difficult tasks with ambitious requirements, and noisy or ambiguous sensory and background information. This thesis addresses these issues by introducing an agent architecture with an interruptible and modular design capable of producing quality-varying solutions based on constraints. The architecture adaptively schedules deliberation processes, which enables the agent to evolve its action with the natural frequency of the environment's dynamics.
Resource-bounded deliberation is attained by attributing intentions to the agent, posed in the form of constraints, that influence its action. The thesis demonstrates how these constraints can be designed to achieve complex behaviours. Further, it shows that abstraction of behaviours based on a theory of intentionality provides a comfortable and tractable model not only for behaviour generation but also for behaviour recognition. The theory of intentionality is transcribed into a dynamic, probabilistic model that describes multiple, intentional agents and their environment. It is shown how the properties of intentionality, so transcribed, produce a computationally attractive model that allows real-time approximate inference. Ultimately, a novel agent is introduced for the particular multiple-agent domain of robot soccer. The agent is implemented and tested to study the reactivity of the architecture, accuracy of the environmental modelling, and effectiveness of its deliberation.

Contents

Abstract
Contents
List of Figures
List of Tables

1 Introduction
1.1 Reading guide

2 Background
2.1 Agent architectures
2.1.1 Constraint Nets
2.1.2 The intentional stance
2.2 Probabilistic modelling
2.2.1 Inference
2.2.2 Parameter estimation
2.3 Internal representations: modelling the world
2.3.1 Markov state traces
2.3.2 Imperfect perception
2.3.3 Self-localization
2.3.4 Identifying objects
2.3.5 Modelling the behaviours of other agents
2.4 The RoboCup simulation league
2.4.1 Object dynamics
2.4.2 Stimulation
2.4.3 Internal timing
2.4.4 Action failure
2.4.5 Interacting with the SoccerServer
2.5 Summary

3 A Constraint-based Agent Architecture
3.1 Adaptive resource scheduling
3.2 Interpreting observations
3.3 Deliberation
3.4 Implementation

4 Modelling Environment and Agent Behaviours
4.1 Agent beliefs
4.2 Agent intentions
4.2.1 Intent formation
4.2.2 Intent abandonment
4.3 Agent desires
4.4 Agent action
4.5 Inference
4.5.1 Forward inference
4.5.2 Backward inference
4.5.3 Approximate inference
4.6 Parameter estimation
4.7 How to use the model

5 Application to the RoboCup Simulation League
5.1 Adaptive resource scheduling
5.1.1 Estimating the internal cycle offset
5.1.2 Estimating the optimal time to command action
5.1.3 Confirming commanded action
5.2 Interpreting observations
5.2.1 Initial world states
5.2.2 World state transitions
5.2.3 World state perception
5.3 Deliberation
5.3.1 The Scan constraint
5.3.2 The LookAround constraint
5.3.3 The LookAt constraint
5.3.4 The BeforeKickoff constraint
5.3.5 The GoTo constraint
5.3.6 The InterceptBall constraint
5.3.7 The PlayerPosition constraint
5.3.8 The GoaliePosition constraint
5.3.9 The DefensivePosition constraint
5.3.10 The MiddlePosition constraint
5.3.11 The OffensivePosition constraint
5.4 The RoboCup Simulation League agent

6 Experimental Results for the RSL Agent
6.1 Adaptive resource scheduling
6.2 Interpreting observations
6.2.1 Static self-localization
6.2.2 Dynamic self-localization
6.2.3 Object tracking
6.2.4 Modelling multiple agents

7 Conclusions
7.1 Avenues of future work

Bibliography

Equation Derivations
Rearrangement of the equations of dynamics
Modelling the RSL visual sensor
Deriving a stochastic transition model for RSL objects
Computing optimal trajectories

Glossary of Variables
World state variables
World state observations
Modelling behaviour
Adaptive resource scheduling
SoccerServer parameters

List of Figures

2.1 Graphical representation of Constraint Net components
2.2 Graphical representation of a Constraint Net module
2.3 A Constraint Net model of a single-agent system
2.4 A Markov chain model of vector evolution
2.5 A hidden Markov model of dynamic perception
2.6 Landmarks in the RoboCup Simulation League domain
2.7 Localization using two landmarks
2.8 The "Two-landmark Uniform" Localization sensor
2.9 Modelling agent action
2.10 A mixture of hidden Markov models
2.11 A switched state-space model
2.12 The dynamic belief network representation of a hierarchical hidden Markov model
2.13 The SoccerServer Constraint Net module
2.14 Event timing within the SoccerServer module
2.15 Typical fluctuations in the true and observed SoccerServer cycle
2.16 Comparing the Internal and External synchronization algorithms
3.1 A Constraint Net module of the constraint-based architecture
4.1 A hierarchical, probabilistic model of multiple, intentional agents acting in the world
4.2 Commitment and management of individual intentions, showing context-specific independence
4.3 Depiction of the model's secondary structure
4.4 Independence relations among sets of modelled random variables
4.5 An approximate representation of a ring sector
4.6 An example of Bayesian filtering
5.1 A Constraint Nets model of the RoboCup Simulation League system
5.2 The Schedule Constraint Net module
5.3 Typical variation in proprioception times and a resulting estimate of the internal cycle offset
5.4 Adapting the estimate of the optimal time to act
5.5 A collection of landmarks perceived in a single time step
5.6 An example of using multiple observations to reduce perception ambiguity
5.7 Possible observation associations that can be made when object identification is ambiguous
5.8 An example decision process of a goalie agent
5.9 Trajectories produced with the GoTo constraint
5.10 Illustration of the desired position used by the GoaliePosition constraint
5.11 Illustration of the attractive forces applied to the agent's position by the DefensivePosition constraint
5.12 Illustration of the constraint on an agent's position provided by MiddlePosition
5.13 Illustration of the constraint on an agent's position provided by OffensivePosition
6.1 Distributions of correct, late, and failed actions observed during the SYNCHRONIZATION experiment
6.2 Localization errors obtained in the STATIC SELF-LOCALIZATION experiment
6.3 Expected localization errors caused by poor approximation of the true perception distribution
6.4 Localization errors obtained by the DYNAMIC SELF-LOCALIZATION experiment
6.5 Results of the OBJECT TRACKING experiment showing errors in believed object position and velocity
6.6 Convergence of the training and test set likelihoods while optimizing model parameters
6.7 The likelihood of randomly selected and optimized models
6.8 The time complexity of exact and approximate inference algorithms for the 2-ON-0 ATTACK experiment
6.9 The evolution of joint and individual intentions throughout an example sequence
6.10 Notable moments from a particular 2-ON-0 ATTACK example
A partially observable Markov decision process modelling a planning agent

List of Tables

2.1 SoccerServer rules for effecting action
5.1 Inferring the validity of the optimal command time estimate assuming known cycle offset
5.2 Inferring the status of pending action commands from the count of effected actions
5.3 Rules used for labelling observations
5.4 Description of agent actions
5.5 Summary of constraints
6.1 Comparison of errors obtained in the STATIC SELF-LOCALIZATION experiment
6.2 Localization errors obtained in the DYNAMIC SELF-LOCALIZATION experiment
6.3 Localization errors obtained in the OBJECT TRACKING experiment
6.4 The action confusion matrix produced by the hand-crafted model
6.5 The action confusion matrix produced by the best learned model

Chapter 1

Introduction

In a reception setting, a good waiter is one who always brings hors d'oeuvres to people who desire them, without bothering those distracted in conversation. Sometimes, however, distinguishing these states can be difficult (particularly as they are neither exclusive nor covering). Therefore, a significant constituent of a waiter's job is analyzing customers' behaviour, however subtle it may be. But waiters are not the only agents who benefit from an understanding of how their environment behaves. An understanding of behaviour can both explain and predict not only concrete interests such as the actions of fellow agents, but also characteristics like their preferences and commitments. For these reasons, behaviour analysis is an important precursor to any decision that relates to interaction with another entity. However, there are significant hindrances to performing such detailed analysis in time-critical or ambiguous scenarios because causal abduction of behaviour constitutes a difficult problem that requires significant resources.
This thesis considers situated observers: agents who perceive and act as non-trivial participants in their environment. A waiter is a situated observer — it is an agent situated in a reception environment with other agents, and it needs to both observe the other agents (interpreting their actions) and perform its own actions (to restock food, serve guests, and avoid other waiters, among other tasks). Developing such a detailed understanding of the scenario by interpreting the environment is a considerable task for a resource-bounded agent who must also devote sufficient resources to processes such as perception, reasoning, and action, which are fundamental to appropriate operation. The logistical difficulty leads to considering the value of deliberation and modelling: is it worth considering more options to make a better decision? At some point, devoting more resources to deliberation in order to better understand the situation may actually decrease the value of the resulting decision [RW39, BD94, DBS94, ZR96]. In sufficiently dynamic environments, the competing forces of decision value and the cost of inaction counteract; a phenomenon that a successful agent must actively consider. The first part of this thesis introduces such an agent, designed around a notion of adaptive reactivity where each agent process is interruptible by a meta-level reasoner that schedules resources. This reasoner implements a novel scheduling algorithm that interprets percepts using an action failure model and allows the agent to infer whether it is being sufficiently reactive for the current environment dynamics — an inference that directly influences the action scheduling decision. When an action is desired, all the other processes combine to produce the best action discovered so far. In this way, the satisfaction of the reactivity constraints upon the situated agent is ensured.
Yet, the design of useful, interruptible processes which constitute the agent's architecture and combine to produce intelligent behaviour is non-trivial. This thesis will introduce a constraint-based architecture, wherein each constituent is posed in terms of constraints which are solved concurrently and in an interruptible manner; and demonstrates that three main processes (scene interpretation, means-ends reasoning, and planning) can be implemented with sufficient collective performance in a demanding task. This architecture will be shown to provide a suitable framework for providing and managing resources to allow behaviour analysis of the external environment by a complex agent operating in a dynamic environment. However, performing behaviour analysis is itself non-trivial — analyzing behaviour can be so complex that a punctilious description of general behaviour is infeasible. Even restricted domains such as hockey and soccer exhibit incredibly rich interactions. One approximation commonly used by people is to project a complicated behaviour onto their own experiences and use their own behaviour as a description. For example, one may anthropomorphically say that a robot is lonely, bored, or hungry, or that it is collaborating with a comrade who has a common desire when actually quite different motivations are contributing to the observed behaviour [Bra84]. Researchers have proposed this method as a general behaviour description; even the actions of such devices as water valves and telephones can be described in terms of beliefs and intentions ascribed to them [McC79, Den87, CL90]. This approximation is believed to be more easily understood, leading to efficient design, modification, and maintenance of behaviour models; but at some level these benefits are overcome by the greater accuracy of descriptions based on more realistic causal models.
Ultimately, a projection to the observer's experiences cannot be as accurate as a correct understanding of the behaviour's cause. Using Occam's razor, these approximate descriptions based on intentional ideas can only be considered legitimate if they provide the most succinct — yet still useful — description of the agent in question. This thesis proposes that probabilistic models of multiple agents' behaviour based on a projection to assumed intentional mental attributes can accurately interpret the environment using limited resources. The underlying hypothesis is that these mental attributes provide a useful, succinct description of the behaviour. To test this hypothesis, a reactive situated agent will be developed that uses a probabilistic intentional model of a multiple-agent environment to interpret its observations. The model is based on a theory of joint and individual intentions that considers the actions, beliefs, desires, and intentions of groups of agents, and includes rigorous modelling of the Robot Soccer World Cup (RoboCup) Simulation League (RSL) domain. This agent's internal representations will be compared with the true state of the environment (known to the experimenter but hidden from the agent itself) as it engages in various tasks, allowing a characterization of the model's accuracy and of the agent's success in dynamic situations with varying resource requirements and uncertain percepts. However, the ultimate step (making full use of this internal representation within the observer's decision process) is left for future work. Therefore, some aspects of the hypothesis are only confirmed qualitatively. Future work will include fully incorporating the intentional environment model into the observer's decision process, and quantitatively analyzing its value based on the utility of resulting action.

1.1 Reading Guide

The thesis is organized into six more chapters. Chapter 2 provides a foundation for the remainder of the thesis.
Background information related to the organization of an agent's internal system is provided along with probabilistic methods of modelling an environment and their use as an agent's internal representation, and the example domain of robot soccer. The thesis continues with novel work. In Chapter 3, a constraint-based architecture is introduced conceptually. The architecture focuses on issues of deliberation, observation interpretation, and the management of resources so that appropriate and timely action can be achieved in demanding environments. Architecture specifics are then provided in the next two chapters.

In Chapter 4, a probabilistic model of a multiple-agent environment is developed and theoretically analyzed. The model describes the environment and a group of agents acting within it, and is to be used as the internal representation of an observer. The inherent complexity of the domain is handled by assuming that the observed agents are intentional entities, and that their behaviour therefore conforms to the restrictions of the theory of intentionality. However, the observer's own intentionality, and intentional interaction with the group, is not modelled. The architecture and model are applied in a specific soccer agent in Chapter 5. This example is suitable for exploring fundamental issues of agency within multiple-agent systems, and provides context for a more meticulous treatment of both the architecture and intentional representations. Ultimately, in Chapter 6, the soccer agent is used to experimentally verify the properties of both the architecture and the intentional model. Finally, Chapter 7 outlines conclusions that can be made from the theoretical and empirical analysis along with possible avenues of future inquiry.

Chapter 2

Background

This thesis concerns agents, environments, and their interaction. The exploration will be from the point of view of a particular, rational agent that exists in an environment with other agents.
A rational agent is any entity that acts in its environment and whose action is appropriate (i.e., contributes to achieving the agent's goals) given its knowledge and beliefs [PMG98, RN95]. This can be stated more formally using utility theory (a utility measure can be used to gauge the appropriateness of any action based on the desirability of particular situations compared to others), where an ideal agent is one that acts to maximize its expected utility measure based on its internal knowledge and its percept history [PMG98, RN95]. This constraint implicitly defines the agent's policy — a complete mapping from situations into actions — which can be used to describe the agent. However, complicated tasks in complicated environments necessitate complicated policies. For example, the complexity of the situation-action mapping implemented by the human brain is vast. Such policies cannot be described by simple functions or tables. To cope with such complexity, agents are modelled based on a particular structure or architecture, which constrains the mapping. For example, a policy may be described using a collection of processes characterizing perception, modelling, deliberation, and action. Section 2.1 outlines a number of issues related to the specification of an agent architecture, and introduces (in Section 2.1.2) the notion of the Intentional Stance, whereby complex behaviours are achieved by resource-bounded agents using mental attitudes such as belief and desire, and through the use of intentions. Because it forms a fundamental influencer of action, it is important that an agent correctly interpret its percept trace with respect to its internal knowledge. In general, processes characterized by uncertainty, like perception, require a representation that addresses that uncertainty.
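The expected-utility definition of an ideal agent above can be made concrete with a small sketch. This is illustrative only, not from the thesis: the waiter scenario, belief probabilities, and utility values are hypothetical.

```python
# Illustrative sketch: an ideal agent's policy as expected-utility
# maximization over a discrete set of actions. Beliefs are a probability
# distribution over world states; utilities score (state, action) pairs.

def expected_utility(action, belief, utility):
    """E[U | action] under the agent's belief over world states."""
    return sum(p * utility[(state, action)] for state, p in belief.items())

def policy(belief, actions, utility):
    """Map the current belief to the action maximizing expected utility."""
    return max(actions, key=lambda a: expected_utility(a, belief, utility))

# Hypothetical waiter example: serve a guest or wait, given a belief
# about whether the guest wants food or is absorbed in conversation.
belief = {"wants_food": 0.7, "in_conversation": 0.3}
utility = {
    ("wants_food", "serve"): 1.0, ("wants_food", "wait"): -0.5,
    ("in_conversation", "serve"): -1.0, ("in_conversation", "wait"): 0.2,
}
print(policy(belief, ["serve", "wait"], utility))  # → serve
```

Note that the policy is only as good as the belief it is computed from, which is why the interpretation of percepts discussed next matters to rational action.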
Probabilistic models, introduced in Section 2.2, provide a sound theoretical framework for representing uncertainty, and Section 2.3 details how an agent can use probabilistic models to determine the most probable situation. Although these aspects of agency will be discussed in general scope, domain and task specifics ultimately refine the agent's design and it is important — in fact, necessary in most cases — to capitalize on this knowledge. Throughout this thesis, reference is made to the RoboCup Simulation League (RSL) domain: a multiple-agent domain with a wealth of interaction complexity that requires deliberative, situated agents [KAK+97]. Various elements of the RSL domain are important to the development of a RSL agent, and are discussed in Section 2.4.

2.1 Agent Architectures

An agent architecture is a framework for designing and constructing agents, adherence to which will produce an agent with specific, desired properties. For example, ideal agents exhibit generality, rationality, reactivity, benevolence, social ability, autonomy, and versatility, among other characteristics [WJ95]. However, many of these properties are incompatible and a correct treatment is accordingly difficult. Initial attempts were based on the symbol-system hypothesis, and were characterized by an iterative sequence of sensing, modelling, planning, and acting, wherein the state of the environment is translated into discrete, logical descriptions in order to fit existing ideas about deliberation [Nil80]. This, however, has proved largely inadequate, particularly in environments that contain uncertain or unpredictable elements: researchers found a clean signal-symbol interface elusive. In a dramatic rethinking of the notion of agency, focus shifted toward situatedness, wherein the agent's connection with its environment is of primary concern [Mac93, Mor84, Bro91].
A host of architectures were developed based on this reactive or behaviour-based paradigm, each abandoning the generality and expected power of formal systems for increased reactivity [Bro86, Ark98, AC87, KR90]. These architectures rejected internal representations and formal reasoning, preferring a tight coupling of sensation and action through distributed, asynchronous processes. Initially, such principles proved surprisingly adept, but over time it became clear that their success did not scale up to highly complex systems [HP91]. A series of hybrid architectures have been developed that borrow from both reactive and deliberative architectures, trying to achieve a working compromise between reactivity and proactivity [GL87, Fer92, SM94]. The idea of asynchronous layers of operation is commonly adopted from the reactive paradigm, but the layers are typically allowed to be more general: lower layers tightly couple sensors to actuators with adaptable transfer functions and feedback control mechanisms that give the agent reactive abilities in its most critical situations; higher layers add increasingly abstract behaviours, reasoning, and representations. Most importantly, computational abstraction is embraced using fully general communication between layers. Yet, the protocol of these interactions often tends to be ad hoc and cumbersome, and global system characteristics can be difficult to analyze. However, the development of a hybrid architectural language called Constraint Nets provides a unifying framework that can be used to design, model, and verify both local and global aspects of these hybrid dynamical systems [Zha94, ZM95, ZM99]. See [WJ95, KBM98, ZCLC00] for a more complete survey of agent architectures; the third reference presents a survey of architectures used by RoboCup agents.

2.1.1 Constraint Nets

Most dynamical systems are based on constraints, i.e., the system is asymptotically stable at the solution set of particular constraints.
This motivates problem solving via a constraint-based approach, which involves specifying a collection of constraints — relations that specify a particular situation (e.g., a solution to the problem) — and allowing a constraint solver to asymptotically approach a solution through a dynamic process. This method is conceptually clear, and often engenders easily tasked systems. This attraction is a motivating factor behind the Constraint Nets model [ZM95]. Constraint Nets (CN) presents a unified theory with a rigorous, formal semantics of dynamic systems based on topological and abstract algebra structures. Such semantics capture the most general structure of dynamic systems, including the flexibility to define concepts in both discrete and continuous abstract domains with both synchronous and asynchronous event structures. Formally, a Constraint Net is a triple composed of a set of locations, a set of transductions, and a set of directed connections.

A location (represented graphically as a labelled circle as shown in Figure 2.1) stores an abstract value, which can be accessed or modified by a transduction through a connection. The value can be continuous or discrete and can represent a variable or an event.

A transduction (represented graphically as a labelled rectangle) represents a function from its input ports to a unitary output port (represented as the head and tail of directed arcs respectively). Transductions operate asynchronously in general, but can be controlled by events through special event ports (represented graphically by small circles incident upon a transduction). For example, the transduction δ(i; e1, e2, ...) delays the input trace, i, until one of the connected events, e1, e2, ..., occurs.

A connection (represented graphically as a directed arc) connects a location with a transduction port.
Figure 2.1: A Constraint Net is represented graphically as a bipartite graph with locations depicted as circles, transductions as rectangles, and connections as directed arcs. This particular CN represents the equations l_o = f(l_i, p_t) and p_t = δ(l_o; ev), which aggregate to l_o(n) = f(l_i(n), l_o(n - 1)), where l(n) represents the value of location l after n occurrences of event ev.

Much of the power of CN comes from the abstract treatment of algebraic structures. This allows locations and transductions to coexist in harmony, despite differences in domain or period, and provides a comfortable method of functional abstraction. A module in CN is the collection of a finite set of transductions, locations, and connections that collectively act as an abstract transduction. Graphically, as shown in Figure 2.2, modules are depicted as labelled, rounded rectangles. When depicting the internal structure of a module, the rectangle is drawn using dashed lines and port locations are drawn as double circles. Modules can be aggregated via union, coalescence, or hiding operations to provide interaction hierarchies that increase the expressive power and ease of the modelling task.

Figure 2.2: Graphical representation of a Constraint Net module depicting the internal structure of module Md (left) and the same module used in a CN (right). The feedback is considered implicit and internal to the Md module.

Experiments provide evidence that the constraint-based CN approach is a clean and practical design framework for perceptual agents [BKL+93, Zha98, ZM99, Mon02]. The entire agent system (the agent's controller, body, and the environment) can be modelled in CN [Zha94, PMG98], as in Figure 2.3 which models a single-agent system.

Figure 2.3: A Constraint Net model of a single-agent system, depicting the agent's controller and body along with its environment as separate modules.
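The event-clocked, delayed-feedback aggregation described for Figure 2.1, l_o(n) = f(l_i(n), l_o(n - 1)), can be sketched in a few lines. This is a hypothetical illustration: Constraint Nets are a formal model with much richer semantics, and the class below captures only one simple, discrete instance of a transduction with unit-delay feedback.

```python
# Minimal sketch of a transduction f with event-triggered unit-delay
# feedback, so that after n occurrences of the clocking event,
# l_o(n) = f(l_i(n), l_o(n - 1)).

class DelayedFeedbackModule:
    """A transduction whose output feeds back through a unit delay."""

    def __init__(self, f, initial_output):
        self.f = f
        self.l_o = initial_output  # location holding the previous output

    def on_event(self, l_i):
        # On each event occurrence: read the input location, apply the
        # transduction, and store the result through the feedback delay.
        self.l_o = self.f(l_i, self.l_o)
        return self.l_o

# Example: a running sum, f(i, prev) = i + prev, over an input trace.
m = DelayedFeedbackModule(lambda i, prev: i + prev, initial_output=0)
trace = [m.on_event(x) for x in [1, 2, 3, 4]]
print(trace)  # → [1, 3, 6, 10]
```

The feedback here is internal to the object, mirroring how Figure 2.2 treats feedback as implicit and internal to the module.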
2.1.2 The Intentional Stance

Many interesting agents are far too complex to be described by simple sets of formal rules. When physical explanations are impractical, more powerful abstractions are needed to explain behaviour. This idea has resulted in shifts in popular psychological paradigms, rejecting stimulus-response rules of behaviour for more abstract notions such as functionalism. Though not logically required (and perhaps not even valid), such abstractions are legitimate in the sense that they can provide succinct, explanatory models of an agent's behaviour, even when its policy is unknown [McC79]. The intentional stance is such an abstraction whereby behaviour is modelled through mental attitudes (such as believing, desiring, anticipating, etc.) which are attributed to the agent [Den87]. However, the attribution is more than just anthropomorphism: mental attitudes and their interactions are strictly defined, and the intentional stance abstraction also presupposes that agents will act in conformance with these defined influences. Although the attribution of attitudes provides a convenient and familiar framework for an observer to describe, predict, and specify behaviour, the intentional stance model fails if the observed agent's behaviour is generated by an inconsistent mechanism. There are, however, convincing reasons that a resource-bounded agent would conform to these rules. Agent beliefs, for example, are useful in modelling rational behaviour that may be otherwise counter-intuitive given the true state of the world. An agent's beliefs are propositions that capture its perspective on the world (which may be incorrect) and represent information immediately available to the agent [RG91]. Beliefs are distinguished from percepts through conscious control — seeing is not believing in this sense: beliefs and percepts can be inconsistent (and beliefs can be internally inconsistent as well).
For example, I can see water falling from the sky, and still believe that it is not raining. Also, although I believe that it is not raining, I can simultaneously believe that I will get wet outside from the rain. Also, because means-ends reasoning competes with reactivity in resource-bounded agents, these agents must constrain their deliberation in some way. Intentions perform this critical role in a general way by significantly reducing the domain of the agent's policy [Bra87, CL90, BIP98]. An intention is a commitment to bring about a particular proposition [Bra87, CL90]. The commitment is strong in that it persists until the agent believes it has succeeded, or never will (in which case the agent may abandon the intention). Further, this commitment encompasses the following implications upon further deliberation and means-ends reasoning:

1. Agents are bound to satisfy their intentions, i.e., any rational action must be performed in a direct attempt to achieve one of the agent's intentions, based on the current beliefs of the actor.

2. Candidate propositions cannot be adopted unless they are consistent with the current intentions (but agents need not intend the side effects of their intentions).

3. In some circumstances, agents believe that they will satisfy their intentions, but they always believe that it is possible to satisfy their intentions and they never believe that they will not satisfy their intentions.

In multi-agent systems, agents interact through cooperation, coordination, or competition. These interactions can be described in a number of ways, but all require some form of mutual support and commitment between agents [Bra92, Sea90]. One such description is through the use of joint intentions [LCN90]. A joint intention is a commitment by a group of agents to bring about a proposition, which persists until the group comes to mutually believe in their success or ultimate failure (i.e., that they will never succeed).
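The individual-intention commitment and its implications above can be caricatured in code. This is a hedged sketch only: the predicates `believes_achieved`, `believes_impossible`, and `consistent_with` are placeholder names, not the thesis's formalism, and the soccer propositions are hypothetical.

```python
# Sketch of intention adoption and abandonment: an intention persists
# until the agent believes it has succeeded or believes it never will,
# and candidates are adopted only if consistent with current intentions.

class IntentionStore:
    def __init__(self):
        self.intentions = set()

    def adopt(self, proposition, consistent_with):
        # Implication 2: adopt only propositions consistent with
        # every currently held intention.
        if all(consistent_with(proposition, q) for q in self.intentions):
            self.intentions.add(proposition)
            return True
        return False

    def reconsider(self, believes_achieved, believes_impossible):
        # Commitment persists until believed achieved or believed
        # impossible, at which point the intention is abandoned.
        self.intentions = {p for p in self.intentions
                           if not believes_achieved(p)
                           and not believes_impossible(p)}

store = IntentionStore()
store.adopt("intercept_ball", consistent_with=lambda p, q: True)
store.adopt("hold_position", consistent_with=lambda p, q: False)  # conflicts: rejected
store.reconsider(believes_achieved=lambda p: False,
                 believes_impossible=lambda p: False)
print(store.intentions)  # → {'intercept_ball'}
```

Implication 1 (acting only in service of some intention) would live in the agent's action-selection loop rather than in the store itself.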
The commitment constrains future deliberation and means-ends reasoning concerning group activity by requiring all joint intentions to be self-consistent as well as being consistent with each member's individual intentions. Joint intentions are a natural extension of the intentional stance for single-agent systems, but the amount of interaction required to determine theoretical mutual belief and knowledge can be impractical. Implicit mutual belief can be assumed for groups of agents that are likely to share similar desires (among teammates, for example), although the same cannot be said about dissimilar agents [BBS+00]. The intentional stance has also been used as a means of specifying behaviours by a number of successful agent architectures [BIP98, GL87, RG91]. However, this thesis will focus on its use as an explanatory model of agent behaviour.

2.2 Probabilistic Modelling

Realistic agents are subject to ambiguous and fallible sensors. For example, any two-dimensional projection of a three-dimensional environment is subject to various perspective ambiguities. Consequently, agents are often faced with decisions under uncertainty, and are only considered rational (in terms of decision theory) if they consider all of their options using probabilistic models and probability theory. A probabilistic model is a collection of random variables that describe a situation. A random variable (noted as a bold, capital symbol such as Z) is a mapping from a specific outcome z into a scalar measure, and the probability of that outcome is noted as Pr(Z = z). For any collection of random variables, Z^{1:N} ≜ {Z^1, Z^2, ..., Z^N},¹ statistical correlations and dependencies can be fully described by a belief network² [Pea88, CDLS99].
A belief network (BN) is a directed, acyclic graph (whose nodes represent random variables and whose arcs represent conditional dependence) with prior and conditional probability distributions over the constituent random variables. A BN encodes a particular factorization of the variables' joint distribution. Specifically, the semantics of a BN, B_0, are defined by Equation 2.1, where Parent(Z^i; B_0) is the set of variables who are parents of node Z^i in graph B_0.

Pr(Z^{1:N}) = ∏_{i=1}^{N} Pr(Z^i | Parent(Z^i; B_0))    (2.1)

When variables can change over time, time and its relation to the variables must be modelled either explicitly (by representing time as a specific random variable) or implicitly (by modelling transitions in the belief state over time). Dynamic belief networks provide a framework for the latter, focusing on the exploitation of temporal independence [Mur02]. A dynamic belief network (DBN) is defined as two BNs which aggregate to model semi-infinite sequences of random variables, Z_0^{1:N}, ..., Z_t^{1:N}, ...: B_0 describes a prior probability function over the variables, Pr(Z_0^{1:N}), and B_→ describes the variables' stochastic evolution over time, Pr(Z_{t+1}^{1:N} | Z_{0:t}^{1:N}). These two networks define the full joint distribution of the variable sequence, based on the semantics of each individual BN, as Equation 2.3.

Pr(Z_{0:t}^{1:N}) = Pr(Z_0^{1:N}) ∏_{τ=1}^{t} Pr(Z_τ^{1:N} | Z_{0:τ−1}^{1:N})    (2.2)

           = ∏_{i=1}^{N} Pr(Z_0^i | Parent(Z_0^i; B_0)) ∏_{τ=1}^{t} ∏_{j=1}^{N} Pr(Z_τ^j | Parent(Z_τ^j; B_→))    (2.3)

¹The notation X^{a:b} and X_{a:b} refer to the sets {X^a, X^{a+1}, ..., X^b} and {X_a, X_{a+1}, ..., X_b} respectively. If a > b, the set is empty, and if a = b the set contains only X^a (or X_a).

²Also called a Bayesian network or a probabilistic network.

2.2.1 Inference

In general, the collection of modelled variables can be separated into two sets: an evidential set Y containing those variables who can be observed directly, and a hidden set Q containing those who have been hypothesized, Z^{1:N} = {Y, Q}. General inference on a dynamic belief network is the determination of the joint distribution over unobserved variables conditioned upon specific evidence, Pr(Q_{0:∞}, Y_{t+1:∞} | Y_{0:t} = y_{0:t}). Motivated by the structure evident in Equation 2.2, useful inference in a DBN can be implemented iteratively by incrementally predicting future states and then updating the predictions from incoming evidence [RN95, Mur02]. This procedure, called Bayesian filtering, maintains the filtering distribution, Pr(Q_t | Y_{0:t}), using the following two steps.

1. During the prediction step, a distribution over the random values at the next time step is calculated. At the initial time, this is the prior distribution which is specified as part of the model. At any other time, the distribution is calculated by considering possible transitions from the current filtering distribution:

Pr(Q_{t+1} | Y_{0:t}) = ∫ ... ∫_{Parent(Q_{t+1}; B_→)} Pr(Q_{t+1}, Parent(Q_{t+1}; B_→) | Y_{0:t})    (2.4)

2. To obtain the current filtering distribution, the prediction is then updated by incorporating current evidence into the current instantiation of B_0:
This means that for certain networks of sufficient dimension (large number of random variables or continuous random variables) or insufficient conditionalization (grid-like graphs or graphs containing nodes with high fan-in) exact inference takes prohibitive amounts of time. For situated agents, this is a real concern and approximate methods become necessary. These methods approximate distributions in such a way as to simplify the inference task — although inference remains NP-hard in the accuracy of the estimates [DGH92]. Sequential Monte Carlo Sampling Sequential Monte Carlo sampling3 is a method for stochastically approximating an evolving probability distribution. The approximation is to represent the distribu-tion with a collection of randomly located delta functions, fit to the distribution of interest [Dou98, DFG01, GRS96, FTBD01]. For example, the filtered distribution in a DBN's current slice can be approximated using Equation 2.6: N Pr(Q 4 = g | Y 0 : t ) « Pr(Q t = q\Y0:t;N) d=f £ w® 5k(q - q?) (2.6) i=l where 5k is the delta function, defined in Equation 2.7. <• / x , ( l if x = 0 8k(x) dx = \ (2.7) domain(x) [0 else The approximation is parameterized by the path of N pairs, or particles: qf1 (which are sampled randomly from a proposal distribution, p(Qo:i|YU :t), whose support is a superset of the support of Pr(Qo :t|YU :t)) and (which weight the importance of each sample according to the true distribution via Equation 2.8). K Pr (Q 0 : t = | i | Y 0 : i = ^ ) ( 2 g ) P(Qo:t = %: t l Y 0 : t = V0:t) 3Also known as sequential importance sampling, particle filtering, or the CONDENSATION algorithm. 13 Ideally, one would sample from the filtered distribution itself, but difficulties in sampling from the distribution of interest lead to different proposal distributions. Because the system dynamics model is often easily parameterized, it is commonly used as the proposal distribution: p(Qo:t-|Yu:t) = Pr(Q t |Q t _i, Y U : t ) [IB98]. 
However, when observations are highly informative (more so than the dynamics), it is beneficial to sample from the posterior — p(Q_{0:t} | Y_{0:t}) = Pr(Q_t | Q_{t−1}, Y_{0:t}) — which minimizes importance weight variance and tends to focus the sample locations onto more informative regions of the distribution of interest [FTBD00, Dou98]. Monte Carlo sampling approximations are important because, as opposed to using a different but similar-looking model, the nature of the approximation is to focus consideration on a subset of the original model, which is represented with varying resolution and is refined over time. It can be proved that the limit of the approximation goes to the true distribution (Equation 2.9). Although the maximum likelihood and maximum a-posteriori values of the model remain elusive, the easily calculated mean of the approximated distribution is the only statistic of interest in this thesis.

lim_{N→∞} Pr(Q_{0:t} | Y_{0:t}; N) = Pr(Q_{0:t} | Y_{0:t})    (2.9)

2.2.1.2 Rao-Blackwellization

For some models, the opportunity exists to develop a hybrid inference technique, wherein a subset of the filtering distribution can be analytically marginalized, leaving a lower-dimensional approximation [AK77, DdFMR00]. This so-called Rao-Blackwellized estimate is theoretically not less accurate than a strictly sequential Monte Carlo estimate with the same number of samples [Dou98], but has been shown empirically to significantly reduce the variance from the true filtering distribution in most cases.

2.2.2 Parameter Estimation: the Expectation Maximization Algorithm (EM)

Learning a probabilistic model involves estimating the structure (i.e., the conditional dependencies) and the parameters (i.e., the prior and conditional probability functions) of a set of random variables. The discussion in this thesis is restricted to the case of known structure wherein some of the random variables of interest cannot be observed in the system.
In this case, the problem is to estimate the parameters, θ, of a model with known structure from a set of data, D, that is supposedly generated by it. However, exact Bayesian learning is often intractable. Useful, sub-optimal approximations can be derived using techniques such as gradient-descent optimization of the posterior or likelihood surface⁴ [BKRK97, Thi95]. One such approach is the Expectation Maximization algorithm (EM), which is proved to converge and has both online and offline formulations [DLR77, NH98, Lau95]. Assuming that the observed history of the system is comprised of independent and identically distributed sequences (D = {y_{0:T_1}, y_{0:T_2}, ..., y_{0:T_M}}), then the likelihood is the product of likelihoods from each sequence. Since the maximum of the likelihood's logarithm and the maximum of the likelihood itself are co-located at the same parameter values, the log-likelihood may be considered — replacing the product with a sum over the log-likelihoods of each data sequence. The key to the EM algorithm is to iteratively adapt the parameter estimate along with a distribution, Q, in such a way that a lower bound on the log-likelihood monotonically increases toward a local maximum. For example, the negative free energy, F, of the system is a lower bound based on the concavity of the logarithm relation.

log Pr(D | Θ = θ)
  = Σ_{i=1}^{M} log ∫ ... ∫_{Q_{0:T_i}} Pr(Q_{0:T_i}, Y_{0:T_i} = y_{0:T_i}; θ)    (2.10)
  ≥ Σ_{i=1}^{M} ∫ ... ∫_{Q_{0:T_i}} Q(Q_{0:T_i}) [log Pr(Q_{0:T_i}, Y_{0:T_i} = y_{0:T_i}; θ) − log Q(Q_{0:T_i})]    (2.11)
  = Σ_{i=1}^{M} F_i(Q; θ)    (2.12)

This lower bound (log Pr(D | θ) ≥ Σ_{i=1}^{M} F_i(Q; θ)) can be iteratively improved using the following procedure, which guarantees that the bound will never decrease. Initialize Q_1 and θ_1.
Repeat (increasing k and cycling i through [1, M]):

'E' step:  Q_{k+1} = argmax_Q F_i(Q; θ_k)    (2.13)

'M' step:  θ_{k+1} = argmax_θ F_i(Q_{k+1}; θ)    (2.14)

Until convergence.

⁴If a priori knowledge is uninformative, the maximum likelihood estimate approaches the maximum a-posteriori estimate in the limit of infinite amounts of unique training data [Gha98] (a direct result of Bayes' theorem).

It can be shown that the distribution that minimizes the free energy for fixed parameters (the 'E' step of the algorithm) is the posterior over hidden variables⁵ [Bil97]:

Q_{k+1}(Q_{0:T_i}) = Pr(Q_{0:T_i} | Y_{0:T_i} = y_{0:T_i}; θ_k)

Therefore, the iteration can be conglomerated into one step — Equation 2.15 — which can be described as iterative maximization over the expected values of local terms involving each node and its parents with respect to the posterior distribution over hidden variables:

θ_{k+1} = argmax_θ ∫ ... ∫_{Q_{0:T_i}} Pr(Q_{0:T_i} | Y_{0:T_i} = y_{0:T_i}; θ_k) log Pr(Q_{0:T_i}, Y_{0:T_i} = y_{0:T_i}; θ)    (2.15)

⁵Variational methods use other intuitive distributions and try to minimize the distance from this posterior. Such maximization is approximate, and is only used when expectations over the posterior cannot be computed, nor sampled, in a tractable manner.

In general, these expectations are intractable to compute. However, a general, approximate solution can be obtained by randomly sampling a finite set of hypotheses from Pr(Q_{0:T_i} | Y_{0:T_i}) [WT90]. By only considering those hypotheses, the integral reduces to a finite sum, significantly reducing complexity. This Monte Carlo integration converges to the true solution in the limit of infinite samples, but monotone convergence can otherwise not be guaranteed [Bis94]. In the unruly case where the posterior distribution over hidden variables cannot be sampled tractably, a more complicated sampling technique such as Markov Chain Monte Carlo or Gibbs sampling [GRS96] must be used.

2.3 Internal Representations: Modelling the World

The solution of most difficult problems benefits from reasoning about the problem's domain at some level of abstraction. For example, there is intuitive information that a soccer-playing agent would require to solve the problem of getting the ball into a goal: the locations of the ball and the goal, and how to move the ball. Two types of information can be distinguished: procedural knowledge concerning action (for example, how to move the ball) and environmental knowledge concerning the state of objects in the environment. Both types of information are relevant to the agent's control, but only environmental knowledge need be included in its representation of the domain (procedural knowledge is used to design the agent's controller). The world state is a vector of all the information deemed relevant to the agent's control. In the RoboCup Simulation League, for example, the world state is the 136-dimensional vector specified below:

The agent's position: x_a ∈ field ⊂ R²
The agent's velocity: v_a ∈ R², ‖v_a‖ < sp_max
The agent's heading: θ_a ∈ [0, 2π) ⊂ R
The agent's head direction: φ_a ∈ [−φ_max, φ_max] ⊂ R
Player i's position (21 players): x_{p_i} ∈ field ⊂ R²
Player i's velocity: v_{p_i} ∈ R², ‖v_{p_i}‖ < sp_max
Player i's heading: θ_{p_i} ∈ [0, 2π) ⊂ R
Player i's head direction: φ_{p_i} ∈ [−φ_max, φ_max] ⊂ R
The ball's position: x_b ∈ field ⊂ R²
The ball's velocity: v_b ∈ R², ‖v_b‖ < sb_max

It is up to the agent to then determine the content of the world state during operation. However, it may not be possible for the agent to do this unequivocally as sensor readings can be ambiguous or erroneous, and purely causal models can be inaccurate. In general, an agent may consider a probability distribution over possible world states. This is the agent's own belief about its external environment, and is governed dynamically by what the agent is able to infer through observation (interpreting sensations using sensor knowledge) and prediction (using causal knowledge to judge future states). In real environments, neither is sufficient in and of itself because the agent's knowledge is inaccurate and sensor data is unreliable, but performance is greatly improved by combining both of these sources.

2.3.1 Markov State Traces

If the state of the world changes over time, there is a trace of agent beliefs corresponding to the world's evolution. Equipped with knowledge of its causal evolution, an agent can use a mechanism of prediction to extend and correlate hypotheses over time. To make this prediction computationally feasible, the agent can assume that the trace possesses the Markov property: the evolution of a vector is Markovian (in the first order) if its value is independent of its history, given its immediate predecessor. This can be written as X_{t+1} ⊥ X_{0:t−1} | X_t — which means Pr(X_{t+1} | X_{0:t}) = Pr(X_{t+1} | X_t) — or graphically as in the DBN of Figure 2.4.

Figure 2.4: The first three time steps of a special dynamic belief network called a "Markov chain." The network depicts the independence assumptions behind first-order Markov vector evolution.

An agent can hypothesize the future state of a Markovian model by iteratively advancing a prior distribution through a relatively simple stochastic evolution model: the locality imposed by the Markov property can significantly reduce prediction complexity. However, it is always important to consider whether a model is consistent with the environment's true dynamics. For example, an object's position alone is not likely to exhibit Markovian evolution; whereas in conjunction with its velocity, applied force, coefficient of friction, etc., the Markov property is more plausible.
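The point above can be made concrete with a sketch: a one-dimensional object's (position, velocity) pair evolves as a function of the current pair only, so prediction can advance a cloud of sampled hypotheses without ever consulting the history. The friction parameter, the noise levels, and the prior below are invented for illustration.

```python
# A small illustration: position alone is not Markovian, but the pair
# (position, velocity) is, because the next pair depends only on the
# current pair. The 1-D friction dynamics and all numbers are invented.
import random

random.seed(1)
MU = 0.06  # made-up friction parameter

def step(state, noise=0.0):
    """Advance a (position, velocity) state one time step."""
    x, v = state
    return (x + v, v - MU * v + noise)

# Prediction: advance a set of sampled hypotheses (a prior distribution)
# through the stochastic evolution model, never consulting the history.
hypotheses = [(random.gauss(0.0, 0.5), random.gauss(1.0, 0.1))
              for _ in range(1000)]
for _ in range(10):
    hypotheses = [step(h, random.gauss(0.0, 0.02)) for h in hypotheses]

mean_x = sum(x for x, _ in hypotheses) / len(hypotheses)
print(mean_x)  # the cloud has drifted forward from its prior mean of 0
```

Tracking position without velocity would require remembering past positions to predict the next one, which is exactly the history dependence the augmented state removes.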
Any domain can be modelled as Markovian by using an appropriate world state (ultimately, the entire history can be included in the state description).

2.3.2 Imperfect Perception

In general, agents are not equipped to fully observe their environment. When some part of the state is ambiguous or hidden from the agent, it must be inferred through prediction (as described in Section 2.3.1) as well as perception. Perception is the process of mapping the world state, evident in various stimuli, into percepts. This mapping depends on the type and quality of the agent's sensors but is generally neither one-to-one nor deterministic: percepts represent equivocal information about the state of the world. In general, perception is a stochastic process described by a perception model that is used by an agent to describe the relation between percepts and the world state and to make prioritized judgements regarding observations. As a part of the perception model, it is common to assume that percepts depend only on the current world state (Y_t ⊥ X_{0:t−1}, Y_{0:t−1} | X_t). This assumption is often accurate and can significantly reduce the complexity of interpreting percepts. Combining the Markov property with this assumption produces the familiar hidden Markov model, shown in Figure 2.5.

Figure 2.5: The first three time steps of a special dynamic belief network called a "hidden Markov model." The model describes stochastic observations, Y_t, of an evolving vector, X_t, through a perception model, Pr(Y_t | X_t).

Despite these assumptions, however, realistic models are complex and performing exact inference can still be intractable. This problem has motivated the use of approximate models (obtained by fitting the model to a parametric distribution of lower order) that allow computable inference. Common examples are Gaussian distributions [Kal60], delta functions (as described in Section 2.2.1), and neural networks.
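A toy instance of such a model makes the filtering computation concrete: a two-state hidden world observed through a noisy binary sensor, filtered by alternating the prediction and update steps of Section 2.2.1. The weather/umbrella model and all of its probabilities are invented for illustration.

```python
# A minimal discrete hidden Markov model filter: a hidden two-state world
# (the weather) observed through a noisy percept (umbrella sightings).
# All states, percepts, and probabilities are invented.

states = ("sunny", "rainy")
# Transition model Pr(X_{t+1} | X_t).
transition = {"sunny": {"sunny": 0.8, "rainy": 0.2},
              "rainy": {"sunny": 0.4, "rainy": 0.6}}
# Perception model Pr(Y_t | X_t).
sensor = {"sunny": {"umbrella": 0.1, "none": 0.9},
          "rainy": {"umbrella": 0.8, "none": 0.2}}

def predict(belief):
    """Prediction step: push the belief through the dynamics."""
    return {s2: sum(belief[s1] * transition[s1][s2] for s1 in states)
            for s2 in states}

def update(belief, obs):
    """Update step: weight by the percept likelihood and renormalize."""
    unnorm = {s: belief[s] * sensor[s][obs] for s in states}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = {"sunny": 0.5, "rainy": 0.5}  # prior
for obs in ("umbrella", "umbrella", "none"):
    belief = update(predict(belief), obs)
print(belief)
```

After two umbrella sightings and one umbrella-free observation, the belief favours sun again, illustrating how the percept trace continually revises the hidden-state estimate.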
2.3.3 Self-Localization

The theory in Section 2.3.2 describes how to filter multiple observations over time where, as the state changes, the percept trace may contain new information that increases the belief accuracy. A similar effect is produced when multiple sensors observe the same object at the same time; because different sensing modalities may contain different information, combining these observations often decreases the ambiguity of any individual sensor. A special case of observation using multiple percepts is self-localization using landmarks. Landmarks are objects whose position is assumed to be known by the agent. For this assumption to hold, these objects are usually chosen to be stationary, easily identifiable objects (for example, the flags (disks), goals, and field lines shown in Figure 2.6). Observing landmarks with agent-centred sensors is equivalent to observing the transformation between the agent and the world, i.e., the agent's position and orientation or pose. For example, one range observation constrains the agent to lie on a circle centred on the landmark; the range to two different landmarks constrains the agent to lie on the intersection of two such circles (one of two points); etc., as shown in Figure 2.7.

Figure 2.6: Landmark locations in the RoboCup Simulation League domain. Figure adapted from [CFH+02].

Figure 2.7: The range to two landmarks constrains the agent to lie on one of two locations, which can be disambiguated using other landmarks. Range measurements with bounded noise constrain the agent to the shaded region.

Estimates of this type have been commonly used for localization in the RoboCup Simulation League (RSL), where each range measurement, i, is aggregated with noise of a known bound d_i. It is also common to assume that this noise is uniform over the area of intersection.
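The two-landmark construction above can be sketched directly: intersect two range circles, then use a third landmark to disambiguate the two candidate locations. The landmark positions and the agent's true position below are invented, and measurement noise is ignored for clarity.

```python
# Sketch of two-landmark localization: two range measurements put the agent
# on the intersection of two circles; a third landmark picks the right one.
# Landmark and agent positions are invented; noise is ignored.
import math

def circle_intersections(c0, r0, c1, r1):
    """Return the (up to two) intersection points of two circles."""
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.hypot(dx, dy)
    # Distance from c0 along the centre line to the chord, then half-chord.
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(0.0, r0**2 - a**2))
    mx, my = c0[0] + a * dx / d, c0[1] + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

agent = (3.0, 4.0)                      # ground truth (unknown to the agent)
flags = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.hypot(agent[0] - f[0], agent[1] - f[1]) for f in flags]

# Two landmarks give two candidates; a third range selects the right one.
candidates = circle_intersections(flags[0], ranges[0], flags[1], ranges[1])
estimate = min(candidates,
               key=lambda p: abs(math.hypot(p[0] - flags[2][0],
                                            p[1] - flags[2][1]) - ranges[2]))
print(estimate)  # close to (3.0, 4.0)
```

With bounded noise on each range, the circles above thicken into rings, and their intersection becomes the shaded region of Figure 2.7 rather than a point.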
The area of intersection can take on complicated shapes (depending on the locations of the two landmarks and the agent), so it is typical to fit the area to a two-dimensional, symmetric Gaussian with variance ½(d_i² + d_j²), and then to use this variance to weight the estimate [dBK02]. The agent could benefit from a more complicated model of perception. Although there has been minimal application of Bayesian filtering to the RSL domain, [dBK02] implements a Bayesian filter for agent position (but neglects filtering any other dimension of the state). They model a localization sensor using two cues:

1. the weighted average of every possible two-landmark estimate as described above, and

2. bounds on possible locations (obtained by intersecting the rings of possible agent locations from each landmark range observation).

Their localization sensor is then a uniform distribution whose domain is a square centred on the two-landmark estimate intersected with the bounds on possible locations, as illustrated in Figure 2.8. Although this uniform sensor model yields a tractable, exact Bayesian filter, they use an approximate filter based on a sampled distribution. Although this is more computationally expensive and less accurate than the exact filter, they report highly accurate beliefs in pose [dBK02].

Figure 2.8: Localization sensor described by [dBK02]: the "Two-landmark Uniform" localization sensor models the agent location as being uniformly distributed within the shaded area (the intersection between a square centred on the two-landmark estimate and the bounds on possible locations).

2.3.4 Identifying Objects

When an agent observes multiple, independent objects, it is presented with a data association problem: which observation corresponds to which element of the state.
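A toy sketch of this data association problem, using the hypothesis-tracking approach elaborated in this section: every plausible observation-to-object association is kept, scored by how well the modelled dynamics predict it, and pruned once it becomes sufficiently improbable. The two-object one-dimensional world, the sensor model, and all numbers below are invented.

```python
# A toy multiple-hypothesis tracker: keep every plausible association of
# unlabelled observations to objects, score each hypothesis by how well the
# assumed dynamics predict it, and prune improbable hypotheses.
# The two-object 1-D world and all numbers are invented.

PRUNE_BELOW = 0.01

# Each hypothesis: (positions of objects A and B, probability).
hypotheses = [({"A": 0.0, "B": 5.0}, 1.0)]
velocity = {"A": 1.0, "B": -1.0}        # assumed known dynamics

def likelihood(predicted, observed):
    """Made-up unimodal sensor model: closer prediction, higher likelihood."""
    return 1.0 / (1.0 + (predicted - observed) ** 2)

# Two unlabelled position observations per step: which is A and which is B?
for obs in [(1.1, 3.9), (2.0, 3.1), (2.9, 2.2)]:
    new = []
    for state, prob in hypotheses:
        pred = {k: state[k] + velocity[k] for k in state}
        # Branch on both possible associations.
        for a_obs, b_obs in (obs, obs[::-1]):
            p = prob * likelihood(pred["A"], a_obs) * likelihood(pred["B"], b_obs)
            new.append(({"A": a_obs, "B": b_obs}, p))
    z = sum(p for _, p in new)
    hypotheses = [(s, p / z) for s, p in new if p / z >= PRUNE_BELOW]

best = max(hypotheses, key=lambda h: h[1])
print(best[0])  # the objects have nearly crossed: A near 2.9, B near 2.2
```

Even though the two objects pass close to each other, the dynamics-consistent association dominates after a few steps, which mirrors the observation below that competing hypotheses are usually eliminated quickly.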
In many cases, sensors are not able to easily identify the object they perceive, yet it has been observed that many of these ambiguities can be resolved over time by considering causality [Int99, BSF88, RH98]. Specifically, an agent can correlate the modelled equations of dynamics with observations to maintain multiple hypotheses over time and eliminate hypotheses that become sufficiently improbable. Classical multiple hypothesis tracking represents multi-modal distributions using a set of Kalman filters, each corresponding to an association between a mode and an observation [BSF88, CH96]. Alternatively, one could use a sample-based representation or some hybrid. In practice, an agent is usually able to eliminate competing hypotheses quickly.

2.3.5 Modelling the Behaviours of Other Agents

Plan recognition is the inverse of planning — concerned not with determining optimal behaviour, but with inferring the influences on an agent's conduct after observing its action. However, these inverse problems often share a common model of behaviour: a probabilistic policy⁶. A policy is a general model of action selection — mapping state-action pairs into a desirability measure used to select action. This has a direct Bayesian interpretation that can be used to elaborate the evolution of the world state, as is done in Figure 2.9, providing a useful framework for modelling behaviours: plan recognition can be posed as probabilistic inference in this type of model. This section will present several such models with varying generality. The hidden Markov model (HMM), shown in Figure 2.5, is a general dynamic extension of mixture models that can model any distribution with arbitrary dynamics [Rab89]. Therefore, there is nothing theoretically problematic with using HMMs for modelling general behaviour, for example by expanding the world state description to include some characterization of action.
However, the time and sample complexity of HMMs are exponential in the dimension of the state space — the so-called curse of dimensionality. This makes inference impractical, in general, and subjects learning to severe over-fitting problems, as training data is unlikely to represent the full structure representable by the model. These issues are critical when modelling multiple classes of action sequences, which can dramatically increase the state space despite their inherent structure.

⁶Alternatively, one could recognize behaviours using deterministic, formal descriptions of plans [Gre69, MH69, LRL+97]. The power of these models, however, does not translate well to situated systems, which require physical grounding. The case for probabilistic inference in plan recognition is further argued in [CG93].

Figure 2.9: A model of an agent acting (with action A) in its environment (described by the world state X). The evolution of the world state is characterized by the agent's policy, Pr(A_t | X_t), and the environment dynamics, Pr(X_{t+1} | A_t, X_t).

Indeed, a general supposition of plan recognition is that behaviour is modular and that reuse of particular action sequences is an inherent property of most agent domains: complex sequences can be described using a sequence of piecewise simple sequences with simple interdependencies. Nonetheless, intuitive constraints based on such hierarchical or modular structure of the system cannot be easily exploited by the HMM. Consequently, HMMs alone are only suited for modelling simple, individual behaviours [HV00]. For more complicated cases, solutions can sometimes be obtained by discretizing the space into piecewise-tractable sections governed by intermediate-level abstract descriptions such as goals, plans, and abstract policies.
The most obvious discretization may be to model this type of behaviour using a mixture of HMMs, where the mixture component is allowed to transition upon completion of an individual HMM [Smy97, Hoe01, NH99, KA94] (as shown in Figure 2.10) or to transition with the state vector (a switched state-space model [PRM01, PFH99, PRCM99], as shown in Figure 2.11). An alternative is the hierarchical hidden Markov model (HHMM), which is specifically designed for domains that exhibit hierarchical structure with dependencies at multiple scales [FST98]. The HHMM is a stochastic automaton with three distinct types of states. Production states emit a single observation. Abstract states emit a sequence of observations governed by a sub-automaton; no horizontal transition may be made without first entering a sub-automaton. Termination states return focus to the parent, abstract state. Because the termination of each sequence class is modelled explicitly, the model can be trained

Figure 2.10: A mixture of hidden Markov models handles dependencies at multiple scales by explicitly modelling the class type responsible for each sub-sequence. Class transition occurs after the termination of the sub-HMM, which is modelled using a determined sequence length. Inference based on hard-coded sequence termination is easy (although it is also sensitive to noise in sequence length) but learning can only be done on pre-segmented sequences. Shown are two sequential classes with three time steps each.

Figure 2.11: Three time steps of a switched state-space model. A switched state-space model alleviates the strict, fixed-width requirement of the mixture of HMMs model, allowing a learning algorithm to classify the space based on observed evidence. Each subsequent classifier emits a single observation, as opposed to a fixed-length vector, because the class variable is allowed to transition at every time-step.
However, there is no specific mechanism for forcing the top level to change more slowly than the bottom level, or to change based on other cues such as the satisfaction of particular goals.

from unsegmented sequences. As sub-automatons are called recursively, the calling context is maintained in a depth-limited stack. This method of handling recursion is computationally efficient but bears a less powerful model than those which allow infinite recursion, such as stochastic context-free grammars. However, a significant advantage of the HHMM is that it can be represented as a DBN, thereby reducing inference complexity from cubic to linear in the number of states [MP01]. The resulting generic HHMM, represented as a DBN, is shown in Figure 2.12.

Figure 2.12: Three slices of a dynamic belief network representing a hierarchical hidden Markov model. The model is very much like the switched state-space model (Figure 2.11) save the random variables F^l, which model transitions within the sequence hierarchy.

The abstract hidden Markov model (AHMM) is an adaption of the HHMM tailored to the domain of single-agent policy recognition [BVW00, BVW01]. The space is discretized by modelling multi-scale behaviours with abstract policies [SPS99]. Abstract policies map the state of the world into other abstract policies, instantiating more and more refined policies until a final concrete action is effected. Computational efficiency is achieved by assuming that the state of an abstract policy depends only on the state of its immediate parent policy, and its termination probability depends only on the current policy, not on the whole context. This substantially reduces the number of parameters and allows for more efficient approximate inference. However, this model has not been tested in multiple-agent environments and there is no clear notion (either in the model or in the theory of abstract policies) governing the interactions of multiple agents.
Simply replicating the model is likely too general a solution to yield tractable or useful results.

2.4 The RoboCup Simulation League

In the study of situated agents, a consideration of their environment and task domain is of equivalent value to their architecture and internal processes. This thesis concentrates on agents situated in the RoboCup Simulation League (RSL) domain. [Mac93] proposed robot soccer as a suitable domain for studying situated agents, since the domain required them to overcome the restrictive assumptions of so-called "Good Old Fashioned Artificial Intelligence and Robotics." This idea was later posed as a formal challenge to artificial intelligence researchers and — with the advent of a yearly showcase of their successes (the Robot World Cup [KAK+97]) — has become a paradigmatic domain for analyzing and constructing situated, multiple-agent systems. Robot soccer is suitable because soccer agents are situated in a complex and highly dynamic environment. Successful operation requires careful consideration of the agent's percept trace and development of complicated individual policies and multi-agent strategies while acting quickly enough to properly interact with the ball and other players. Because these factors are so crucial in this domain, the properties of the control algorithms can be clearly observed and compared. The RSL system is composed of multiple, distinct software programs that interact through a lossy communication channel: the Agent programs that implement the agent controllers for 22 agents, and the SoccerServer program that implements the simulated environment including the ball and referee. The SoccerServer module (which can be modelled in Constraint Nets as in Figure 2.13) simulates the effects of action on its world state and provides stimuli to each agent [Nod95, CFH+02]. It is critical for each agent that is interacting with the SoccerServer to have sufficient knowledge about its operation. In particular, knowing 1.
how action is effected in the simulated environment, as well as 2. the relation of the stimuli to the environment state allow the agent to best reconstruct the world state. This knowledge can be derived from the SoccerServer's internal timing, rules for effecting action, stimulation mechanism, and equations of dynamics.

Figure 2.13: The Constraint Net module modelling the SoccerServer.

2.4.1 Object dynamics

The Simulate module determines the next state of the ball, each agent, and the game. The state of the game (based on the rules of soccer) is reported to each agent through a hear percept. All other objects in the world adhere to the same parametric motion model described by the following equations of discrete dynamics, where at time step t, x_t and v_t are the position and velocity of the object respectively, f_t is some external force applied instantaneously to the object, m is the object's mass, and μ is an object-specific parameter describing collective friction [Nod95, CFH+02].

x_{t+1} − x_t = v_t    (2.17)

v_{t+1} − v_t = −μ v_t + (1/m) f_{t+1}    (2.18)

2.4.2 Stimulation

The SoccerServer provides both proprioceptive and visual stimuli to each agent. Proprioceptive stimulus is provided through the Stimulate body module, and contains

• the agent's current stamina,
• the agent's current effort level,
• a count of effected actions,
• a measurement of the agent's current head direction relative to its body, y_{φ_a}, based on its actual value, φ_a, in radians⁷ according to Equation 2.19, and

y_{φ_a} = (2π/360) round((360/2π) φ_a)    (2.19)

• a measurement of the agent's previous velocity, y_{v_a}, based on its actual value, v_a, according to Equation 2.20.
y_v^a = ⌊100(1 − μ)‖v^a‖⌉_{0.1} ∠ ⌊∠v^a⌉_{π/360}        (2.20)

where ⌊x⌉_δ denotes quantization of x to the nearest multiple of δ, and the operator

a ∠ b = a (cos b, sin b)^T.

Visual stimulus is provided through the Stimulate vision module by a typical[8] sensor that measures an object's position relative to the observer [Nod95, CFH+02]. The precise mapping from world state to percept for the position of any object, o, is defined by Equation 2.21, where q is a SoccerServer parameter whose default is 100 for landmarks and 10 for all other objects.

y^o = (1/10) ⌊10 exp((1/q) ⌊q ln‖x^o − x^a‖⌉)⌉ ∠ ⌊∠(x^o − x^a)⌉_{π/360}        (2.21)

In addition to relative position, visual stimuli also contain a measurement of the object's identification. The amount of identifying information for players varies inversely with the distance between the observer and the player, allowing the observer to identify a player uniquely, partially, or completely ambiguously:

• If the player is within 20 meters of the observer, the observer perceives complete identification: both the team name and number of the player.
• If the player is between 20 and 40 meters from the observer, the probability that the observer will perceive the player's number decreases linearly from 1 to 0.
• If the player is between 40 and 60 meters from the observer, the observer does not perceive the player's number, and the probability that the observer will perceive the player's team name decreases linearly from 1 to 0.
• If the player is further than 60 meters from the observer, then the observer does not perceive any information about the identification of the player.

[7] All angles in this thesis are measured in radians.
[8] For example, many fielded robots with multiple cameras use knowledge about the pose of each camera along with correlations between individual stimuli to obtain the range and bearing of each captured ray of light [HZ00, EHL+02]. Alternatively, the transit time of active light or sound waves can measure range in any particular direction.
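The distance-dependent identification rules above can be sketched as a small piecewise-linear model (a minimal illustration; the function name and the return convention are ours, not the SoccerServer's):

```python
def identification_probabilities(distance):
    """Return (P(perceive team name), P(perceive number)) for a player
    at the given distance in meters, following the piecewise-linear
    rules: full identification under 20 m, number fading 20-40 m,
    team name fading 40-60 m, nothing beyond 60 m."""
    if distance < 20:
        return (1.0, 1.0)
    if distance < 40:
        return (1.0, (40 - distance) / 20)
    if distance < 60:
        return ((60 - distance) / 20, 0.0)
    return (0.0, 0.0)
```

For example, at 30 m the observer always perceives the team name but sees the number only half the time; at 50 m the number is never perceived and the team name is seen half the time.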
2.4.3 Internal Timing

The SoccerServer simulates continuous dynamics using a discrete algorithm governed by an internal clock. The Clock module produces a sense_body event every 100 milliseconds, and a see event every 150 milliseconds. These events are used to schedule the dynamics simulation and agent stimulation in the following manner:

• Every 100 milliseconds (immediately preceding a sense_body event), the SoccerServer simulates the environment dynamics.
• Every 100 milliseconds (on a sense_body event), the SoccerServer provides proprioceptive stimulus to each agent.
• Every 150 milliseconds (on a see event), the SoccerServer provides visual stimulus to each agent.

Because of the inherent frequency mismatch, three distinct cycle types can be defined: one where the visual stimulus is provided in the beginning of the cycle, one where the visual stimulus is provided in the middle of the cycle, and one where no visual stimulus is provided (see Figure 2.14).

Figure 2.14: Event timing within the SoccerServer module. Cycle types 1 through 3 represent cycles where the visual stimulus is provided in the beginning of the cycle, in the middle of the cycle, and not at all, respectively.

2.4.4 Action Failure

The SoccerServer accepts action requests from agents throughout each cycle, but only simulates their effects every 100 milliseconds. Action duration is modelled by the ActionFilter module that restricts each agent to one action in a single cycle; if more than one action is requested, all subsequent requests are ignored — i.e., the action fails (see Table 2.1). It is therefore important that the agent only try to act once per cycle to ensure that all of its action requests are honoured.

Table 2.1: SoccerServer rules for effecting action based on the reception times of sequential action requests. An action request is considered "on time" if it is received before simulation.
Previous request    Request     Action
On time             On time     Effected this cycle
Late                On time     Not effected (action failure)
On time             Late        Effected next cycle
Late                Late        Effected next cycle

2.4.5 Interacting with the SoccerServer

Real agents are situated in a dynamic world and the success of their activity is directly dependent upon the synchronization of sensors, actuators, and deliberation with the environment. This must be achieved in the face of bounded computational resources and unpredictable situations. Although the RSL environment is relatively structured, synchronization has been the topic of much research (see [BPH00] for a survey of popular solutions).

An ideal agent senses the environment, updates its belief in the world state, decides on a course of action based on those beliefs, and effects action, all before the environment changes state in such a way as to cause the action or decision to fail.[9] Such behaviour is sufficiently reactive, and can be defined in the RSL with the following temporal precedence constraints on the system.

1. The first constraint is strict: the SoccerServer must receive one action request before it begins simulating the dynamics. If the SoccerServer receives the action request after it simulates the dynamics, then the agent will remain idle until the next simulation cycle. If the SoccerServer receives more than one command in a single cycle, the second and any subsequent command is ignored, as described in Table 2.1.

2. The second constraint is soft: the agent should command action only after sufficient sensing and deliberation. If this constraint is violated then the agent is basing its action on less reliable beliefs.

[9] One can distinguish between action failures (where the agent attempts to perform the action and fails) and decision failures (where the agent succeeds in action, but the action choice is based on an inconsistent model). Depending on the situation, both can be equally dangerous.
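The rules of Table 2.1 can be sketched as a small decision function (a minimal illustration; the predicate names are ours, not the SoccerServer's):

```python
def action_outcome(prev_on_time, on_time):
    """Fate of an action request (Table 2.1), given whether it and the
    previous request each arrived before their simulation steps."""
    if on_time:
        # an on-time request succeeds only if the previous one was also
        # on time; otherwise the late predecessor occupies this cycle
        return "effected this cycle" if prev_on_time else "not effected"
    # a late request is deferred to the next simulation cycle
    return "effected next cycle"
```

Satisfying the first timing constraint below amounts to guaranteeing the "on time" rows of this table.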
For example, the CMUnited98 agent acts at the beginning of each simulation cycle [SVR98]. This virtually ensures the first constraint by abandoning the second: all decisions are based on prediction from the previous cycle and not on current sensor data. Though this method has proved successful in this domain, where the dynamics are structured, well known, and easy to simulate, the approach is ill-advised in general.

These constraints can be satisfied by matching the agent's sense-act loop to the environment's natural frequency. In the case of the RSL, an agent must determine the SoccerServer cycle (thereby determining when action is effected, when stimulation is to be expected, and how much deliberation time is available) and coordinate with it by using expectations about its deliberation limitations. This meta-level procedure is called deliberation scheduling [BD94, ZR96, DBS94], and the resulting agent is said to be synchronized with the SoccerServer in that it has determined and adapted to the environment's natural frequency. Asynchronous operation can result in irrelevant sensing, inconsistent representations, and delayed, failed, or inappropriate action — serious problems which can affect even the most sophisticated skills and plans [BBS+00, Dor00, SS00, Pou00].

The environment's natural frequency can be deduced through sufficient interaction [HPH00, OPS99], but the communication protocol in the RSL is too restrictive to support such standard solutions. The deduction must come instead from an accurate system specification [Cri89]. However, simple specifications have been adopted in the past that require unjustified assumptions. For example, two strategies have been described that each exploit different knowledge about the system [BPH00, MM02]:

Internal synchronization employs the belief that the SoccerServer cycles at 10 Hz. Under this specification, it suffices to act every 100 ms to satisfy the first constraint of correct action.
However, the second constraint is largely overlooked.

External synchronization employs the belief that proprioceptive stimulus is provided at the beginning of each cycle. Under this specification (and the assumption of minimal communication delay) the SoccerServer cycle can be inferred and an appropriate action time chosen.

In practice, however, their intended behaviour is complicated by fluctuations in the SoccerServer cycle's periodicity caused by clock inaccuracies[10] and fluctuations in perception times caused by varying communication reliability.[11]

[10] The quartz oscillations that computers use to keep time are inaccurate. Typically, a clock can deviate from real time by as much as 30 milliseconds over a 10 minute period [OPS99]. Also, since the system is typically not run over a real-time operating system, interval timers tend to overrun when the processor is overly taxed. These anomalies occur independently across the system which, during the course of a full game, can incur timing deviations that constitute significant portions of the simulation cycle period.

Figure 2.15: Typical fluctuations in the true and observed SoccerServer cycle, measured by timing the production and reception of sense_body events respectively. When system resources are strained, the timer provided by the operating system causes the SoccerServer's cycle to fluctuate (left). Additionally, network variation (right) causes the observed cycle to fluctuate as the aggregate of both the timer and network delays.

There exists a theoretical framework for understanding the effect of these complications, which shows that internal synchronization, while not very sensitive to network variation, is unlikely to act at the optimal time [MM02]. External synchronization can perform much better, because the proprioceptive stimulus can be a good
indicator of a new simulation cycle when communication delays are reliably small. As transmission delay increases, however, the likelihood of satisfying the above constraints decreases rapidly. This is particularly evident with large communication delays where, as shown in Figure 2.16, external synchronization can perform worse than internal synchronization.

[11] Varying network traffic causes unobservable and unpredictable communication delays. Further, since the system uses a lossy communication protocol, messages can arrive out of order or be lost entirely in the transport layer. Therefore, the sensitivity of a synchronization method to such communication faults is extremely relevant.

Figure 2.16: A comparison of the Internal (solid) and External (dashed) synchronization algorithms, made using the theory introduced in [MM02]. The plot shows the probability of a correct action against the expected communication delay, t, from 0 to 100 milliseconds. A correct action satisfies both of the constraints described in the beginning of this section.

2.5 Summary

Although an agent may be considered as nothing more than a function from its percepts and history into action, this chapter has expanded on the necessary complexity of such a function for agents in realistic domains. This complexity has hampered development of artificial agents, as each constituent aspect seems to require significant consideration.

• Perception and modelling, which combine to manage the agent's internal representation, must adequately describe the state of the world and its inhabitants. This chapter has introduced theoretically acceptable ways of handling the vast possibilities left after considering ambiguous percepts, using probability theory and approximate techniques.
• Deliberation is the process of making goal-oriented decisions based on the agent's internal representation of the world. Systematic search of the options can solve deliberation with unlimited resources. However, in realistic domains the problem becomes one of optimally allocating the available resources to obtain the best decision possible. This chapter has introduced these issues, and refers to various architectures that address this issue differently.

Ultimately, these constituents must be integrated efficiently into an architecture that allows proactive agents to operate in demanding environments with sufficient reactivity. This chapter has also described the RSL domain, a challenging multiple-agent domain well suited to the study of the facets of agency. The remainder of this thesis will present novel considerations and developments in these areas.

Chapter 3

A Constraint-based Agent Architecture

The typical description of a single physical agent is as the composition of a body (which acts as the physical interface to the agent's environment) and a controller (which polls the body for its interpretations of the environment, and issues commands that evoke action). The precise composition is defined by a particular agent architecture, whose responsibility is to provide certain assurances regarding the agent's properties, such as reactivity, proactivity, taskability, etc. For example, a human agent has a body, consisting of bones, nerves, muscles, and various organs; and a controller which, although usually considered to be an abstract entity, is believed to reside mainly in the nervous system. The architecture is described, as far as it is understood, by great volumes of anatomy and physiology (and at a finer scale by our genome, chemistry, physics, etc.) along with the whole of psychology and cognitive science.
This chapter presents a novel architecture for an agent situated in a demanding environment, i.e., an environment that requires the agent to make complicated, time-critical decisions. The primary conflict is in attaining rational decisions, which may require significant resources and time, while remaining sufficiently responsive. The conflict is addressed by careful management of the available resources, using the intentional stance to reduce the requirements of deliberation, and an adaptive deliberation scheduler that allocates resources in a manner appropriate to the current task and environment.

The mechanism of resource allocation is intimately connected to the fundamental design paradigm of this architecture: each of the controller's particular responsibilities is tackled asynchronously, independently, and in an interruptible manner. The result is an anytime-available action command, and the longer the agent devotes to deliberation — the more possibilities can be filtered and the more optimized the action command becomes — the more rational the resulting action will be.

The architecture is distinguished by four notable processes organized into CN modules in a layered topology, as shown in Figure 3.1. The precise layer boundaries are not specified, but an operator hierarchy (from the bottom, most reactive component to the top, least reactive component) is inherent.

Figure 3.1: A Constraint Net module of the constraint-based architecture.

• The Scheduler module, the most reactive process, monitors the agent's controller interface to ensure that actions are being effected successfully.
By scheduling the command event directly, the Scheduler indirectly controls the resources used by the other modules by controlling how long the agent devotes to deliberation.

• The Filter module uses the agent's percept trace as evidence to make inferences using a probabilistic model of the environment and its inhabitants. These inferences constitute the agent's internal representation of the external world.

• The Planner module uses the inferred state of the environment to reconsider the agent's set of intentions. Particular intentions can be adopted, abandoned, or may persist in the set. The combination of adopted intentions then focuses further deliberation.

• The Means-ends reasoning module uses the agent's current belief about the state of the environment to optimize an action command that suits the adopted set of intentions.

Each of the above modules is elaborated upon separately in the following sections.

3.1 Adaptive Resource Scheduling

Because planning and means-ends reasoning require resources and take time, deliberation as a whole directly opposes the reactivity requirements of situated agents. This opposition can be understood by considering the cost of deliberation and realizing that, at some point, the cost exceeds the benefit in increased decision quality obtained from further deliberation.

As an example, consider a soccer-playing agent who has received the ball and must decide whether to pass or shoot the ball. Soccer is a highly dynamic domain, and the chances are that if the agent devotes too many resources to deliberation (foregoing interpretation of novel percepts, for example) or allows too much time to pass during deliberation (in the attempt to derive an optimal decision) it will find that the quality of the implemented decision is lower than expected. For example, opponents may have used the time to block the pass or shot under consideration.
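This trade-off can be illustrated with a toy calculation (the quality and cost curves below are hypothetical, chosen only to show that the net value of a decision peaks at a finite deliberation time):

```python
import math

def quality(t_ms):
    # hypothetical: decision quality improves with diminishing returns
    return 1.0 - math.exp(-t_ms / 20.0)

def cost(t_ms):
    # hypothetical: the world degrades the decision linearly as time passes
    return 0.01 * t_ms

def net_value(t_ms):
    return quality(t_ms) - cost(t_ms)

# the best deliberation time is interior:
# neither act-immediately nor deliberate-forever is optimal
best_t = max(range(0, 101, 10), key=net_value)
```

Under these curves the maximum falls at an intermediate deliberation time: acting instantly wastes the cheap early gains in quality, while deliberating for the full horizon lets the accumulated cost erase them.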
Clearly, a different definition of optimality is required for decisions; one that directly considers the effect of used time and resources on the quality of a decision. A process that actively considers this type of extension is referred to as a meta-level reasoner, because it reasons about when and how to reason.

The architecture described in this chapter relies upon a meta-level reasoner, implemented within the Scheduler module, to achieve the desired responsiveness for the agent. The mechanism of adaptation is the scheduling of the agent's command frequency. Because of the asynchronous and interruptible operation of the architecture, the scheduling of a command implicitly constrains the resources used by higher layers. The result is that the Scheduler module can, at any time, issue the best command that has been collectively designed by the entire controller.

The meta-level reasoning itself is based on an optimistic principle: deliberate as much as possible, while still producing action commands that can be effected successfully, and are effected in a situation that is still appropriate to the decision. If the meta-level reasoner can determine whether actions are succeeding or failing in this respect (due to insufficient reactivity, for example) then the Scheduler module can modify the command frequency in response, thereby affecting the resources available to the other modules. In most cases, this principle induces stable and responsive command scheduling that is able to adapt to the evolving demands of the agent's task and environment.

3.2 Interpreting Observations

An agent is a function from sensation into action, albeit a complicated function. Internally, stimulus from the environment is converted into percepts by the agent's sensors, before being interpreted by the agent controller. It is up to the agent to then reconstruct the likely state of the environment from its perceptual data.
The reconstruction is the agent's internal representation, and is used to solve problems and make decisions concerning the real world.

The internal representation used in this architecture is based on a probabilistic model of the agent's domain: a collection of random variables that collectively describe the relevant state of the environment. The model is derived by choosing the constituent variables as well as specifying their dependencies upon each other and time. For example, Section 2.3 describes a number of typical models, and a new model that considers the behaviours of multiple, observed agents using the intentional stance is described in Chapter 4.

Percepts are evidence about the true state of the environment, but can be ambiguous, incomplete, and noisy. Nonetheless, the agent is able to use the probabilistic model to infer the likelihood of different possibilities based on this evidence, as described in Section 2.2. In this way, the Filter module maintains consistent beliefs about the environment using a Bayesian filter and its particular model. The distribution over world states is then used by the agent to influence its decision process.

3.3 Deliberation

Deliberation is an agent operation wherein the internal representation is considered to arrive at a decision concerning action. Within the architecture described in this chapter, deliberation is performed concurrently by two separate processes: planning and means-ends reasoning.

The highest level of deliberation is planning: the consideration, evocation, and revision of intentions. This process, carried out in the Planner module, is fashioned on the intentional stance view of agency, where intentions are represented
Once the agent has committed to an intention, the agent pursues it until it concludes that it has been satisfied or that it will never be satisfied and should be abandoned. The agent then only considers options that are consistent with its currently adopted intentions, reducing decision complexity. The lower level of deliberation is means-ends reasoning: the selection of spe-cific action designed to satisfy the agent's intentions. Performed in the Means-ends reasoning module, this process optimizes an action command to achieve the in-tended set of constraints. The optimization is a gradient ascent traversal of the world's state-space, performed using available actions, into regions of satisfiability. The list of conjunctions of constraints that specify an intention can be viewed as a plan where each list element is an abstract action designed to achieve a particular task. However, this is more general than an ordered sequence of action commands: in this scheme, individual action commands are constructed online based on the success of previous actions and the believed state of the environment, while the agent actively considers alternate options. In this way, the intentions aggregate with a constraint solver to define the agent's particular policy. 3.4 Implementation A constraint-based agent architecture has been introduced in general. In order to design a functional agent controller based on this architecture, however, the opera-tion of many of the processes described in the previous sections must be specified in more detail. Chapter 4 introduces a specific model of a multiple-agent environment that can be used in the architecture, particularly by the F i l t e r module; and Chapter 5 will provide the specifics required to implement this agent for the RoboCup Simu-lation League domain. 
Ultimately, Chapter 6 will explore whether the combination of reactive and deliberative components, under control of the resource-scheduling meta-reasoner, is able to operate successfully in a highly dynamic and interactive environment.

Chapter 4

Modelling Environment and Agent Behaviours

This chapter is concerned with empowering an observer to understand the dynamics of an inhabited environment. The observer may be a passive witness who analyzes percepts from video or some other sensor without impacting the environment, or an embedded actor who uses its analysis to influence the system. However, the observer is assumed to exist independently from the observed agents in terms of its goals and actions (i.e., the observer is not a member of the observed group). The actual use of such empowerment will not be addressed in this chapter, except to posit that it is in fact useful, for example, to better understand a group of other agents' actions.[1]

To "understand" the dynamics of an inhabited environment, the observer can compare percepts with a probabilistic model of these dynamics and infer various hidden properties. In this way, perception evolves beyond direct sensory stimulation to become the dynamic inference of useful characteristics. This chapter will introduce a specific model that enables an observer to better interpret a multiple-agent environment. This model is shown in Figure 4.1.

Previous work has shown that a set of variables' dynamic evolution can be theoretically modelled using a hidden Markov model (HMM). In the model described in this chapter, the HMM structure is borrowed to model the fundamental elements of the world state, such as the position and velocity of objects. While this seems to work well for such simple vectors with well-defined dynamics, enlarging the vector

[1] Although an agent is defined as an entity that acts in its environment, the robustness and appropriateness of that action is intimately connected with the evolving state of the environment and its co-inhabitors. Competent analysis of these dynamics can be an important component of a decision process. However, the model's full application in a decision process is not explored in this thesis.
While this seems to work well for such simple vectors with well defined dynamics, enlarging the vector 1 Although an agent is defined as an entity that acts in its environment, the robustness and appropriateness of that action is intimately connected with the evolving state of the environment and its co-inhabitors. Competent analysis of these dynamics can be an impor-tant component of a decision process. However, the model's full application in a decision process is not explored in this thesis. 40 to model complexities that arise out of the exogenous actions of other agents, for example, makes learning and inference in the H M M intractable or inaccurate. Therefore, to postulate other information the model must be extended with more restrictive dependencies. This extension is based on the hypothesis that ob-served agents are intentional — i.e., possess beliefs, desires, and intentions in accor-dance with the psychological theory of intentionality and whose inference serves as an abstract behavioural description. This structured elaboration imposes a particu-lar independence topology on the modelled variables, partially shown in Figure 4.1 and further elaborated upon throughout this chapter. The underlying motivation of this model as a whole is the assumption that abstraction of an observed agent's policy to the facets of intentionality (those de-scribed in Section 2.1.2) will attain computational tractability while retaining an appropriate amount of modelling accuracy. The remainder of this chapter will discuss the random variables used in this model along with their dependencies. Section 4.5 develops an inference algorithm for the model, and theoretically analyzes the inference complexity. In Section 4.6, a method of predicting model parameters making use of particular observations is provided based on the Expectation Maximization algorithm. 
4.1 Agent Beliefs

Intentional agents base their decisions on their own beliefs: an internal representation of the world state, constructed from particular perception and prior knowledge. The model in Figure 4.1, as a whole, constitutes the observer's beliefs: it is a probabilistic internal representation of the state of the world, including the mental attributes of a group of intentional agents (excluding the observer). The model assumes that these observed agents are intentional themselves and, hence, possess beliefs of their own. This is modelled by the random variables B_t^q, which are the observer's beliefs about the observed agents' beliefs.

Agents from the same group are assumed to have the same perceptual process (the mapping of percepts to belief about the external environment). Of course, these agents do not have the same beliefs, as they each hold a different perspective in the environment. In general, agent beliefs may be logically inconsistent both internally (among components of the belief) and externally (with respect to other agents or the true world state). However, the observer should assume some range of acuity in the observed agents' perceptual processes in order to simplify parameter estimation and inference. In the RSL, each particular world state provides deterministic percepts to each agent (percepts are ambiguous but are not stochastic).

Figure 4.1: A hierarchical, probabilistic model of multiple, intentional agents acting in the world, represented as a dynamic belief network. The bottom of the figure may be recognized as a hidden Markov model of the world state. The world state depends at each time step on its history as well as on the actions, A_t^q, of all p agents.
How each agent arrives at an action is based on the theory of intentionality: each agent has an individual intention, I_t^q, and the group as a whole has a joint intention, J_t. Further, each agent has a belief about the state of the world, B_t^q, which is used to determine the satisfaction or abandonment of individual and joint intentions (S_t^q and T_t respectively) as well as their adoption.

A simple mapping from this percept is a reasonable lower bound on the perceptual acuity. Alternatively, an agent may filter its percepts (for example, with a Kalman filter or another approximate Bayesian filter) with increasing accuracy up to the optimal filter.

An acuity range is enforced in the model by specifying a set of logical propositions concerning the environment, and allowing agents to bet on their truth values. The observer can then learn the similarities between how its own beliefs about the environment (X) are formed and those of the other agents (B^q). For the RSL, appropriate environmental propositions include the following:

• a discretized position of each agent,
• distances between opponents,
• distances between opponents and their nearest teammate,
• a measure of the ball's interceptability if passed from the ball carrier to each of its teammates, and
• a measure of each opponent's ability to score with a shot to the goal.

For other domains, beliefs may include other propositions, even modelling higher-order beliefs (e.g., beliefs about beliefs). Given a set of propositions, the correlation between an agent's beliefs and the observer's beliefs can be parameterized by a number of probabilities (one less than the number of propositions) under the assumption that each proposition is independent of each other and that the correlation is independent of the proposition's value. These probabilities, parameterized within the vector β, fully specify the observer's model of perceptual difference between itself and the observed agents, as described in Equation 4.1.
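The proposition-level belief correlation can be sketched with a toy sampler (an illustrative model only: the boolean propositions and the per-proposition agreement probabilities β are our assumptions, since Equation 4.1 itself is not reproduced here):

```python
import random

def sample_agent_belief(observer_belief, beta, rng=random):
    """Sample an observed agent's belief about each boolean proposition:
    with probability beta[k] it agrees with the observer's belief about
    proposition k, otherwise it holds the opposite value (illustrative)."""
    return [b if rng.random() < p else (not b)
            for b, p in zip(observer_belief, beta)]

# with beta = 1 everywhere, the agent's beliefs coincide with the observer's
agreed = sample_agent_belief([True, False, True], [1.0, 1.0, 1.0])
```

Estimating β from data then reduces to measuring, per proposition, how often the observed agents' apparent beliefs agree with the observer's own.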
4.2 Agent Intentions

The elaborated model is centred around the assumed intentionality of observed agents. One salient feature of this intentionality is that both individuals and groups commit to bringing about particular propositions (under a host of assumptions and constraints imposed by the theory of intentional systems, introduced in Section 2.1.2). Therefore, individual intentions are assumed for each observed agent q, and are modelled using the multinomial random variables I_t^q, whose domain covers the commitments to all possible such propositions. The model assumes that each observed agent has exactly one intention. Although the details of the intended proposition may be unknown, its ultimate value is directly modelled using the boolean random variables S_t^q, which are true when the propositions are true or will never be true. The remainder of this section describes the dynamics of intentions, as modelled using these two variables: Section 4.2.1 describes how intentions are adopted and how they evolve; and Section 4.2.2 describes how intentions are dropped.

When observing and trying to understand a group of intentional agents, as opposed to an individual, the group as a whole is similarly assumed to have a joint intention. Joint intentions are handled similarly to individual intentions by the analogous random variables J_t and T_t, which model the group's joint intention and whether it is (or never will be) satisfied, respectively. However, joint intentions are hierarchically superior to individual intentions in that the group's joint intention can influence that of an individual. These specifics are detailed in the remainder of this section.

4.2.1 Intent Formation

Although agents may intend to satisfy many propositions, all of these must be logically consistent. The multinomial intention random variable, I_t^q, indexes all possible sets of logically consistent propositions.
Therefore, this unitary variable can be used to model multiple, concurrent intentions to unclarified propositions. Equivalently, J_t indexes the propositions that are used as joint intentions of the observed group.

As is described in Section 4.2.2, an agent's intention persists until it is satisfied or will never be satisfied. This differs from the mechanism used by the hierarchical hidden Markov model described in Section 2.3.5, which allows more general stochastic transitions. However, the restriction simplifies parameter estimation and produces more accurate inference, if appropriate.

The adoption of a group's joint intention requires mutual desire in a particular proposition and mutual belief that the desired proposition is not satisfied. This requirement is modelled within the mapping J : B^{1:p} → J, which describes the distribution of likely joint intentions that will be adopted given a certain set of the group members' individual beliefs.

J(b_{1:p}) = Pr(J_{t+1} | T_t = ⊤, B_t^{1:p} = b_{1:p})   (4.2)

The distribution J performs a vital role in the joint intention's dynamics, which are modelled by the conditional probability function in Equation 4.3.

Pr(J_{t+1} = j' | J_t = j, T_t = r, B_t^{1:p} = b_{1:p}) = { J(b_{1:p})  if r = ⊤;  δ_K(j' − j)  else }   (4.3)

Individual intentions obey similar dynamics, except that their adoption is based on the agent's beliefs about the state of the world and the group's joint intention. This process is described by the intent formation models (parameterized by the mappings I_j : B → I, one for each possible joint intention) which describe the initial distribution of intentions given particular joint intentions and beliefs.

I_j(b) = Pr(I_{t+1}^q | B_t^q = b, S_t^q = ⊤, J_{t+1} = j)   (4.4)

Based on this distribution, individual intention dynamics are modelled by the conditional probability function in Equation 4.5.
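The piecewise form of these dynamics (copy the old intention forward unless it has terminated, and otherwise draw a fresh one from the adoption model) can be sketched as follows. The function and variable names are hypothetical, and the adoption model (playing the role of J or I_j) is passed in as a callable:

```python
def intention_step(prev_intention, terminated, beliefs, adoption_model):
    """Return a distribution {intention: probability} over the next
    intention, following the two-case form of Equations 4.3 and 4.5.

    prev_intention: index of the intention held at time t.
    terminated:     True when the termination variable (T_t or S_t^q) is true.
    beliefs:        the conditioning beliefs (b_{1:p} or b).
    adoption_model: callable mapping beliefs to {intention: probability}.
    """
    if terminated:
        # Reconsideration: adopt a new intention from the adoption model.
        return dict(adoption_model(beliefs))
    # Persistence: a Kronecker delta at the previous intention.
    return {prev_intention: 1.0}

adopt = lambda b: {0: 0.3, 1: 0.7}      # a toy adoption model over 2 intentions
persisted = intention_step(0, False, None, adopt)
renewed = intention_step(0, True, None, adopt)
```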
Pr(I_{t+1}^q = i' | B_t^q = b, I_t^q = i, S_t^q = s, J_{t+1} = j) = { I_j(b)  if s = ⊤;  δ_K(i' − i)  else }   (4.5)

The modelled dynamics of joint and individual intentions exhibit context-specific dependence, and the values of S_t^q and T_t can be viewed as topology selectors of the conditional dependencies. For example, the dynamics of individual intentions can be viewed in two simple cases indexed by the value of S_t^q, as seen in Figure 4.2. This feature will be a significant factor in developing practical inference algorithms.

Figure 4.2: Local structures depicting the reconsideration of (S_t^q = ⊤) and the commitment to (S_t^q = ⊥) individual intentions. Dependence among random variables is context-specific with respect to the value of S_t^q.

4.2.2 Intent Abandonment

Individual intentions are abandoned when the proposition that the agent has committed to becomes true, or the agent determines that the proposition can never become true. Sometimes this is contingent upon the group's joint intention, so that an agent can choose to terminate its individual intention if the joint commitment is dropped. This allows more general dynamics than hierarchical HMMs and abstract HMMs, which only allow higher-level HMMs to transition after lower-level HMMs terminate.

Individual intention persistence is modelled by the random variables S_t^q. Specifically, intentions persist while S_t^q = ⊥, and are relinquished when S_t^q = ⊤. The agent's decision to abandon a particular intention is modelled by the mapping S_i(b, r), which specifies the probability that a particular intention is satisfied if the agent has particular beliefs.

S_i(b, r) = Pr(S_t^q = ⊤ | I_t^q = i, T_t = r, B_t^q = b)   (4.6)

The differences between S_i(b, ⊤) and S_i(b, ⊥) characterize the agent's commitment to the group: conforming agents manage their individual intentions in direct correspondence to the group's joint intention (S_i(b, ⊤) = 1) while independent agents disregard the state of the group's joint intention entirely (S_i(b, ⊤) = S_i(b, ⊥)).
A group's abandonment of its joint intention requires mutual belief that the desired proposition is satisfied or will never be satisfied. This condition is modelled through the boolean random variables T_t and the mapping T_j(b_{1:p}), which specifies the probability that a group drops its joint intention given a particular set of beliefs.

T_j(b_{1:p}) = Pr(T_t = ⊤ | J_t = j, B_t^{1:p} = b_{1:p})   (4.7)

4.3 Agent Desires

A notable exception to the explicitly modelled attributes is desire. Desire is the prioritization of options without consistency constraints, prior to commitment. The model does assume that agents have desires, but they are modelled implicitly as influences on the intent formation model. In effect, desires are integrated out during the model parameter estimation process.

4.4 Agent Action

An agent's actions can only be perceived through changes in the steady-state dynamics of the environment. These changes are interpreted using an action model that describes how actions affect the environment. For example, there are three fundamental actions in the RSL domain:

Dash is an action that applies a force to the agent in the direction of its current heading. The Dash action affects only the player's position and velocity, in accordance with the RSL equations of dynamics (Equations 2.17 and 2.18) with f = ma ∠ θ_p^q, where a is the power parameter of the Dash command.

Turn is an action that applies an instantaneous angular moment to the agent that modifies its heading, θ_p^q.

Kick is an action that applies a force to the ball (if the ball is sufficiently close to the kicker). It is sometimes useful to specialize the Kick action. An observer may consider the Shot action (a Kick action which gives the ball a trajectory into the goal), the Pass action (a Kick action which gives the ball a trajectory towards another teammate), and the SelfPass action (a Pass to itself, i.e., to a location where the kicker is able to intercept the ball before any other agent).
How an agent chooses an action (i.e., its policies) is parameterized by the mappings A_i, one for each possible individual intention.

A_i(b) = Pr(A_{t+1}^q | I_{t+1}^q = i, B_t^q = b)   (4.8)

4.5 Inference

The purpose of inference is to reason about elements of the world that cannot be observed, but are hypothesized in a particular model of the world (for example, the probabilistic model described in the previous sections of this chapter). The RSL agent will perform two inference tasks on this model, both of which are sub-classes of general Bayesian inference in dynamic belief networks:

1. Filtering sequential observations to infer marginal distributions over variables that are useful influencers of its decision process. For notational purposes, the set of modelled, hidden random variables can be referred to singularly,

Q_t =def {J_t, I_t^{1:p}, A_t^{1:p}, X_t, B_t^{1:p}, S_t^{1:p}, T_t}   (4.9)

Then forward inference, or filtering, is generally characterized by the calculation of the distribution Pr(Q_{t+τ} | Y_0:t). This task is described in Section 4.5.1.

2. Inferring the most likely distribution across the entire network (i.e., for each time slice) given an entire sequence of observations. Generally, this inference is characterized by the calculation of the distribution Pr(Q_t | Y_0:T = y_0:T). This task is described in Section 4.5.2, and is required to estimate the parameters of the model from evidence.

4.5.1 Forward Inference

Forward inference, also called filtering, can be defined as the iterative computation of the filtering distribution Pr(Q_{t+τ} | Y_0:t). The general prediction and updating steps of Bayesian filtering (described in Section 2.2.1) can be used to perform this calculation recursively:

1. The first step is the prediction of the initial filtering distribution, which is defined in the semantics of the single time-step belief network as the parameter π.
2. The agent will then compute a posterior for the current time step based on an observation, which influences the posterior as in Equation 4.10.

Pr(Q_t | Y_0:t) = Pr(Q_t | Y_0:t−1) Pr(Y_t | X_t) / Pr(Y_t | Y_0:t−1)   (4.10)

3. The agent will then predict a prior over the filtering distribution in the next time step, making use of the fact that future time steps are independent of past time steps given the current time step:

Pr(Q_{t+1} | Y_0:t) = ∫ Pr(Q_{t+1} | Q_t) Pr(Q_t | Y_0:t) dQ_t   (4.11)

4. Steps (2) and (3) are then repeated indefinitely as long as observations are made (i.e., ∀t ∈ [0, T]). Afterward, future predictions can be made by repeatedly applying Equation 4.11 with Y_0:t replaced with Y_0:T.

In general, it is important that the agent make use of both the primary structure (the conditional dependencies portrayed by the belief network) as well as the secondary structure (the relational dependencies contained within the conditional probability functions themselves) during inference. This becomes especially important during the prediction step, step (3), since intentions persist until they are satisfied. Therefore, for possible worlds where T_t or S_t^q are true, the corresponding joint intention or individual intention is simply copied to the new time step; for possible worlds where they are false, the new joint or individual intention depends on a reduced set of variables (see Figure 4.3).

Forward inference has space complexity of O(||Q|| × T), and time complexity of O(||Q||² × T), where ||Q|| is the size of the joint distribution's domain. For this model,

||Q|| = ||J|| · ||X|| · (||I|| · ||A||)^p · 2^{p(||B||+1)+1}   (4.12)

Figure 4.3: A depiction of the secondary structure concerning joint intentions, contained in the model of Figure 4.1, for a particular sequence. Effectively, while a joint intention persists, only a single node conditions T_t and I_t^{1:p}. Similar secondary structure exists for individual intentions.
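For a finite state space, the predict and update steps of this recursion reduce to a few lines. The sketch below uses dictionaries over a hypothetical two-state space rather than the full variable set Q_t:

```python
def filter_step(prior, transition, likelihood):
    """One predict/update cycle of discrete Bayesian filtering.

    prior:      dict {state: Pr(state | y_0:t-1)}
    transition: transition[s][s2] = Pr(s2 | s)
    likelihood: dict {state: Pr(y_t | state)}
    """
    # Predict: push the prior through the transition model.
    predicted = {}
    for s, p in prior.items():
        for s2, q in transition[s].items():
            predicted[s2] = predicted.get(s2, 0.0) + p * q
    # Update: weight by the observation likelihood and normalize.
    posterior = {s: predicted[s] * likelihood.get(s, 0.0) for s in predicted}
    z = sum(posterior.values())          # plays the role of Pr(y_t | y_0:t-1)
    return {s: p / z for s, p in posterior.items()}

prior = {"a": 0.5, "b": 0.5}
trans = {"a": {"a": 0.9, "b": 0.1}, "b": {"a": 0.2, "b": 0.8}}
post = filter_step(prior, trans, {"a": 0.7, "b": 0.1})
```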
4.5.2 Backward Inference

The other required inference is the computation of the backward distributions Pr(Y_{t+1:T} | Q_t), which combine with the product of forward inference to perform the second inference task:

Pr(Q_t | Y_0:T) = Pr(Q_t | Y_0:t) Pr(Y_{t+1:T} | Q_t) / Pr(Y_{t+1:T} | Y_0:t)   (4.13)

Given an observation sequence, these distributions can be computed with the following iterative computation:

1. The first step is the prediction of the backward distribution for the last time step of the observed sequence:

Pr(Y_T | Q_{T−1}) = ∫ Pr(Q_T | Q_{T−1}) Pr(Y_T | X_T) dQ_T   (4.14)

2. Given this base, the agent can then recurse back through the sequence using Equation 4.15 to compute the remaining backward distributions (i.e., ∀t ∈ [0, T−2]).

Pr(Y_{t+1:T} | Q_t) = ∫ Pr(Q_{t+1} | Q_t) Pr(Y_{t+1} | X_{t+1}) Pr(Y_{t+2:T} | Q_{t+1}) dQ_{t+1}   (4.15)

Backward inference has space complexity of O(||Q|| × T), and time complexity of O(||Q||² × T), in addition to the computation of the forward distributions. If a joint distribution over more than one time step is desired, the inferrer can simply skip the marginalization in Equation 4.15.

Pr(Q_t, Q_{t+1} | Y_0:T) = Pr(Q_t | Y_0:t) Pr(Q_{t+1} | Q_t) Pr(Y_{t+1} | Q_{t+1}) Pr(Y_{t+2:T} | Q_{t+1}) / Pr(Y_{t+1:T} | Y_0:t)   (4.16)

4.5.3 Approximate Inference

Closed-form Bayesian inference in the model of Figure 4.1 can be intractable, depending on the size of each variable's domain and the number of modelled agents. Therefore, a situated agent requires an approximate algorithm that can exploit local structures and achieve computational practicality, while retaining sufficient accuracy under uncertainty. In general, hybrid inference algorithms are developed by identifying a minimal marginalization of the model that allows exact inference.
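The backward recursion can be sketched the same way for a finite state space (again a hypothetical discrete stand-in for Q_t); here beta[t][s] plays the role of Pr(Y_{t+1:T} | Q_t = s):

```python
def backward_pass(states, transition, likelihoods):
    """Compute the backward distributions for a finite state space.

    transition:  transition[s][s2] = Pr(s2 | s)
    likelihoods: likelihoods[t][s] = Pr(y_t | s) for t = 0..T.
    Returns beta, where beta[t][s] = Pr(y_{t+1:T} | q_t = s).
    """
    T = len(likelihoods) - 1
    beta = [dict() for _ in range(T + 1)]
    beta[T] = {s: 1.0 for s in states}   # no future evidence after time T
    for t in range(T - 1, -1, -1):
        for s in states:
            # Marginalize the next state, as in the recursion above.
            beta[t][s] = sum(
                transition[s][s2] * likelihoods[t + 1][s2] * beta[t + 1][s2]
                for s2 in states)
    return beta

states = ["a", "b"]
trans = {"a": {"a": 0.9, "b": 0.1}, "b": {"a": 0.2, "b": 0.8}}
beta = backward_pass(states, trans,
                     [{"a": 1.0, "b": 1.0}, {"a": 0.7, "b": 0.1}])
```

Multiplying beta[t] pointwise with the forward (filtered) distribution and renormalizing yields the smoothed posterior for each time slice.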
If a suitable marginalization can be found, then an algorithm can be developed that performs a stochastic search in a subset of conditions, which is assumed to be sufficiently representative of all possible worlds [DdFMR00]. Such a marginalization can be devised by exploring the chain C_t =def {J_t, I_t^{1:p}, A_t^{1:p}, X_t}. The chain itself has nice inference properties both within each time step (where every node is conditioned upon no more than one other random variable) and between time steps (where transitional probability functions are sparse for most probable situations). Also, the chain d-separates the evidence node from the remainder of the joint distribution (Ψ_t = Q_t − C_t); in other words, Ψ_t is independent of the observations given the chain, as shown in Figure 4.4. This segregation can be exploited by the inference algorithm.

Because of the separation of Ψ_t from observations, the posterior step in forward inference need only be computed for the chain C_t:

Pr(C_t | Y_0:t) = Pr(C_t | Y_0:t−1) Pr(Y_t | X_t) / Pr(Y_t | Y_0:t−1)   (4.17)

Figure 4.4: The first three slices of the model in Figure 4.1, where the modelled variables have been separated into two sets (C_t =def {J_t, I_t^{1:p}, A_t^{1:p}, X_t} and Ψ_t =def {T_t, S_t^{1:p}, B_t^{1:p}}).

Likewise, the prediction step becomes:

Pr(C_{t+1} | Y_0:t) = ∫ Pr(C_t | Y_0:t) ∫ Pr(Ψ_t | C_t) Pr(C_{t+1} | C_t, Ψ_t) dΨ_t dC_t   (4.18)

This computation is no less complex than before the separation of Q_t into C_t and Ψ_t, but Equation 4.18 does present a promising opportunity for a stochastic search approximation. Instead of considering its entire domain, only select possibilities of the value of Ψ_t can be used as an approximation of the entire space. If these select possibilities are chosen appropriately (for example, by the processes described in Section ), the approximation is likely to produce reasonable inferences. Specifically, the agent can consider N_Ψ possible values of Ψ_t, producing the approximation in Equation 4.19.
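A sketch of the sampled prediction step this suggests: rather than integrating over every value of the separated variables Ψ_t, draw a handful of samples from Pr(Ψ_t | C_t) and average the resulting chain transitions, giving a Monte Carlo estimate of the inner integral of Equation 4.18. All names here are hypothetical:

```python
import random

def predict_with_sampled_psi(chain_post, psi_sampler, transition, n_samples,
                             rng=None):
    """Approximate prediction for the chain, with Psi_t sampled out.

    chain_post:  dict {c: Pr(C_t = c | y_0:t)}
    psi_sampler: psi_sampler(c, rng) -> one sample of Psi_t from Pr(Psi_t | C_t=c)
    transition:  transition(c, psi) -> dict {c2: Pr(C_t+1 = c2 | c, psi)}
    """
    rng = rng or random.Random(0)
    predicted = {}
    for c, p in chain_post.items():
        for _ in range(n_samples):
            psi = psi_sampler(c, rng)
            # Each sample contributes 1/n_samples of the transition mass.
            for c2, q in transition(c, psi).items():
                predicted[c2] = predicted.get(c2, 0.0) + p * q / n_samples
    return predicted

# Degenerate example: one chain value, a constant Psi, identity transition.
pred = predict_with_sampled_psi({"c0": 1.0}, lambda c, r: True,
                                lambda c, psi: {c: 1.0}, 5)
```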
Using this approximation, forward inference has space complexity O(||C||·T + N_Ψ·T) and time complexity O(||C|| × N_Ψ × T), where ||C|| = ||J|| · (||I|| · ||A||)^p · ||X||; this is a significant reduction (by a factor of N_Ψ · 2^{−p(||B||+1)−1}) as long as sufficient accuracy can be achieved with a conservative number of samples.

Pr(C_{t+1} | Y_0:t; N_Ψ) = ∫ Pr(C_t | Y_0:t) Σ_{i=0}^{N_Ψ} Pr(Ψ_t = ψ^{(i)} | C_t) Pr(C_{t+1} | C_t, Ψ_t = ψ^{(i)}) dC_t   (4.19)

Despite this improvement, the size of the domain of C_t is still troublesome for one reason: X_t is a continuous random variable in a large space. Therefore, the agent must also sample possible world states. If the agent considers N_X possible world states, and C_t^{−X} =def C_t − {X_t}, then this can be done using Equation 4.20.

Pr(C_{t+1} | Y_0:t; N_Ψ, N_X) = ∫ Pr(C_t^{−X} | Y_0:t) Σ_{j=0}^{N_X} Σ_{i=0}^{N_Ψ} Pr(Ψ_t = ψ^{(i)}, X_t = x^{(j)} | C_t^{−X}) Pr(C_{t+1} | C_t^{−X}, X_t = x^{(j)}, Ψ_t = ψ^{(i)}) dC_t^{−X}   (4.20)

Filtering Pr(C_{t+1} | Y_0:t; N_Ψ, N_X) requires O((||C^{−X}|| + N_X + N_Ψ) × T) space and O(||C^{−X}|| × N_X × N_Ψ × T) time, where ||C^{−X}|| = ||J|| · (||I|| · ||A||)^p.

Representing approximate distributions

Monte Carlo approximations are characterized by sampled location-weight pairs; for example, in the space of Ψ and X. In the RSL domain there are also informative bounds on portions of the domain that have non-zero probability. For example, the bounds on probable portions of an observed object's position are given by the maximum errors in relative range and bearing in Equation 5.10. Such bounds on the true distribution are also maintained in the representation of the model's distributions, and are used to distinguish between unlikely and impossible locations and to guide sampling (helping to overcome the problem of lost, improbable hypotheses). If these bounds are convex, then the forward inference procedure will retain that property. Because convex bounds can be maintained efficiently,² the perception bounds are represented as the minimal trapezoid that covers the true bounds, as shown in Figure 4.5.

²If n and m are the number of vertices in two polygons, assuming convexity reduces intersection complexity from O(nm) to O(n + m) [Sha78] and sample testing from O(n) to O(log₂ n).
With this approximation, the bounds of any filtered distribution will also be convex and will always over-estimate the true bound. In general, the reduced confidence caused by the over-estimate is outweighed by the benefits of the representation. The minimal amount of over-approximation means that it is extremely unlikely that a randomly selected location will lie outside the true distribution.

Figure 4.5: Observation distributions take the shape of a ring sector due to ambiguity in both range and bearing (bounded by the solid line). The ring sector is represented using a minimal trapezoid that covers the true area (bounded by the dashed line). The shaded area represents portions of the representation not included in the true bounds (but note that the curvature of the arcs has been exaggerated in this figure).

4.6 Parameter Estimation

A probabilistic model can be described with three components: a list of modelled random variables (for example, the semi-infinite set Q_t); a factorization of their joint probability distribution (which can be captured within a graphical representation like that in Figure 4.1); and conditional probability functions characterizing the relationships implied in the graphical representation (described and parameterized in the previous sections).
The intentional model described in this chapter has the following parameters, which describe the conditional probability functions, or secondary structure, of the model:

Θ = (π, J, I_j, A_i, β, S_i, T)   (4.21)

• The prior joint distribution over the modelled variables, π =def Pr(Q_0);

• The mapping J, representable by a (||J|| − 1) × 2^{p||B||} matrix, that parameterizes the group's commitment to particular joint intentions;

• The ||J|| mappings I_j, each representable by a (||I|| − 1) × 2^{||B||} matrix, that parameterize each agent's commitment to an individual intention;

Figure 4.6: An example of Bayesian filtering the world state. First, an approximation to the filtered distribution of agent positions, Pr(X_t^q | Y_0:t): the dark shaded polygon represents all possible positions, and circles represent the location (centres) and posterior value (radii) of 20 samples. Next, predicting the agent's next position, Pr(X_{t+1}^q | X_t^q, A_t^q, Y_0:t), is based on that prior (whose bounds are shown in dashed lines) and the force that is applied to the agent; samples diverge and re-normalize in accordance with the dynamics' stochasticity. The predicted distribution (dashed) is then validated through observation, modelled by the perception probability distribution (shaded). Finally, to determine the new filtered distribution, Pr(X_{t+1}^q | Y_0:t+1), the predicted and perception bounds are intersected and sample weights are multiplied by the perception probability at their location; the weights of four delta functions go to zero, and are re-sampled.

The prediction and update steps of the Bayesian filter are outlined using this representation in Figure 4.6, although, remember that no visual stimulus is perceived by the agent every third cycle.
• The ||I|| mappings A_i, each representable as a ||A|| × 2^{||B||} matrix, that parameterize each agent's action policy;

• The vector β, that parameterizes the correlation between the observed agents' perceptual acuity and the observer's;

• The ||I|| mappings S_i, each representable by a 2 × 2^{||B||} matrix, that parameterize the conditions under which individual intentions are satisfied; and

• The mapping T, representable by a ||J|| × 2^{p||B||} matrix, that parameterizes the conditions under which group intentions are satisfied.

Specifying these parameters accurately is extremely difficult, so it is preferable to learn them from observations of the actual system. For example, the agent can fit the model parameters to percept traces of particular agents that it has stored from previous experience. The remainder of this section explores this concept, and discusses the Expectation Maximization (EM) algorithm (introduced in Section 2.2.2) as a method of parameter fitting. The development starts with the familiar temporal segregation of the EM iteration equation into prior and transitional components:

Θ_{k+1} = argmax_Θ ∫ Pr(Q_0:T_i | Y_0:T_i = y_0:T_i; Θ_k) [ log Pr(Q_0; Θ) + Σ_{t=1}^{T_i} log Pr(Q_t | Q_{t−1}; Θ) + Σ_{t=0}^{T_i} log Pr(Y_t | Q_t) ] dQ_0:T_i   (4.22)

The motivation is that the set of parameters that maximizes Equation 4.22 can be found by considering the maximization of each summed term independently. For example, maximization of the prior parameter, π = Pr(Q_0; Θ), can be performed by considering the first term only, as is done in Equation 4.23.

π_{k+1} = argmax_π ∫ Pr(Q_0 | Y_0:T_i = y_0:T_i; Θ_k) [ log π + c(Q_0) ] dQ_0   (4.23)
        = argmax_π f(π)   (4.24)

where c is constant with respect to π. The appropriate parameter can be found using the Lagrangian multiplier technique. Making use of the fact that, for example, the prior is constrained by the axioms of probability, a Lagrangian constraint can be defined as
φ(π) = ∫ π(q) dq − 1 = 0   (4.25)

Then, the value of π that maximizes f − λφ will also maximize f, and will serve as the next iterate's prior parameter:

π_{k+1} = Pr(Q_0 | Y_0:T_i; Θ_k)   (4.26)

The remaining transitional terms of Equation 4.22 can be decomposed so that each term contains only one parameter to maximize. This is done for the intentional model in Equation 4.27.

Σ_{t=1}^{T_i} log Pr(Q_t | Q_{t−1}; Θ) = Σ_{t=1}^{T_i} [ log Pr(J_t | J_{t−1}, B_{t−1}^{1:p}, T_{t−1}; J)
    + log Pr(X_t | A_t^{1:p}, X_{t−1}) + log Pr(Y_t | X_t) + log Pr(T_t | J_t, B_t^{1:p}; T)
    + Σ_{q=1}^{p} ( log Pr(I_t^q | J_t, I_{t−1}^q, B_{t−1}^q, S_{t−1}^q; I)
    + log Pr(A_t^q | I_t^q, B_{t−1}^q; A) + log Pr(B_t^q | X_t; β) + log Pr(S_t^q | I_t^q, B_t^q, T_t; S) ) ]   (4.27)

Each term in the sum is associated with a single parameter. After Lagrangian maximization with respect to each parameter, the following parameter update equations are obtained.

J(b_{1:p}) ← [ Σ_{t=0}^{T−1} Pr(J_{t+1}, T_t = ⊤, B_t^{1:p} = b_{1:p} | Y_0:T) ] / [ Σ_{t=0}^{T−1} Pr(T_t = ⊤, B_t^{1:p} = b_{1:p} | Y_0:T) ]   (4.28)

I_j(b) ← [ Σ_{t=0}^{T−1} Σ_{q=1}^{p} Pr(I_{t+1}^q, J_{t+1} = j, S_t^q = ⊤, B_t^q = b | Y_0:T) ] / [ Σ_{t=0}^{T−1} Σ_{q=1}^{p} Pr(J_{t+1} = j, S_t^q = ⊤, B_t^q = b | Y_0:T) ]   (4.29)

A_i(b) ← [ Σ_{t=0}^{T−1} Σ_{q=1}^{p} Pr(A_{t+1}^q, I_{t+1}^q = i, B_t^q = b | Y_0:T) ] / [ Σ_{t=0}^{T−1} Σ_{q=1}^{p} Pr(I_{t+1}^q = i, B_t^q = b | Y_0:T) ]   (4.30)

β ← Σ_{t=0}^{T} Σ_{q=1}^{p} Pr(B_t^q(i) = X_t(i) | Y_0:T)   (4.31)

S_i(b, r) ← [ Σ_{t=0}^{T} Σ_{q=1}^{p} Pr(S_t^q = ⊤, I_t^q = i, B_t^q = b, T_t = r | Y_0:T) ] / [ Σ_{t=0}^{T} Σ_{q=1}^{p} Pr(I_t^q = i, B_t^q = b, T_t = r | Y_0:T) ]   (4.32)

T_j(b_{1:p}) ← [ Σ_{t=0}^{T} Pr(T_t = ⊤, J_t = j, B_t^{1:p} = b_{1:p} | Y_0:T) ] / [ Σ_{t=0}^{T} Pr(J_t = j, B_t^{1:p} = b_{1:p} | Y_0:T) ]   (4.33)

Each of these computations is discussed in Section 4.5. The resulting EM algorithm for this model is then:

1. to choose initial parameters,³ and then
2. to repeatedly update the parameters (using each sequence in the data set with Equations 4.26 and 4.28-4.33) until the parameters have converged.
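Most of these update rules share the same expected-count-ratio shape: a sum of smoothed posteriors with the child variable free, divided by the same sum with the child marginalized out. A generic sketch, with a hypothetical data layout for the posteriors:

```python
def em_update_table(posteriors):
    """One EM re-estimation of a conditional probability table from
    smoothed posteriors, in count-ratio form.

    posteriors: iterable of (child_value, parent_values, probability),
                where probability = Pr(child, parents | Y_0:T) at some t.
    Returns a dict mapping (parent_values, child_value) to the updated
    conditional probability.
    """
    num, den = {}, {}
    for child, parents, p in posteriors:
        # Numerator: expected count of each (parents, child) pair.
        num[(parents, child)] = num.get((parents, child), 0.0) + p
        # Denominator: expected count of the parent configuration alone.
        den[parents] = den.get(parents, 0.0) + p
    return {key: n / den[key[0]] for key, n in num.items()}

# Toy data: one parent configuration "b", two child values "x" and "y".
table = em_update_table([("x", "b", 0.3), ("y", "b", 0.1), ("x", "b", 0.2)])
```

Note that the updated entries for a fixed parent configuration sum to one, as the Lagrangian normalization constraint requires.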
4.7 How to Use the Model

The model presented in this chapter can be used by any observer who wishes to infer, from a trace of percepts, the past, present, or future properties of the environment or of other, observed agents. For example, the model can be used by a security system, a video annotator, or an agent interacting with the observed agents. Depending on the nature of its use, different inference algorithms, including varying approximations, may be used. These algorithms possess different computational requirements and provide different levels of accuracy.

However, the foremost concern with regards to accuracy is the model itself: whether the assumptions of intentionality within the observed agents are valid, and whether the model parameters themselves are applicable to the observed agents. Parameterization provides an infinite number of models describing different groups of intentional agents. The correct parameters can be guessed at, or computed based on experience in observing the agents of interest, as described in this chapter.

The model can then be used to infer probabilistic distributions over a group of agents' actions, beliefs, individual intentions, and joint intentions. These inferred distributions, which represent the observer's beliefs about the state of the world, can be used in any number of ways, from a historic log of events that can be used to learn global properties of the system, to direct influences on the observer's action. As previously stated, these applications are not fully explored in this thesis and are left for future work.

³The most critical consideration of the EM algorithm is the initial parameters upon which to operate. In complicated environments, the likelihood surface is inherently complex with many local extrema. This means that a hill-climbing algorithm like EM is vulnerable to learning globally sub-optimal parameters.
In the next chapter, however, the model will be applied to an RSL agent where these distributions constitute an internal representation of the external world. Chapter 5 will also detail how particular elements of the representation might be used as decision influences, but stops short of incorporating the entire internal representation, including the assumed intentional properties of observed agents.

Chapter 5

Application to the RoboCup Simulation League

Chapters 3 and 4 have introduced, in general terms, a novel agent architecture and a probabilistic model of a multiple-agent environment, respectively. However, to understand these general concepts further, and to explore the properties of their operation, the theories must be implemented and tested. This chapter will detail the development of an agent for the RoboCup Simulation League (RSL) domain that is designed using the constraint-based architecture of Chapter 3, with the model of Chapter 4 providing internal interpretations of its percepts.

The RSL system, shown partially in Figure 5.1, is composed of multiple, distinct software programs that interact through a lossy communication channel: the Agent programs that implement the agent controllers for 22 agents, and the SoccerServer program that implements the simulated environment, including the ball and referee. The SoccerServer was described in Section 2.4. The communication channel can be modelled as a CN module which introduces an exponentially distributed transmission delay that is independent of time [Bol93]. Specifically, for each input event trace, e(t), a transduction produces the event trace e(t − Δ), where Δ is a random variable distributed according to Equation 5.1. Although an RSL agent has no physical body, there is still a notion of sensors and actuators in a virtual body connecting a controller to the environment. Further, the simulated environment is demanding and adequately models failures and noise in sensors and actuators.
Therefore, all of the agency issues discussed in the preceding chapters apply. The following sections will detail each aspect of the agent.

(5.1)

Figure 5.1: A Constraint Nets model of the RoboCup Simulation League system with a single agent.

5.1 Adaptive Resource Scheduling

Any agent-environment system can be characterized by a time-dependent (usually monotonically increasing) utility function that describes the trade-off between deliberation cost and decision quality. In the RSL, there is no deliberation cost until the SoccerServer simulates the current time step, where the deliberation cost increases to some fixed positive value. Successful situated agents must use this function to trade off decision quality with cost [Sim82, RW91].

The Schedule module, whose internal structure is shown in Figure 5.2, is responsible for providing prompt, reliable action in a dynamic and adapting environment. The scheduler is a meta-reasoner that decides how long to deliberate before commanding an action, based on the Dynamic Action Index algorithm [MM02]. The Dynamic Action Index algorithm uses an interval timer, Internal Clock, as an internal representation of the SoccerServer's cycle and actively tries to synchronize with it. In general, the internal representation will be temporally offset from the SoccerServer cycle by some amount, τ, and communication between agent and SoccerServer will be delayed by some amount, Δ. Therefore the optimal time to act relative to the internal clock is t* − τ − Δ, where t* is the optimal external time to command action (immediately before the SoccerServer simulates the environment's dynamics).
The algorithm schedules action commands at this optimal internal time, which is estimated using separate estimates of τ and t*, which are assumed to encompass the communication transit time:

t* − τ − Δ ≈ t* − τ̂   (5.2)

Figure 5.2: The Schedule Constraint Net module.

5.1.1 Estimating the Internal Cycle Offset

The internal cycle offset is estimated in the Estimate Offset module by modelling noise in proprioception times. Although the SoccerServer specification states that proprioception occurs at the beginning of every cycle, actual perception times fluctuate due to varying network reliability and SoccerServer timer errors. As seen in Figure 5.3, these fluctuations can be quite severe, making it difficult to infer the SoccerServer cycle from perception times alone.

As opposed to external synchronization (which assumes no transmission delay), our model is based on the conservative assumption that a quick transmission is likely to occur within a small number of simulation cycles [Cri89, Bol93]. If we assume that at least one out of N messages will be delivered promptly, then a good estimate of the cycle offset is given by Equation 5.3, where t_prop^(i) is the internal perception time of the i-th previous proprioception.

τ̂ = min ⋃_{i=0}^{N−1} { t_prop^(i) }   (5.3)

Although this estimate is necessarily an over-estimate (because it includes the transit time of the quickest message), the bias can be neglected by considering the necessary communication delays while estimating the optimal time to command action. Further, the estimator is robust to intermittent communication delays, as shown in Figure 5.3. Sustained delays are not characteristic of network noise [Bol93].
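Equation 5.3 amounts to a running minimum over a sliding window of the N most recent proprioception times. A minimal sketch, assuming times are already reduced modulo the 100 ms cycle and expressed in milliseconds (the class and parameter names are hypothetical):

```python
from collections import deque

class OffsetEstimator:
    """Estimate the internal cycle offset as the minimum observed
    proprioception time over the last n messages, assuming at least one
    of every n messages arrives with negligible delay."""

    def __init__(self, n=10):
        self.window = deque(maxlen=n)   # drops the oldest time automatically

    def observe(self, t_prop):
        """Record one sense_body arrival time and return the estimate."""
        self.window.append(t_prop)
        return min(self.window)

est = OffsetEstimator(n=3)
for t in (18, 12, 25, 30):   # one prompt delivery (12 ms) among delayed ones
    offset = est.observe(t)
```

Because the window is bounded, a single prompt delivery keeps the estimate tight, while a sustained shift in arrival times (e.g., clock drift) ages out of the window after n messages.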
Figure 5.3: Typical variation in proprioception times (in gray) recorded by measuring agent sense_body event frequency during a game. Also shown is the resulting estimate of the internal cycle offset, in bold.

5.1.2 Estimating the Optimal Time to Command Action

Because both network delays and processor strain vary over time, so too does the optimal time to command action. Therefore, it is beneficial to adapt the estimate of the optimal command time during operation. This is performed by the Update t* belief module.

Although the optimal time cannot be directly observed, the validity of the current estimate can be inferred from action successes and failures. The SoccerServer effects action based on the external time of the action request (according to the rules in Table 2.1); this principle can be reversed to infer the external time from action successes and failures, providing a coarse validity observation of the current estimate, as is done in Table 5.1.

Table 5.1: Inferring the validity of the optimal command time estimate assuming known cycle offset. Because inference operates on the timescale of cycles (where communication delay can be deemed negligible) and does not depend on proprioception, its reliance upon network responsiveness is minimal.

    Action Confirmation    Inference
    Effected on time       t̂* < t*
    Effected late          t̂* > t*
    Action failed          t̂* ≪ t*

Once the validity of the current estimate is inferred, the estimate can be modified accordingly. However, there are still two important issues:

1. The inference obtained from any individual action detection can be adversely affected by spurious communication delays.
This side effect is avoided by maintaining a probabilistic distribution over inference conclusions, based on a stochastic model of spurious communication.

2. Once a conclusion is reached, how much the estimate should be changed is unknown. For example, if the estimate is less than optimal then it can be increased (thereby increasing time for sensing and deliberation) while still commanding action on time. However, because the optimal time is not observable, there is no immediate indication whether a particular increase moves toward the optimal or past it.

As seen in Figure 5.4, appropriate parameterization of these solutions can produce desirable results.

Figure 5.4: Adapting the estimate of the optimal time to act during operation using the parameters used by the UBCDynamo02 team during competition. The faint line represents the optimal time to command action, and the darker ones represent dynamic estimates with different prior values.

5.1.3 Confirming Commanded Action

The validity of the optimal command time estimate is inferred from the outcome of commanded action, which is determined in the Confirm Pending Actions module. When an action is commanded, the module adds it to a chronological list of pending action commands, P. When the agent can confirm that the action was effected or has failed, the command descriptor is moved to a list of confirmed action commands, C. If the agent assumes that action commands are received by the SoccerServer in the same order that they were sent,1 then confirmation follows from a specific causal relationship between time and the count of effected actions, n_e, which is provided as part of the proprioceptive stimulus.
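The count-based bookkeeping just described can be sketched as follows. This is a hypothetical sketch: the function name, the 1-based indexing of pending commands, and the list handling are assumptions, not the thesis code.

```python
def confirm_pending(pending, confirmed_count, effected_count):
    """Confirm pending actions in chronological order: the ith pending
    command is confirmed once the server's effected-action count n_e
    covers it and every command that precedes it (n_e >= ||C|| + i)."""
    confirmed = []
    # Commands confirm strictly in order, so a simple front-pop suffices.
    while pending and effected_count >= confirmed_count + 1:
        confirmed.append(pending.pop(0))
        confirmed_count += 1
    return confirmed, pending

c, p = confirm_pending(["dash", "turn", "kick"],
                       confirmed_count=1, effected_count=3)
print(c, p)  # first two pending commands confirmed, "kick" still pending
```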
The ith pending action is confirmed if the count of effected actions is not less than the number of action commands preceding it:

n_e \ge \|C\| + i \longrightarrow P[i] \text{ confirmed}    (5.4)

Once an action command has been confirmed, its status (whether it was effected on time, late, or not at all) is determined. If a single cycle elapses before confirming a command, then that action was effected on time. However, if more than one cycle elapses, it can be difficult to distinguish between late and failed actions. The distinction can only be made by imposing a bound on communication delay, \Delta_{max}, above which unaccounted action requests are assumed lost.

1This assumption is not guaranteed by the UDP/IP transport layer, but it is likely since subsequent action commands are sent in approximately 100 millisecond intervals.

Table 5.2: Inferring the status of the ith pending action command from the count of effected actions. \Delta_{max} is expressed in cycles.

Effected action count | Cycles elapsed | Action status
n_e \ge i | 1 | Effected on time
n_e \ge i | [2, 1 + 2\lceil\Delta_{max}\rceil] | Effected late
n_e < i | [1 + 2\lceil\Delta_{max}\rceil, \infty) | Failed

However, ambiguities arise if a new action is commanded before a previous action has been confirmed. This often occurs while waiting to distinguish between late and failed action and, in such circumstances, all hypotheses must be maintained until the ambiguity can be resolved or can be proved to be undecidable.2 The basis of disambiguation is the strict rules that govern how an action command is handled by SoccerServer when it is received (see Table 2.1). In particular, the agent can exploit the fact that an action cannot be effected on time immediately after a late action to derive the disambiguation rules in Equations 5.5 to 5.7.
P[i] \text{ effected on time} \longrightarrow \forall j < i \; (P[j] \text{ effected on time})    (5.5)

P[i] \text{ effected late} \vee n_e = \|C \cup P\| \longrightarrow \forall j > i \; (P[j] \text{ effected late})    (5.6)

i = \|P\| \longrightarrow \forall j < i \; (P[j] \text{ undecidable}), \; \|C \cup P\| - n_e \text{ of which are lost}    (5.7)

5.2 Interpreting Observations

Within the agent architecture described in Chapter 3, the Filter module implements a Bayesian filter based on a particular, probabilistic model of the environment which, for the RSL agent, will be the intentional model introduced in Chapter 4. The model, combined with Filter, uses percepts as evidence to infer distributions over a collection of random variables (including intentional characteristics of observed agents) that constitute the observer's internal representation of its surroundings. The complete model is specified by a number of domain-dependent parameters:

1. The contents of the world state vector.
2. A prior probability distribution over the modelled random variables.
3. A stochastic model of perception.
4. A stochastic model of state transition (i.e., the environment dynamics).
5. The intentional extension of the model is domain-independent and fully specified in Chapter 4. However, there are situation-dependent parameters: the number of observed agents, individual intentions, and joint intentions considered.

The first four points of this specification are given for the RSL domain in the following sections, starting with Section 5.2.1, which suggests initial distributions of world states. Given an initial description of the world state, the effect of time (in Section 5.2.2) and observations (in Section 5.2.3) on the world state are then specified.

2Ambiguities are only undecidable when a failed action cannot be assigned to a specific pending command. This does not matter for the purpose of t^* estimation because failure is certain, but this does affect model prediction and activity analysis since the agent does not know which action failed.
The situation-dependent values, however, are parameterized using cross-validation for particular situations within this thesis. Future work may include having the agent manage these values online, for example.

5.2.1 Initial World States

The first step in modelling the system is to specify its state vector — a set of variables that are able to distinguish between important situations in the world — along with probable initial states. Consider, for example, the RSL domain world state given in Section 2.3. In the RSL domain, there are a number of constraints that can be used to construct a prior probability distribution over the initial values of this world state.

• The ball is exactly at the centre of the pitch.
• Opponents are on the opposite half of the pitch from the agent. Although it is not likely that initial positions would be distributed uniformly over this domain, it is safest to assume so given no information about the opponent team.
• Teammates are on the same half of the pitch as the agent. In this case, however, it is more likely to assume that the agent knows the initial configuration of the team.
• All objects are stationary at their initial locations.

5.2.2 World State Transitions

The next step in modelling the system is to specify its dynamics: how the world state vector changes through both action and inaction. In the RSL, all objects independently adhere to the same parameterized dynamic equations described in Section 2.4.1. These equations can be generalized into the stochastic state transition model below.

\Pr(X^o_{t+1}, V^o_{t+1} \mid Y_{0:t}) = \int_{x^o} \int_{v^o} \int_{f^o_{t+1}} \Pr(X^o_t, V^o_t \mid Y_{0:t}) \Pr(X^o_{t+1} \mid X^o_t, V^o_t) \times \Pr(F^o_{t+1} \mid X^o_{t+1}, X^o_t, V^o_t, Y_{0:t}) \Pr(V^o_{t+1} \mid V^o_t, F^o_{t+1})    (5.8)

Equation 5.8 is only non-zero if the equations of dynamics (Equations 2.17 and 2.18) are obeyed. When they are, the transition can be described using Equation 5.9.

\Pr(X^o_{t+1} = x_{t+1}, V^o_{t+1} = v_{t+1} \mid Y_{0:t}) = \int_{F^o_{t+1}} \Pr\!\left(X^o_t = x_{t+1} - \tfrac{v_{t+1} - f}{1 - \mu},\; V^o_t = \tfrac{v_{t+1} - f}{1 - \mu} \;\Big|\; Y_{0:t}\right) \times \Pr\!\left(F^o_{t+1} = f \;\Big|\; X^o_{t+1} = x_{t+1},\; X^o_t = x_{t+1} - \tfrac{v_{t+1} - f}{1 - \mu},\; V^o_t = \tfrac{v_{t+1} - f}{1 - \mu},\; Y_{0:t}\right) df    (5.9)

However, specifying the impulse model, \Pr(F^o_{t+1} \mid X^o_{t+1}, X^o_t, V^o_t, Y_{0:t}), is difficult as most forces are exogenous. Any model falls between two extreme strategies:

1. Optimistically consider any action that possibly affects the object. This strategy is guaranteed to maintain consistent beliefs, though confidence can quickly decay.
2. Pessimistically consider no action effect on the object. This strategy can result in inconsistent beliefs when the object is acted on, so a consistency check is needed to discard inconsistent hypotheses. However, when the assumption holds this strategy maintains higher confidence and accuracy.

The ideal model follows some intermediate strategy. For example, the agent should be more discriminatory about its own actions (which can be confirmed using the Schedule module) and develop beliefs about likely exogenous actions. Such a model of exogenous actions is developed in Chapter 4.

5.2.3 World State Perception

Another step in modelling the system is to specify the effect of observations on the believed distribution over world states. When an observation is made, it is considered using a stochastic model of perception based on Equations 2.19-2.21. Because the ceiling operator compresses information, the nature of this mapping is many to one: each point in percept space maps to a region in the world. If (as elaborated in Section 7.1) the ceiling operator is modelled as a stochastic function such that \lceil a \rceil - a is a random variable uniformly distributed in the range [0,1), then percept interpretation is a stochastic function from percepts into distributions over world states. Specifically, the probability of an object being at a particular location x after being observed with percept y^x from location x^a is only non-zero in the region defined by Equation 5.10, when it is defined by Equation 5.11.
e^{\frac{20\|y\| - 1}{20}} \le \|x - x^a\| \le e^{\frac{20\|y\| + 1}{20}}    (5.10)

\Pr(X^O = x \mid X^a = x^a, Y^{x^O} = y) \propto \frac{1}{\|x - x^a\|} \;\text{ within the region of Equation 5.10, and is zero elsewhere}    (5.11)

The above equation specifies the uncertainties involved with observing an object. Perhaps the most significant uncertainty is introduced by the observer's own unknown position: since position and velocity observations are relative, uncertainty in its own state compounds the observer's perceptual ambiguity. For this reason, self-localization becomes an important special case of perception, and will be described in the following section. Finally, the critical issue of identification, which has been largely overlooked, is addressed in the Object identification subsection below.

Self-localization

Self-localization is the procedure of determining the agent's own position, and is a critical factor in reconstructing beliefs from agent-centred observations. In the RSL, an agent localizes itself by observing landmarks (see Section 2.3.3). Although each landmark can be filtered individually, pose recovery using multiple landmarks is a characteristically different problem. Figure 5.5 shows the individual distribution bounds for each observed landmark for a particular agent pose, but each observation is not independent because they are all observations of a single object: the world relative to the agent (or equivalently, the agent relative to the world).

Figure 5.5: A collection of landmarks perceived in a single time step. Black disks represent each landmark's true location, x^{l_i}, squares represent each landmark's perceived location, y^{x^{l_i}}, and shaded regions represent non-zero portions of the posteriors, \Pr(X^{l_i} \mid Y^{x^{l_i}}, X^a).

Probabilistic self-localization from multiple landmarks is the determination of the posterior distribution over agent poses, given the observations of a number of landmarks, and their known location.
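This combination of independent landmark observations can be sketched numerically. The sketch is a toy stand-in: the grid, the range-only likelihood, and the 0.5 m tolerance are all assumptions; in the agent, the per-landmark likelihood is the perception model of Equation 5.11.

```python
import math

def localize(grid_xy, observations, likelihood):
    """Posterior over candidate agent poses as the product of
    independent per-landmark likelihoods, renormalized."""
    post = [1.0] * len(grid_xy)
    for lm, percept in observations:
        post = [p * likelihood(pose, lm, percept)
                for p, pose in zip(post, grid_xy)]
    total = sum(post)
    return [p / total for p in post] if total > 0 else post

# Toy stand-in likelihood: a pose is consistent when the observed
# range matches the true range within 0.5 m, and impossible otherwise.
def lik(pose, lm, observed_range):
    d = math.hypot(lm[0] - pose[0], lm[1] - pose[1])
    return 1.0 if abs(d - observed_range) < 0.5 else 0.0

grid = [(x, 0.0) for x in range(6)]           # candidate poses on a line
obs = [((0.0, 0.0), 2.0), ((5.0, 0.0), 3.0)]  # (landmark, observed range)
post = localize(grid, obs, lik)
print(post)  # only the pose at x = 2 is consistent with both landmarks
```

As in Figure 5.6, each individual observation leaves a broad feasible region, while the product of all observations collapses the posterior onto a small set of poses.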
For L landmarks,

\Pr(X^a \mid Y^{x^{l_1}}, \ldots, Y^{x^{l_L}}, X^{l_{1:L}}) = \Pr(X^a \mid X^{l_{1:L}}) \prod_{k=1}^{L} \frac{\Pr(Y^{x^{l_k}} \mid X^a, X^{l_{1:L}})}{\Pr(Y^{x^{l_k}} \mid X^{l_{1:L}})}    (5.12)

Given the specific model of perception for each landmark (Equation 5.11), the agent can rely on the fact that each landmark percept only depends on the relative position of the landmark with respect to the agent. In other words, Y^{x^{l_i}} \perp Y^{x^{l_j}} \mid X^a, X^{l_i} for every landmark l_j other than l_i. This independence simplifies localization:

\Pr(X^a \mid Y^{x^{l_1}}, \ldots, Y^{x^{l_L}}, X^{l_{1:L}}) = \Pr(X^a \mid X^{l_{1:L}}) \prod_{k=1}^{L} \frac{\Pr(Y^{x^{l_k}} \mid X^a, X^{l_k})}{\Pr(Y^{x^{l_k}} \mid X^{l_k})}    (5.13)

Essentially, the probability of the agent being at location x^a is equal to the probability of observing each landmark at its true location from x^a. This is computed for all prior possibilities to recover the distribution of agent positions. Although localization from a single landmark can be fairly uninformative, utilization of multiple observations in this manner makes the agent significantly more confident in its beliefs, as shown in Figure 5.6.

Figure 5.6: An example of using multiple observations to reduce perception ambiguity. The polygons represent the extent of a probabilistic distribution over possibilities obtained from a single percept (the distribution's value is known, Equation 5.11, but not shown). Considering all landmarks, only a small portion of the domain remains with non-zero probability.

Object identification

Often, an observation can not be attributed to any single object unequivocally. In the RSL domain, for example, identification reliability degrades with distance from the observer. However, correct association of percepts to particular objects is necessary for accurate modelling in this system. Under ambiguity, hypothesis postulation is based on the noise model governing perception. For the RSL, there are three relevant features of perception:

1. each object can be observed at most once per cycle,
2. each observation is of one object, and
3.
an observation constrains the relative location of the object in accordance with Equation 5.10.

These features can be exploited, engendering the rules for generating hypotheses listed in Table 5.3 and used in the example in Figure 5.7. As more observations are made, the probability of each hypothesis is re-evaluated and hypotheses that become sufficiently improbable are pruned. In practise, very few ambiguities are perpetuated for long periods of time.

Table 5.3: Rules used for labelling observations when identification is ambiguous.

if \int \Pr(X^O = x \mid X^a, Y^x = y) \Pr(X^O = x)\, dx = 0 then
    Do not match object O with observation y
else if \forall Q \ne O \left( \int \Pr(X^Q = x \mid X^a, Y^x = y) \Pr(X^Q = x)\, dx = 0 \right) then
    Match object O with observation y
else
    Consider object O as a possible match with observation y

Figure 5.7: Possible observation associations that can be made when object identification is ambiguous. The left figure shows prior bounds (shaded polygons) of five object locations along with position bounds (unshaded polygons) obtained from five separate observations. The centre and right figures illustrate hypothetical posterior bounds obtained from all possible associations made using the rules of Table 5.3.

5.3 Deliberation

The highest level of deliberation is planning: the consideration, evocation, and revision of the observer's own intentions. Based on the intentional stance view of agency, this process can be modelled as a sequential decision process from believed random variables to intentions. Specifically, the observer's policy maps specific states of its internal representation directly into an intention. The policy can address each element of the internal representation's cross product separately, or map larger classes into single intentions. Several intentions were designed for the RSL domain and are available for the agent to choose from, as listed below. However, a rather
simple policy was used (see Figure 5.8) that only makes use of the world state vector. Though intentional characteristics are being inferred, their use in the observer's policy is left for future work.

NoIntention The agent has no commitment to pursue any particular goal.

ScanForInformation The agent commits to gathering information about the world. The commitment is revised when information of interest is acquired, or it is determined impossible to acquire that information (for example, by a change in the status of the game).

KickInBall The agent commits to performing the free kick awarded after an opponent foul. The intention is abandoned after the period allowed for a free kick has elapsed.

KickInBallSupport The agent commits to supporting the player who is performing a free kick. The intention is abandoned when the free kick is completed.

RunOnToBall The agent commits to intercepting the ball. The intention is abandoned when the agent no longer believes that it can intercept the ball.

In general, each intention corresponds to a list of constraint conjunctions that collectively specify the intent. For example, Sections 5.3.1 to 5.3.11 describe a number of constraints that were tested and used to compose general soccer behaviour. They are summarized in Table 5.5. Therefore, intention satisfaction takes the form of satisfying constraints.

The agent's constraint solver is implemented by the Means-ends reasoning module, which performs gradient ascent in the space of actions (described in Table 5.4) to find the action that would most likely achieve a world state that satisfies the current constraint.

Figure 5.8: An example of the decision process used by the goalie for the UBCDynamo02 team in competition [Mon02]. The tree tests conditions such as game_mode in preparation_modes, ball not seen in 1s, game_mode = my_goal_kick, game_mode = my_free_kick AND ball is caught, and whether game_mode is unrestricted and the ball is interceptable, selecting among ScanForInformation, KickInBall, RunOnToBall, and NoIntention.
In this case, the agent's beliefs were mapped to logical predicates and the decision process can be represented as a decision tree. Solid lines and dashed lines indicate true and false conclusions respectively.

In the case of constraint conjunctions, constraints are handled by strict priority.

In summary, deliberation within the Planner module commits to an intention which is described by a list of constraint conjunctions. Means-ends reasoning then produces a sequence of actions that moves the system into a solution state, satisfying the intention.

5.3.1 The Scan Constraint

The Scan constraint is designed to maximize the amount of information obtained from the agent's current position. In this case, new information is defined as elements of the domain that the agent cannot see; the amount of new information obtained through action is calculated using Equation 5.14.

Table 5.4: Description of RSL actions used by the Means-ends reasoning module to approach constraint solutions. p_max, m_max, and k_max are all SoccerServer parameters indicating maximum agent power, moment, and kicking power respectively [CFH+02].

Action | Expected result | Restrictions
dash p | accelerates the agent by power p along its heading | p \in [0, p_{max}]
turn m | rotates the body heading \theta^a by moment m, attenuated by speed | m \in [-m_{max}, m_{max}]
turn_head m | rotates the neck angle \phi^a by moment m | m \in [-m^{neck}_{max}, m^{neck}_{max}]
kick p | accelerates the ball by power p | \|p\| \in [0, k_{max}]

Table 5.5: Summary of constraints. Refer to the noted section for more detail.

Constraint Name | Section | Relations to satisfy
Scan | 5.3.1 | Maximize new information with constant x^a.
LookAround | 5.3.2 | Maximize new information with constant x^a, \theta^a.
LookAt | 5.3.3 | Maximize new information with constant x^a, \theta^a and object O within the field of view.
BeforeKickoff | 5.3.4 | Maximize new information with constant x^a, \theta^a and x^a equal to the player's initial position.
GoTo | 5.3.5 | x^a = desired location.
InterceptBall | 5.3.6 | \|x^a - x^b\| \le kickable distance.
PlayerPosition | 5.3.7 | Attain free kick position, then acquire maximum information (prioritizing ball information).
GoaliePosition | 5.3.8 | Reach scorable ball trajectories before the ball; see Section 5.3.8.
DefensivePosition | 5.3.9 | See Section 5.3.9.
MiddlePosition | 5.3.10 | Mark or evade in the middle of the field.
OffensivePosition | 5.3.11 | Mark or evade at the team's offside line.
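The strict-priority handling of constraint conjunctions can be sketched as follows. This is a hypothetical, greedy one-step stand-in for the module's gradient ascent: the class, the `utility` callback, and the action names are all assumptions.

```python
class Constraint:
    """Minimal constraint stub: a name and a satisfied/unsatisfied flag."""
    def __init__(self, name, done):
        self.name, self.done = name, done

    def satisfied(self):
        return self.done

def solve(constraints, actions, utility):
    """Walk the conjunction in strict priority order and return the
    action with the best one-step utility for the highest-priority
    unsatisfied constraint; None when every constraint is satisfied."""
    for c in constraints:  # list order encodes priority
        if c.satisfied():
            continue
        return max(actions, key=lambda a: utility(c, a))
    return None

best = solve([Constraint("LookAt", True), Constraint("GoTo", False)],
             ["turn", "dash"],
             lambda c, a: {"turn": 0.2, "dash": 0.9}[a])
print(best)  # → dash
```

Because satisfied constraints are skipped, a lower-priority constraint only influences action selection once everything above it holds, which is the "strict priority" behaviour described in the text.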
PlayerPosition 5.3.7 Attain free kick position, then acquire maximum information (prioritizing ball information). GoaliePosition 5.3.8 TiT« = T%T?, see Section 5.3.8. DefensivePosition 5.3.9 MiddlePosition 5.3.10 Mark or evade in the middle of the field. OffensivePosition 5.3.11 Mark or evade at the team's offside line. 74 New information oc + ^ ^ + 1 - 0 * 7T 2 else. + .€ field of view (5.14) The Means-ends reasoning module attempts to maximize this information by rotating its head or body (modifying 6a and </>a respectively). However, be-cause the sensor-limited agent cannot view the entire environment, the constraint will never be solved. It is useful, nonetheless, because the Means-ends reasoning module will produce a sequence of useful actions in an attempt to solve it. 5.3.2 The LookAround Constraint The LookAround constraint is designed to maximize the amount of information obtained from the agent's current position and heading. Specifically, it is the con-junction of the Scan constraint (Section 5.3.1) with Equation 5.15, the latter of which has priority. 5.3.3 The LookAt Constraint The LookAt constraint is designed to maximize the amount of information obtained from the agent's current position and heading, while maintaining a particular object, O, within its field of view. Specifically, it is the conjunction of the LookAround constraint (Section 5.3.2) with Equation 5.16, the latter of which has priority. 5.3.4 The BeforeKickoff Constraint The BeforeKickoff constraint is designed to position the agent and initialize its memory. The constraint combines Scan (Section 5.3.1) to maximize the amount of position on the pitch, XQ. Positioning is prioritized over information retrieval. (5.15) Z(x° - xa) - 9a - <f)a € field of view (5.16) information in the initial memory with Equation 5.17 to attain the player's initial (5.17) 75 5.3.5 The GoTo Constraint The GoTo constraint is designed to move the agent to a desired position. 
The Constraint Optimization module attempts to achieve this constraint by applying appropriate forces to the agent's body in accordance with the equations of dynamics (Equations 2.17 and 2.18). It can calculate the force required to move the agent to the desired position, but there are a number of restrictions upon what force the agent can actually effect, all of which are crucial to optimal motion design. 1. There is a maximum bound on the force magnitude, which depends on the agent's current stamina level. 2. There is a maximum bound on the velocity that the agent is able to achieve. 3. The agent can only accelerate in the direction of its current heading. 4. The agent has a limited reserve of stamina, which is depleted by each applied force, and replenished through inaction. If the desired force is initially insurmountable then its effect must be applied over time. For example, if the third constraint is violated, the agent must turn itself to the desired direction before accelerating. Since turning takes time, the agent must project its position into the future to determine the force that will take the agent to the desired position. This, of course, presents further complications, including a limit on the amount the agent can turn depending on its velocity, as well as sensor and actuator noise that may inhibit the agent from effecting actions exactly as desired. To decompose the desired force over an extended operation, the agent must consider its momentum and project its position and velocity into the future. Based on a constant acceleration strategy, this projection is performed using Equations 5.18, and 5.19 which refer to all mobile objects. Unfortunately, there is no analytic solution for both the length of the tra-jectory, T, and desired applied force, / . 
x^o_{t+T} = x^o_t + \sum_{i=1}^{T} v^o_{t+i}    (5.18)

v^o_{t+i} = (1 - \mu)^i v^o_t + \frac{1 - (1 - \mu)^i}{\mu} f^o    (5.19)

The agent must heuristically search the trajectory space for solutions, seeking the quickest overall trajectory that yields a desired force that is valid with respect to the maximum acceleration, maximum speed, and available stamina. For example, see Figure 5.9.

Figure 5.9: Agent trajectories produced with the GoTo constraint. The agent starts at the circle with an initial speed of 1.2 m/s in the direction indicated by the solid arrow, with the desired location indicated by the square 10 meters away. The agent arrives at the desired location in 1.0, 1.0, 1.1, 1.1, 1.2, 1.3, and 1.3 seconds respectively from right to left.

5.3.6 The InterceptBall Constraint

The InterceptBall constraint is designed to move the agent to within a kickable distance of the ball. The constraint is satisfied similarly to the GoTo constraint where the desired position is the ball's location when the trajectory completes. The ball's position can be projected into the future using Equation 5.18 (assuming no exogenous actions) and the agent can choose the quickest trajectory satisfying Equation 5.20 to optimally intercept the ball.

\left\| x^b_t - x^a_t + \sum_{i=1}^{T} \left( (1 - \mu)^i (v^b_t - v^a_t) - \frac{1 - (1 - \mu)^i}{\mu} f^a \right) \right\| \le \text{kickable distance}    (5.20)

The assumption that no force is applied to the ball is important, and it is critical for an agent to judge how likely this is. The agent can calculate a lower bound on the amount of time available to intercept the ball by considering how long it will take each other player to intercept the ball optimally.3 An agent should only invoke the InterceptBall constraint if it is likely that it will intercept the ball.

5.3.7 The PlayerPosition Constraint

The PlayerPosition constraint is designed to produce fundamental soccer-player behaviours; tasks that a player in any role might perform. These responsibilities are

3Players can accelerate with a SoccerServer-specified maximum acceleration for a finite period of time that is governed by stamina.
When a player's stamina is sufficiently depleted, its maximum acceleration is dramatically reduced and its ability to replenish stamina is permanently restricted. For these reasons, very few teams allow their agents to accelerate beyond this critical stamina threshold. Although it would be beneficial to model and infer other agents' stamina, the UBCDynamo02 agent conservatively assumes that every agent has full stamina.

specifically in belief maintenance and positioning during certain set plays (for example, during a free kick). More complicated roles can then be constructed that make use of this default behaviour (see Sections 5.3.8 to 5.3.11). PlayerPosition is the concatenation of the following constraints, in order of their priority:

1. GoTo, with the following desired position:
   • If the agent is performing a kick-in, the desired position is directly behind the ball (facing the pitch) and within kickable distance from the ball.
   • If the agent is performing a free kick, the desired position is directly behind the ball (facing the opponent's goal) and within kickable distance from the ball.
   • If the agent is supporting a kick-in or free kick, the desired position is the nearest position at which the passed ball will likely be received. This is determined by knowing how the teammate will kick the ball, and searching for an appropriate initial position based on Equation 5.20.

2. Equation 5.16 with O = ball. In other words, to maintain accurate beliefs in the state of the world (maximizing viewable area over time) with a priority on ball information (by keeping the ball within the agent's field of view) from the agent's current position.

5.3.8 The GoaliePosition Constraint

The GoaliePosition constraint is designed to place a player in a position capable of preventing the ball from entering its own goal. The basic strategy is to only move parallel to the goal line at a particular distance away from its goal.
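Part of this positioning hinges on how long a kicked ball takes to reach a point. Under multiplicative speed decay the travel time has a closed form, sketched below; the default values of `v_max` and `mu` are assumptions, not SoccerServer constants from this thesis.

```python
import math

def ball_time_to(dist, v_max=2.7, mu=0.06):
    """Cycles for a ball kicked at speed v_max to travel `dist` metres
    when its speed decays by a factor (1 - mu) each cycle. Returns
    None when the distance exceeds the ball's total travel, v_max/mu,
    because the geometric series never reaches that far."""
    arg = 1.0 - mu * dist / v_max
    if arg <= 0.0:
        return None
    # Invert dist = v_max * (1 - (1 - mu)**T) / mu for T.
    return math.log(arg) / math.log(1.0 - mu)

print(ball_time_to(10.0))   # finite travel time, in cycles
print(ball_time_to(100.0))  # None: beyond v_max / mu = 45 m
```

The goalie compares this ball travel time against its own (force-limited) travel time to each candidate interception point; a point is safe when the agent's time is the smaller of the two.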
GoaliePosition is the concatenation of the following constraints, in order of their priority:

1. PlayerPosition

2. To face the ball:

\phi^a = \angle(x^b - x^a) - \theta^a    (5.21)

3. GoTo, where the agent's desired position is such that it can reach a point on any scorable ball trajectory before the ball does. To reduce the search space, the agent only considers ball interception at the closest trajectory location to its desired location. Under this restriction, interception of scorable ball trajectories is guaranteed by the constraint in Equation 5.22, where T^o_x is the time it will take object o to reach point x, and the points l and r are the two maximal interception points as illustrated in Figure 5.10.

T^b_l > T^a_l \wedge T^b_r > T^a_r    (5.22)

Figure 5.10: Illustration of the desired position used by the GoaliePosition constraint.

In the worst case, the ball may be kicked from its current position with its maximum possible speed, \|v^b\|_{max}, along the left-most or right-most trajectory. The agent assumes that it will accelerate with maximum force toward the appropriate interception location, and that it will not be limited by its maximum velocity or lack of stamina. This assumption is likely because the goalie is relatively inaccurate and the interception point is relatively close (on the order of the width of the goal). Given these situations, Equation 5.18 can be used to determine the duration of the ball's and the agent's trajectory:

T^b_x = \frac{\ln\left(1 - \frac{\mu_b}{\|v^b\|_{max}} \|x - x^b\|\right)}{\ln(1 - \mu_b)}    (5.23)

\|x - x^a\| = \frac{f^a_{max}}{\mu_a}\left( T^a_x - \frac{(1 - \mu_a)\left(1 - (1 - \mu_a)^{T^a_x}\right)}{\mu_a} \right)    (5.24)

Unfortunately, the constraint cannot be solved with a closed-form equation: the agent must search the space of possible desired locations. The agent heuristically searches along one of the maximal ball trajectories. Given an interception point along this trajectory, the agent's desired location is constrained to
the point's — only a small segment of which will satisfy both the first and second conjugates of the constraint. In general, there may be a range of solutions for any interception point. The agent prefers the mean of the satisfying set to alleviate errors in perception and those caused by the invalidity of the above assumptions.

5.3.9 The DefensivePosition Constraint

The DefensivePosition constraint is designed to hamper a player's progress toward the agent's goal. This is done by physically impeding the player's trajectory and by maintaining aggressive offside positioning. DefensivePosition is the concatenation of the following constraints, in order of their priority:

1. PlayerPosition

2. GoTo, where the agent's desired position is the weighted vector sum of two attractions (x^a + w_1 a_1 + w_2 a_2).

The first attraction is toward the ball.

a_1 = x^b_{t+1} - x^a_t    (5.25)

The second attraction depends on the agent's mark (the closest opponent with this agent being the closest teammate). If the mark is within some threshold of the agent (e.g., 10 metres) then the second attraction is 2 metres away from the mark, between the opponent and the ball. Otherwise, the second attraction is to one of a pencil of lines emanating from a point 20 meters directly behind the agent's own goal, parameterized by the agent's defensive role \gamma.

a_2 = \begin{cases} x^p + 2\,\frac{x^b - x^p}{\|x^b - x^p\|} - x^a & \text{if player } p \text{ is the agent's mark,} \\ \text{(nearest point on the line with pencil parameter } \gamma\text{)} - x^a & \text{else.} \end{cases}    (5.26)

5.3.10 The MiddlePosition Constraint

The MiddlePosition constraint is designed to facilitate the movement of the ball through the middle of the pitch toward the opponent's goal, and to disrupt its movement toward the agent's goal. MiddlePosition is the concatenation of the following constraints, in order of their priority:

Figure 5.11: Illustration of the attractive forces (dashed vectors) applied to the agent's position by the DefensivePosition constraint along with their combined effect (solid vectors).

1. PlayerPosition

2.
GoTo, where the agent's desired position depends on which team has possession of the ball.

• If the opponent team has possession of the ball, the agent's desired position is the marking position of its mark (or, if it has no mark, of the closest unmarked opponent).

• A middle player, by definition, plays in the area between the teammates closest and furthest from its own goal line. The role of a middle player is parameterized with an off-centre distance, y, and a breadth percentage, x. If the agent's team has possession of the ball, then these parameters define its base desired position as y meters tangent to its goal line, and x percent across this allowed area (see Figure 5.12). Appended to this base desired position are a set of repulsive forces, weighted inversely by the opponents' distance from the agent, that push the agent away from each opponent, but only tangent to its goal line.

5.3.11 The OffensivePosition Constraint

The OffensivePosition constraint is designed to move the agent to the most opportune location to receive the ball and move it to the goal. OffensivePosition is similar to MiddlePosition except that its base desired position places it on the team's offside

Figure 5.12: Illustration of the constraint on an agent's position provided by MiddlePosition when its teammates have possession of the ball (left) and when the opponents have possession of the ball (right).
Although specific elements of the agent are tested in the next chapter, the agent in general was able to produce rational soccer behaviours and competed respectably, placing 26th among the 42 teams from around the world at the 2002 Robot World Cup competition.

⁴ The offside line is a line parallel to the goal line that intersects the opponent player or ball that is furthest up-field.

Chapter 6

Experimental Results for the RSL Agent

This chapter presents a series of controlled experiments designed to test the reactivity of the RSL agent described in Chapter 5, along with its ability to infer an accurate internal representation of its environment. These capabilities are fundamentally important to an agent, and the results of the orthogonal experiments show that each is performed proficiently.

The first experiment, discussed in Section 6.1, tests the agent's reactivity by exploring the adaptive deliberation scheduling algorithm described in Section 5.1. The ability to synchronize with the environment is crucial, as it forms the foundation upon which the rest of the architecture is built. Indeed, synchronization during realistic scenarios validates the sufficiency of the reactivity provided by the architecture as a whole.

The remaining experiments test the agent's ability to infer an appropriate internal representation. The first aspect tested, discussed in Section 6.2, is the lowest hidden layer of the intentional model (i.e., the position and velocity of all 23 objects). The accuracy of these inferences is necessary because the observer's control is known to rely heavily on its believed world state. Furthermore, as discussed in Section 6.2.4, experiments were performed to test the agent's ability to infer high-level characteristics of its environment, such as the intentions of other agents (using the full model of Chapter 4).
Although this thesis only assumes that the ability to infer abstract behavioural descriptions can play an important role in the agent's higher-level planning and strategy adoption, the results qualitatively suggest that appropriate high-level characteristics can be inferred from the environment. It is left for future work to fully incorporate these inferences into the agent's decision process and obtain quantitative confirmation of the assumption.

6.1 Adaptive Resource Scheduling

To test the agent's reactivity, the SYNCHRONIZATION experiment involves teams of agents (each using a different synchronization algorithm) competing in a soccer game. During the game, the inferred status¹ (whether an action was effected late, on time, or not at all) is recorded for each action requested. Over several games, this data begins to reveal important characteristics of each algorithm, shown in Figure 6.1.

Figure 6.1: Distributions of correct, late, and lost actions observed by agents using the internal, external, and dynamic action index synchronization algorithms during the SYNCHRONIZATION experiment. Each synchronization algorithm was tested using a network that introduced varying transmission delays averaging approximately 15 milliseconds.

The high variance observed using internal synchronization is significant. It corresponds to the uniform distribution of possible cycle offsets: when the agent starts its cycle is not correlated at all with the SoccerServer's cycle. If the internal cycle happens to be synchronized with the external cycle, the agent is able to perform a high fraction of correct actions; however, poor synchronization is just as likely.

External synchronization was observed to produce fewer correct actions than the other algorithms on average. For external synchronization, variance from the average is a result of sensitivity to prompt proprioception: small changes in communication delay can cause a large change in the fraction of actions that are correct. Also, in a general setting it is unlikely that an external percept will actually signal the appropriate action time.

The dynamic action index synchronization algorithm produces the most correct actions on average. Also, its observed variance is low because the evolving estimates continually converge on optimal values, making the algorithm robust to varying conditions.

¹ The online procedure described in Section 5.1.3 automatically provides expressive information concerning the agent's synchronization - an ability which is a special feature of the online algorithm, combining the intended action time with its inferred status.

6.2 Interpreting Observations

This section describes experiments that test the agent's ability to infer the values of a collection of random variables (described by the model in Chapter 4) based on observations of its environment. During this process, the agent evolves a filtering distribution, incorporating observations as they arrive. The purpose of these experiments is to determine the accuracy of the model developed in this thesis by comparing this distribution to ground truth (where available) and to expected results.

The accuracy of the random variables is determined in bottom-up order. The first two experiments test the agent's ability to localize itself from a single visual percept (a single-observation kidnapped-robot problem, described in Section 6.2.1) and during navigation (described in Section 6.2.2). These abilities are important since other world-state dimensions are perceived through agent-centric sensors. The next experiment, presented in Section 6.2.3, tests the agent's ability to perceive other objects moving independently throughout the environment.
By tracking each of the 23 mobile objects in the environment, the agent can reconstruct the full world state vector. The final set of experiments (Section 6.2.4) tests the inference of the observed agents' intentional attributes in a specific situation. Together, these constitute the whole of the observer's internal representation.

6.2.1 Static Self-Localization

De Boer et al. have published results concerning the accuracy of self-localization in the RSL domain, comparing errors in the agent's believed position after localizing from a single observation [dBK02]. To allow comparison with these results, their experiments are reproduced: the STATIC SELF-LOCALIZATION experiment involves repeatedly placing the agent in a random pose and allowing it to localize from one visual observation. The agent uses the same observation to localize using each of two filters:

1. An exact Bayesian filter using the "Two-landmark Uniform" distribution described in Section 2.3.3. Note that this is not identical to the approximate filter used in [dBK02], but should produce better results.

2. A sampled, approximate Bayesian filter using the distribution based on a probabilistic interpretation of the ceiling operator, described in Section

After incorporating the observation, the mean of each distribution is compared with the agent's true pose to determine an absolute error. The results of several repetitions of this experiment are shown in Figure 6.2 and Table 6.1.

Figure 6.2: Localization errors (in metres) obtained from multiple iterations of the STATIC SELF-LOCALIZATION experiment, comparing the weighted average of all two-landmark estimates with the two-landmark uniform distribution.

The first filter produces significantly better results than the two-landmark estimate alone by making use of the known bounds on possible locations to focus the two-landmark estimate onto more probable locations.
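The sampled filter (item 2 above) can be sketched as importance sampling: draw candidate poses, weight each by the perception model, and report the weighted mean as the estimate. The following is a minimal one-dimensional sketch; the uniform prior, the Gaussian perception model, and all names here are illustrative assumptions, not the thesis's actual distributions:

```python
import math
import random

def approx_localize(observe_logpdf, prior_sampler, n_samples=1000):
    """Approximate the localization posterior with weighted samples
    (delta functions) and return the weighted-mean estimate."""
    samples = [prior_sampler() for _ in range(n_samples)]
    weights = [math.exp(observe_logpdf(x)) for x in samples]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, samples)) / total

# Illustrative use: a 1-D pose observed through Gaussian noise (sigma = 0.5),
# with a uniform prior over the pitch segment [0, 10].
rng = random.Random(0)
obs = 3.0 + rng.gauss(0.0, 0.5)                       # noisy percept of pose 3.0
log_likelihood = lambda x: -0.5 * ((x - obs) / 0.5) ** 2
estimate = approx_localize(log_likelihood, lambda: rng.uniform(0.0, 10.0), 5000)
```

As in the experiment, increasing the number of samples (delta functions) trades computation time for a more accurate posterior mean.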
The second filter is similar, except that it considers all possible locations and, rather than treating those locations uniformly, weights them according to the perception model in Equation 5.11. This produces notably better results, suggesting that the probabilistic interpretation of the ceiling operator yields a distribution that is a more accurate model of perception in the RSL.

Table 6.1: Comparison of errors obtained from the different filters used in the STATIC SELF-LOCALIZATION experiment.

    Filter type            Mean error   Std. dev.   Minimum    Maximum
    Two-landmark uniform   9.45 cm      9.69 cm     0.320 mm   2.11 m
    Probabilistic ceil.    5.91 cm      5.26 cm     0.222 mm   1.74 m

However, there is an important difference between these two filters: the first is exact; the second is approximate. The approximate filter involves more computation than the exact, uniform filter, and its computation time grows linearly with the number of delta functions used to approximate the distribution. It is therefore important to consider how many delta functions are necessary to approximate the distribution accurately. During the STATIC SELF-LOCALIZATION experiment, the mean of the approximate filter's distribution was recorded for different numbers of delta functions (multiples of 100), and the error in the resulting estimate recorded. The results are shown in Figure 6.3.

The accuracy of the distribution mean improves steadily as more delta functions are used. Results, of course, will vary depending on the random placement of those delta functions. Although this trend is clear, it is important to note that the mean error when using 100 delta functions is less than a millimetre worse, on average, than when using 5000 delta functions — and still better than the "Two-landmark Uniform" filter.

6.2.2 Dynamic Self-Localization

The results in Section 6.2.1 convey the benefit of the self-localization sensor described in Section

However, much of the power of Bayesian filtering comes in dynamic situations, where knowledge about the environment dynamics can be combined with observations over time. The DYNAMIC SELF-LOCALIZATION experiment involves repeatedly generating a random pose and allowing the agent to achieve that pose using the GoTo constraint (described in Section 5.3.5). The agent localizes itself using both internal beliefs and visual observations obtained during its trajectory, filtering five of the state's dimensions (corresponding to the agent's position, orientation, and velocity) using the second Bayesian filter. The mean of the resulting beliefs is compared with the true world state, resulting in
6.2.2 Dynamic Sel f -Local iza t ion The results in Section 6.2.1 convey the benefit of the self-localization sensor de-scribed in Section However, much of the power of Bayesian filtering comes in dynamic situations, where knowledge about the environment dynamics can be combined with observations over time. The DYNAMIC SELF-LOCALIZATION ex-periment involves repeatedly generating a random pose and allowing the agent to achieve that pose using the GoTo constraint (described in Section 5.3.5). The agent localizes itself using both internal beliefs as well as visual observations obtained during its trajectory, filtering five of the state's dimensions (corresponding to the agent's position, orientation, and velocity) using the second Bayesian filter. The mean of the resulting beliefs are compared with the true world state, resulting in 87 A p p r o x i m a t i o n E r r o r 0.0597 CD 0.0596 IE E "^0.0595 g CD 0> 0.0594 « E To 0.0593 < X 1 , I < X X 0 500 1000 1500 2000 2500 3000 3500 4000 4500 5000 Number of delta functions used in approximation Figure 6.3: Expected localization errors caused by poor approximation of the true perception distribution. Note that the entire error range is less than one centimetre. the errors displayed in Figure 6.4 and in Table 6.2. The dramatic improvements illustrate the benefit of filtering observations: the environment dynamics provide stringent constraints that restrict possibilities. Table 6.2: Comparison of errors obtained from observations alone and from filtered observations during the DYNAMIC SELF-LOCALIZATION experiment. Estimate Mean Error Std. dev. 
Minimum Maximum Direct position observation 6.11 cm 4.03 cm 1.00 mm 32.3 cm Filtered position observations 3.48 cm 2.93 cm 0.295 mm 28.1 cm Direct velocity observation 3.22 cm/s 1.74 cm/s 0.250 mm/s 11.8 cm/s Filtered velocity observations 1.92 cm/s 2.67 cm/s 0.200 mm/s 16.6 cm/s 88 Figure 6.4: Localization errors obtained from multiple iterations of the DYNAMIC SELF-LOCALIZATION experiment, showing the benefit of filtering observations. The state of the agent, position (left) and velocity (right), is displayed separately depict-ing the belief mean (solid) and the percepts (dashed). 6.2.3 Object Tracking The state also contains the position and velocity of the ball and 21 other play-ers, each obeying the same parameterized dynamics and sensor models. The OB-JECT TRACKING experiment instantiates an object with a random state (within the agent's field of view), and allows the SoccerServer to effect the object until it comes to rest. During this interval, the agent remains stationary and filters its observations. When the object stops, a random impulse is applied to it and the process repeats. Throughout the experiment, the agent's beliefs are compared to the true object state in Figure 6.5 and Table 6.3. Table 6.3: Comparison of errors obtained from observations alone and from filtered observations during the OBJECT TRACKING experiment. Estimate Error Mean Std. dev. Minimum Maximum Direct position observation 41.2 cm 31.9 cm 2.20 mm 2.31 m Filtered position observations 30.6 cm 30.8 cm 0.40 mm 2.88 m Direct velocity observation 56.8 cm/s 1.23 m/s 0.30 mm/s 62.5 m/s Filtered velocity observations 4.66 cm/s 17.9 cm/s 0.00 mm/s 8.94 m/s 89 Figure 6.5: Results of the OBJECT TRACKING experiment showing errors in believed object position (left) and velocity (right) separately. The belief mean (solid) is compared to the percepts (dashed) showing the benefit of filtering observations. 
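The benefit of combining a motion model with noisy position percepts, as measured above, can be illustrated with a one-dimensional constant-velocity Kalman filter. This is a stand-in sketch, not the thesis's actual filter; the noise parameters and the simulated trajectory are invented for illustration:

```python
import random

def kalman_track(observations, dt=0.1, q=1e-3, r=0.25):
    """1-D constant-velocity Kalman filter over noisy position percepts.
    State is (position, velocity); q is process noise added to the
    covariance diagonal, r is the position-measurement noise variance."""
    x = [observations[0], 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for z in observations[1:]:
        # Predict with F = [[1, dt], [0, 1]]: x <- F x, P <- F P F^T + Q
        x = [x[0] + dt * x[1], x[1]]
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        # Update with position measurement z (H = [1, 0])
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        y = z - x[0]
        x = [x[0] + k0 * y, x[1] + k1 * y]
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x

# Illustrative use: an object moving at 1 m/s observed with 0.5 m noise.
rng = random.Random(1)
percepts = [0.1 * t * 1.0 + rng.gauss(0.0, 0.5) for t in range(200)]
pos, vel = kalman_track(percepts)
```

As in Table 6.3, the largest gain is in velocity: differencing raw percepts amplifies observation noise, while the filter accumulates evidence over the whole trajectory.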
Although the filtered position belief is more likely to produce low errors, the significant improvement observed in other dimensions of the world state is not seen here. This is because the simulated position dynamics are sufficiently noisy that they contribute little information to the posterior compared to the percept itself. An object's velocity, however, is subject to less stochastic dynamics, and the filter is able to tightly constrain the possibilities.

6.2.4 Modelling Multiple Agents

This section describes experiments involving the agent's ability to filter its observations with respect to the full probabilistic model described in Chapter 4. The experiments' domain is a subset of robot soccer: the M-ON-N ATTACK. In this scenario, M teammates, who have possession of the ball, attempt to score on their opponents' goal, which is defended by N opponents and the opponent goalie. In each sequence, each player and the ball are placed randomly on the pitch and allowed to act until the ball leaves the pitch, scores, or is intercepted by a defender, or until a foul is committed. Other restricted domains have been proposed for the purposes of machine learning [SS02], but this scenario represents a more valid deconstruction of general soccer play, with natural means of scaling the problem [SAB+02].

The analysis begins by considering a model designed by a human with an intuitive understanding of the domain. This model, called Model-HC, was designed from a small base of rules that describe typical 2-ON-0 ATTACK behaviour, in the eyes of the designer:

1. All group behaviours can be classified using two joint intentions:

   intend_to_pass, where the ball carrier draws its opponent with the ball before passing to an open teammate.

   intend_to_shoot, where the ball carrier singularly attacks the goal.

   These constitute the group's joint intentions, formed arbitrarily but with intend_to_pass considered four times more likely than intend_to_shoot.

2.
All individual behaviours can be classified using three individual intentions:

   intend_to_intercept, where a player tries to intercept the ball. A player with this intention is expected to only move and turn, and to do so until the ball has been intercepted.

   intend_to_draw, where the ball carrier lures its opponent away from a teammate or the goal. A player with this intention is expected to dribble the ball until a good pass presents itself, at which time a pass is made.

   intend_to_score, where the ball carrier singularly attacks the goal. A player with this intention is expected to dribble toward the goal until a viable shot presents itself, at which time a shot is made.

This model is fairly simple and largely deterministic. Nonetheless, it can characterize the 2-ON-0 ATTACK scenario with some level of accuracy.

Optimizing the Model Parameters

Despite the intuition that the model designer may hold, several factors detract from the hand-crafted model. For example:

• there are often variables that are actually hidden, i.e., their values or dynamics are not known and cannot be estimated;

• human memory and predictions are often unnecessarily general, leading to inaccurately estimated probabilities; and

• the space of behaviours is too complex to characterize accurately in a reasonable amount of time.

For these reasons, although educated guesses may suffice, it is often beneficial to incorporate observed experience in a more formal way. For example, the 2-ON-0 ATTACK scenario was performed 900 times, producing 900 observation traces of different lengths. This experience can be used to estimate optimal parameters for the probabilistic model, as described in Section 4.6.

To test this procedure, different subsets of the evidence are used to estimate the parameters of the model. The sequences left out of this training phase are then used to evaluate the learned model. This is repeated numerous times with different training and test sets.
The purpose of performing these numerous experiments is to obtain a more accurate characterization of the learning algorithm and the learnable models, by effectively increasing the size of the data set. The steps of a single experiment are as follows:

1. Randomly divide the full data set of 900 sequences into mutually exclusive training and test sets. The test set in this case was selected to be 10% of the size of the training set.

2. The sequences in the training set are used to estimate the model parameters that most likely describe the observed behaviour, using the EM algorithm described in Section 4.6. For example, Figure 6.6 shows how Model-HC converges to a locally optimal parameter set (named Model-HC*) that demonstrates an improvement in likelihood of 62,440 orders of magnitude.

3. The sequences in the omitted test set are then used to compare inferences based on the learned model. This validation step verifies the learned model's robustness, because the test sequences are not present in the training data.

Initial conditions can have a tremendous effect on the parameters estimated by this technique. In order not to limit the model testing to the region of parameter space closest (in a hill-climbing sense) to a single set of initial conditions, a random search through the parameter space was conducted. In this experiment, initial model parameters are selected at random (uniformly from the model space) and fit to the data set using the above procedure. Although the space of all model parameters is extremely large, such a search can be an effective strategy for finding a more successful local extremum.

The experiment was run with 113 random restarts. The locally optimal parameter set from each of those experiments is displayed in Figure 6.7. The model with the highest likelihood, named Model-Best, is an improvement of 411 orders of magnitude over Model-HC*.
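The random-restart procedure can be sketched generically: draw random initial parameters, run EM to a local optimum, and keep the parameter set with the highest likelihood. The toy version below substitutes a two-component, unit-variance, equal-weight 1-D Gaussian mixture (learning only the means) for the thesis's intentional model; all names and values are illustrative:

```python
import math
import random

def em_gmm2(data, mu, iters=50):
    """EM for a 1-D mixture of two unit-variance, equal-weight Gaussians;
    only the two means are learned. Returns (means, log-likelihood)."""
    norm = 1.0 / math.sqrt(2.0 * math.pi)
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        resp = []
        for x in data:
            p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            resp.append(p0 / (p0 + p1))
        # M-step: re-estimate each mean from its responsibility-weighted data
        n0 = sum(resp)
        n1 = len(data) - n0
        mu = [sum(r * x for r, x in zip(resp, data)) / n0,
              sum((1.0 - r) * x for r, x in zip(resp, data)) / n1]
    ll = sum(math.log(0.5 * norm * math.exp(-0.5 * (x - mu[0]) ** 2)
                      + 0.5 * norm * math.exp(-0.5 * (x - mu[1]) ** 2))
             for x in data)
    return mu, ll

def best_of_restarts(data, restarts=20, seed=7):
    """Run EM from several random initializations; keep the best likelihood."""
    rng = random.Random(seed)
    runs = [em_gmm2(data, [rng.uniform(-10.0, 10.0), rng.uniform(-10.0, 10.0)])
            for _ in range(restarts)]
    return max(runs, key=lambda run: run[1])

# Illustrative data: two clusters centred at 0 and 5.
rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(100)] + \
       [rng.gauss(5.0, 1.0) for _ in range(100)]
(mu_a, mu_b), best_ll = best_of_restarts(data)
```

Selecting the maximum-likelihood run over many restarts plays the same role here as choosing Model-Best from the 113 optimized models.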
Figure 6.6: Convergence of the training-set (left) and test-set (right) likelihoods, as functions of EM iteration k, while optimizing the Model-HC parameters with the EM algorithm. The training likelihood characterizes how well the model can learn from observations, while the testing likelihood characterizes how well the learned model correlates with the generating model. The two cannot be meaningfully compared (the lower training likelihood is mostly the result of a larger number of member sequences), but they can be used to compare different models, as in Section

Figure 6.7: The likelihood of 113 models whose randomly selected initial parameters were optimized using the EM algorithm.

Inference Properties of the Intentional Model

To test the inference algorithms, the defending goalie in the M-ON-N ATTACK scenario is given the probabilistic world model to use as an interpreter of its observations. This allows the goalie to recover likely states of the world, including the hypothesized intentions and joint intentions of the attackers, as well as predictions about future states. The two most important characteristics of this inference are its complexity and its accuracy.

The first concern is whether the agent can perform the filtering within the resource constraints of its dynamic task. To test this, a set of observations can be considered concurrently by several different filtering algorithms and their execution times compared. The algorithms correspond to the exact inference described in Section 4.5.1 and to several parameterizations of the approximate inference described in Section 4.5.3 (i.e., with different values of N_ψ). The results are presented in Figure 6.8.
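The comparison is between an exact forward pass over the full state space and a sampled (particle) approximation whose cost scales with the particle count. A minimal two-state version is sketched below; the transition and observation matrices are invented for illustration and are not the thesis's model:

```python
import random

T = [[0.9, 0.1],      # toy state-transition matrix P(next | current)
     [0.2, 0.8]]
O = [[0.8, 0.2],      # toy observation likelihoods P(obs | state)
     [0.3, 0.7]]

def exact_step(belief, obs):
    """One exact filtering step over all states: predict, weight, normalize."""
    pred = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]
    post = [pred[j] * O[j][obs] for j in range(2)]
    z = post[0] + post[1]
    return [post[0] / z, post[1] / z]

def particle_step(particles, obs, rng):
    """One sampled filtering step: propagate each particle through the
    transition model, then resample in proportion to observation likelihood."""
    moved = [0 if rng.random() < T[s][0] else 1 for s in particles]
    weights = [O[s][obs] for s in moved]
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(3)
belief = [0.5, 0.5]
particles = [rng.randrange(2) for _ in range(5000)]
for obs in [0, 0, 1, 0, 1, 1]:
    belief = exact_step(belief, obs)
    particles = particle_step(particles, obs, rng)
approx = particles.count(0) / len(particles)
```

The exact step costs work proportional to the squared state-space size, while each particle step is linear in the particle count, which is the trade-off measured in Figure 6.8.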
Figure 6.8: The time complexity of the exact and approximate inference algorithms, as a function of the number of particles (N_ψ), using the Model-Best model of the 2-ON-0 ATTACK scenario.

Because the filtering computation is performed after each percept is received, it is clear that only the approximate algorithms with fewer than approximately N_ψ = 250 samples provide sufficiently quick inference to incorporate into the agent's deliberation process.² This conclusion, however, depends greatly upon the model parameters: if the model is such that individual or joint intentions terminate frequently, then the inference complexity increases.

² Although the agent operates at approximately 10 Hertz, it must act less than 50 milliseconds after sensation in some cases, due to restricted visual perception.

The second concern is the accuracy of the inferred values. Discussing inference accuracy is meaningless without some measure of ground truth and, although there can generally be no ground truth concerning the hypothesized characteristics of the model (for example, the mental attitudes of the observed agents), the model as a whole can be evaluated on two conceptual levels:

1. Evaluation based on the model's prediction of concrete and verifiable notions such as agent actions, elements of the world state, and percepts.

2. Correspondence, particularly with respect to the hypothesized variables, with the designer's intuitive expectations.

The first of the two notions was primarily discussed in Section 6.2. The inference and — more importantly — the prediction of correct actions and intentions is part of the extended, intentional model and is considered in this section. For the inferred actions of observed agents, ground truth is obtained by monitoring the SoccerServer program during the scenario to recover the sequence of agent actions corresponding to the processed observations.
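Given such a recovered ground-truth sequence and the model's predicted actions, the row-normalized confusion matrices reported below (Tables 6.4 and 6.5) can be computed by simple counting. A minimal sketch, with illustrative action labels and sequences rather than data from the experiments:

```python
def confusion_matrix(true_actions, predicted_actions, labels):
    """Row-normalized confusion matrix: entry [i][j] estimates the
    probability of predicting labels[j] when the agent actually
    performs labels[i]."""
    index = {a: k for k, a in enumerate(labels)}
    counts = [[0] * len(labels) for _ in labels]
    for t, p in zip(true_actions, predicted_actions):
        counts[index[t]][index[p]] += 1
    rows = []
    for row in counts:
        total = sum(row)
        rows.append([c / total if total else 0.0 for c in row])
    return rows

# Illustrative labels and sequences (not data from the experiments):
labels = ["move", "shoot", "pass"]
performed = ["move", "move", "pass", "shoot", "pass"]
predicted = ["move", "pass", "pass", "shoot", "move"]
matrix = confusion_matrix(performed, predicted, labels)
```

Each row sums to one (when its action occurs), so the diagonal entries read directly as per-action prediction accuracies, as in the tables below.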
These actions, thereafter considered true, are compared to the predictions provided by the model. For example, a confusion matrix represents the expected probability of correct and incorrect predictions. Specifically, each element of the matrix, c_ij, represents the probability of predicting action j for some time τ > t when the agent then performs action i at time τ.³ Two such matrices are shown in Tables 6.4 and 6.5, both for τ = t + 1 (i.e., a one-time-step prediction).

Perhaps more important is the inference of variables whose values have longer-term significance, like individual and group intentions. Such inferences are likely to be more useful to an observer's planning process. However, there is no rigorous method of testing these inferences, since such variables are hypothesized and are generally hidden even from the experimenter. Nonetheless, the inferences (shown in Figure 6.9) can be analyzed in a hypothetical, qualitative sense. An example sequence was observed by the defending goalie, which made the inferences in Figure 6.9. The important key frames are described in Figure 6.10.

³ Under the assumption that inferences across time are independent of each other.

Table 6.4: The action confusion matrix produced by Model-HC making inferences from a test set of sequences. For the Move, Shoot, and Dribble actions, the most probable prediction is observed on average; but when an agent performs the Pass action, the observer expects it to dribble on average; and when an agent performs the None action, the observer expects movement on average. In other words, this model does not capture the rule that induces an agent to pass the ball, as opposed to dribbling it.
    Performed    Predicted action
    action       Move     Shoot    Pass     Dribble  None
    Move         0.4268   0.0062   0.0081   0.2880   0.2709
    Shoot        0.0000   0.6479   0.0374   0.1791   0.1355
    Pass         0.0000   0.2638   0.2253   0.4835   0.0274
    Dribble      0.0000   0.0611   0.0389   0.8920   0.0080
    None         0.5673   0.0009   0.0040   0.0413   0.3865

Table 6.5: The action confusion matrix produced by the Model-Best model making inferences from a test set of sequences, which is similar to that of Model-HC*. On the whole, the optimized model is able to predict actions with greater accuracy (as evidenced by the higher likelihood) but, more importantly, the correct prediction of individual actions has been greatly improved in all cases, to the point that the most probable prediction is always the action actually effected by the agent under observation. The exception is the None action, which conforms to this statement if it is considered a Move of zero distance.

    Performed    Predicted action
    action       Move     Shoot    Pass     Dribble  None
    Move         0.6338   0.0002   0.0051   0.1081   0.2528
    Shoot        0.0079   0.7698   0.0815   0.1408   0.0000
    Pass         0.1091   0.0005   0.5832   0.2747   0.0325
    Dribble      0.3286   0.0124   0.0087   0.5840   0.0662
    None         0.5225   0.0007   0.0025   0.0189   0.4554

Figure 6.9: The evolution of joint and individual intentions throughout an example sequence, described in Section
For the joint intention (top), the solid line represents intend_to_pass and the dashed line represents intend_to_score. For the individual intentions (middle for member 1, bottom for member 2), the solid line represents intend_to_intercept, the dashed line represents intend_to_score, and the gray dash-dotted line represents intend_to_draw.

Time step 0: Initially, the ball is at the centre of the pitch and both attackers are on their half. The goalie is not sure whether their joint intention is intend_to_pass or intend_to_score, and assumes that attacker 1 will try to score (i.e., has the intention intend_to_score) while attacker 2 will try to draw (i.e., has the intention intend_to_draw).
Time step 1: However, attacker 2 proceeds toward the ball, so the goalie's beliefs are updated: attacker 2 is believed to intend_to_intercept while attacker 1 is believed to intend_to_draw.

Time step 7: After attacker 2 has intercepted the ball, and as the attackers approach, the goalie becomes increasingly certain that attacker 1 has the individual intention intend_to_draw while attacker 2 has the individual intention intend_to_score. However, these inferences do not affect the group's joint intention.

Time step 71: At t = 71, attacker 1 begins to behave as if it is attempting to score rather than draw the goalie (based on its positioning), even though attacker 2 still has the ball.

Figure 6.10: Notable moments from a particular 2-ON-0 ATTACK example, described in detail.

Time step 80: At t = 80, attacker 2 passes the ball to attacker 1. This action initiates a significant change in the goalie's beliefs. Attacker 1 intends to intercept the ball; however, the goalie's belief in intend_to_score remains high, signalling that attacker 1 will likely attempt to score after intercepting the ball. And after passing the ball, attacker 2 becomes less likely to have the intend_to_score intention, and more likely to be attempting to draw the goalie away from attacker 1. With attacker 1 in scoring position and attacker 2 in drawing position, the goalie infers that their joint intention is more likely intend_to_pass than intend_to_score, although attacker 1 does go on to shoot the ball.

Time step 90: At t = 90, attacker 1 shoots the ball. Although the goalie believes that it is likely that the group is attempting to pass before shooting, it also believes that it is most likely that attacker 1 is intending to score itself.

Figure 6.10: Continued.

Chapter 7

Conclusions

This thesis considers a situated observer in a multiple-agent environment.
Although the observer may act and may have other goals, an emphasis was placed on its ability to interpret and understand its surroundings, including the actions of other agents, and to construct an appropriate internal representation. Despite this emphasis, the observer is foremost an agent - an actor - with the competing requirements of rational decisions and reactive behaviour.

This thesis introduces an adaptive deliberation scheduling algorithm that dynamically optimizes the balance of reactivity and proactivity. The scheduling algorithm was implemented and applied in a team of agents situated within the RoboCup Simulation League domain. Theoretical and empirical analysis has shown that the algorithm generalizes previous approaches such as internal and external synchronization, relaxing their fundamental assumptions with improved, adaptive estimates. Further, it outperforms these approaches under varying conditions, exhibiting the desired robustness to a changing environment. Practically, agents using the new algorithm can choose a degree of reaction while making good decisions, approaching the rational ideal.

Based on this significant success, the adaptive deliberation scheduling algorithm was used as the foundation of a constraint-based agent architecture. The architecture controls perception, modelling, deliberation, means-ends reasoning, and action in an asynchronous and interruptible manner. When the scheduler determines that an action should be performed, the architecture always has one available; moreover, the more resources that are committed, the more appropriate the resulting action will be. The architecture was used to construct a team of eleven soccer-playing agents within the RSL domain, which competed successfully in the Robot World Cup competition, demonstrating that proactive behaviour can be attained without sacrificing responsiveness.
Within the controller processes of perception, modelling, deliberation, and means-ends reasoning, particular emphasis was placed on perception and modelling. The final significant contribution of this thesis is the development and characterization of a probabilistic model of general multiple-agent behaviour, based on the theory of intentionality. The model was used to understand behaviours in a subset of the RSL domain: the 2-ON-0 ATTACK scenario. As a result, the thesis develops a novel probabilistic model of the RSL agent's perception and dynamics. When used by the agents to interpret observations through Bayesian filtering, the model is shown to produce more accurate inferences of the world state vector than previous techniques. Experiments also explore the inference of the hypothesized intentionality of the observed agents. The full model's parameters can be learned from observed evidence, and the learned model is shown to provide interesting and intuitive predictions concerning the actions and intentions of the observed agents.

7.1 Avenues of Future Work

The probabilistic models of the RSL agent's sensors proved very accurate and sufficient to provide robust behaviours in the RSL domain. However, the inferences are approximate and do not utilize all possible worlds. A promising avenue of future work is to explore other approximations of the complex models that may be more accurate. This may involve a more appropriate sampling method, or parametric approximations of the distributions themselves (e.g., piecewise-linear models).

Further, although the ideas introduced in this thesis were developed in a general context, they were only implemented and tested in the RSL domain (and in some cases a subset of that domain). Agents in the RSL domain interact in a variety of ways in support of both individual and coordinated activities, but in general tend to be fairly reactive.
An important area of future work is to confirm the findings of this thesis with a larger variety of agents and domains: for example, to see how the adaptive deliberation scheduler performs when the environment's dynamics are less structured, or how the characteristics of the behavioural model change when the processes of intentional deliberation are more (or less) applicable to the domain.

Finally, this thesis has tested the ability of the behavioural model to predict the actions of a group of agents. The underlying assumption is that this prediction will aid deliberation and, ultimately, successful action. However, the link between these inferences and the deliberation process was not rigorously explored. For example, the decision process shown in Figure 7.1 formally represents how the inferred values may impact the commitment to the agent's own intentions. These decision processes can be learned, in conjunction with the model, to also optimize the action taken at each time step. It is only with such rigorous interpretation that the intentional model's usefulness to the observer can be concretely confirmed.

Figure 7.1: A partially observable Markov decision process model of the UBCDynamo agent's planning and intention management process. The model, not fully detailed for clarity, is based on the intentional representation of Chapter 4. Rectangles represent random variables that the agent can directly effect (decisions) and diamonds represent the value or utility of resulting world states.

Bibliography

[AC87] P. Agre and D. Chapman. Pengi: An implementation of a theory of activity. In Proceedings of the Sixth National Conference on Artificial Intelligence (AAAI-87), pages 268-272, Seattle, WA, 1987.

[AK77] H. Akashi and H. Kumamoto. Random sampling approach to state estimation in switching environments. Automatica, 13:429-434, 1977.

[Ark98] R. Arkin. Behaviour-Based Robotics. MIT Press, May 1998.

[BBS+00] H. D. Burkhard, J. Bach, K. Schröter, J.
Wendler, M. Gollin, T. Meinert, and G. Sander. AT Humboldt 2000 (team description). In P. Stone, T. Balch, and G. Kraetzschmar, editors, RoboCup 2000: Robot Soccer World Cup IV, pages 405-408. Springer-Verlag, Berlin, 2000.
[BD94] M. Boddy and T. Dean. Decision-theoretic deliberation scheduling for problem solving in time-constrained environments. Artificial Intelligence, 67(2):245-286, 1994.
[Bil97] Jeff Bilmes. A gentle tutorial on the em algorithm and its application to parameter estimation for gaussian mixture and hidden markov models. Technical report, U.C. Berkeley, 1997.
[BIP98] M. E. Bratman, D. J. Israel, and M. E. Pollack. Plans and resource-bounded practical reasoning. Computational Intelligence, 4:349-355, 1988.
[Bis94] J. C. Biscarat. Almost sure convergence of a class of stochastic approximation algorithms. Stochastic Process Applications, 50:83-100, 1994.
[BKL+93] R. Barman, S. Kingdon, J. J. Little, A. K. Mackworth, D. K. Pai, M. Sahota, H. Wilkinson, and Y. Zhang. Dynamo: realtime experiments with multiple mobile robots. In Proceedings of the IEEE Intelligent Vehicles Conference, pages 261-266, Tokyo, Japan, July 1993.
[BKRK97] J. Binder, D. Koller, S. J. Russell, and K. Kanazawa. Adaptive probabilistic networks with hidden variables. Machine Learning, 29:213-244, 1997.
[Bol93] J. C. Bolot. Characterizing end-to-end packet delay and loss in the internet. Journal of High-Speed Networks, 2(3):305-323, December 1993.
[BPH00] M. Butler, M. Prokopenko, and T. Howard. Flexible synchronisation within robocup environment: a comparative analysis. In P. Stone, T. Balch, and G. Kraetzschmar, editors, RoboCup 2000: Robot Soccer World Cup IV, pages 119-128. Springer-Verlag, Berlin, 2000.
[Bra84] V. Braitenberg. Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, MA, 1984.
[Bra87] M. E. Bratman. Intentions, Plans and Practical Reason. Harvard University Press, 1987.
[Bra92] M. E. Bratman.
Shared cooperative activity. Philosophical Review, 101(2):327-341, 1992.
[Bro86] R. A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1):14-23, 1986.
[Bro91] R. A. Brooks. Intelligence without representation. Artificial Intelligence, 47:139-160, 1991.
[BSF88] Y. Bar-Shalom and T. E. Fortmann. Tracking and Data Association. Academic Press, 1988.
[BVW00] H. H. Bui, S. Venkatesh, and G. West. On the recognition of abstract markov policies. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-2000), Austin, TX, August 2000.
[BVW01] H. H. Bui, S. Venkatesh, and G. West. Tracking and surveillance in wide area spatial environments using abstract hidden markov models. International Journal of Pattern Recognition and Artificial Intelligence, 15(1):177-196, 2001.
[CDLS99] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic Networks and Expert Systems. Springer-Verlag, 1999.
[CFH+02] M. Chen, E. Foroughi, F. Heintz, Z. X. Huang, S. Kapetanakis, K. Kostiadis, J. Kummeneje, I. Noda, O. Obst, P. Riley, T. Steffens, Y. Wang, and X. Yin. RoboCup Soccer Server: for Soccer Server Version 7.07 and later, July 2002.
[CG93] E. Charniak and R. Goldman. A bayesian model of plan recognition. Artificial Intelligence, 64:53-79, 1993.
[CH96] I. J. Cox and S. L. Hingorani. An efficient implementation of reid's multiple hypothesis tracking algorithm and its evaluation for the purpose of visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18:138-150, February 1996.
[CL90] P. R. Cohen and H. J. Levesque. Intention is choice with commitment. Artificial Intelligence, 42(2-3):213-261, 1990.
[Coo90] G. F. Cooper. Computational complexity of probabilistic inference using bayesian belief networks. Artificial Intelligence, 42:393-405, 1990.
[Cri89] F. Cristian. Probabilistic clock synchronization. Distributed Computing, 3(3):146-158, 1989.
[dBK02] Remco de Boer and Jelle Kok. The incremental development of a synthetic multi-agent system: the uva trilearn 2001 robotic soccer simulation team. Master's thesis, University of Amsterdam, The Netherlands, February 2002.
[DBS94] M. Drummond, J. Bresina, and K. Swanson. Just-in-case scheduling. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), pages 1098-1104, Seattle, WA, 1994.
[DdFMR00] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-blackwellised particle filtering for dynamic bayesian networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI-2000), pages 176-183, 2000.
[Den87] D. C. Dennett. The Intentional Stance. MIT Press, Cambridge, MA, 1987.
[DFG01] A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer, New York, 2001.
[DGH92] P. Dagum, A. Galper, and E. Horvitz. Dynamic network models for forecasting. In Proceedings of Uncertainty in Artificial Intelligence (UAI-92), pages 41-48, 1992.
[DLR77] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society B, 39(1):1-38, 1977.
[Dor00] K. Dorer. Improved agents of the magmafreiburg2000 team. In P. Stone, T. Balch, and G. Kraetzschmar, editors, RoboCup 2000: Robot Soccer World Cup IV, pages 417-420. Springer-Verlag, Berlin, 2000.
[Dou98] A. Doucet. On sequential simulation-based methods for bayesian filtering. Technical Report CUED/F-INFENG/TR 310, Cambridge University, Department of Engineering, Cambridge, UK, 1998.
[EHL+02] P. Elinas, J. Hoey, D. Lahey, J. D. Montgomery, D. Murray, S. Se, and J. J. Little. Waiting with jose, a vision based mobile robot. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2002.
[Fer92] I. A. Ferguson. TouringMachines: An architecture for dynamic, rational, mobile agents.
PhD thesis, Clare Hall, University of Cambridge, UK, November 1992.
[FST98] S. Fine, Y. Singer, and N. Tishby. The hierarchical hidden markov model: Analysis and applications. Machine Learning, 32(1):41-62, 1998.
[FTBD00] D. Fox, S. Thrun, W. Burgard, and F. Dellaert. Robust monte carlo localization for mobile robots. Artificial Intelligence Journal, 128:99-141, 2000.
[FTBD01] D. Fox, S. Thrun, W. Burgard, and F. Dellaert. Particle filters for mobile robot localization. In A. Doucet, N. de Freitas, and N. Gordon, editors, Sequential Monte Carlo Methods in Practice. Springer, New York, 2001.
[Gha98] Z. Ghahramani. Learning dynamic bayesian networks. In C. L. Giles and M. Gori, editors, Adaptive Processing of Sequences and Data Structures (Lecture Notes in Artificial Intelligence), pages 168-197. Springer-Verlag, Berlin, 1998.
[GL87] M. P. Georgeff and A. L. Lansky. Reactive reasoning and planning. In Proceedings of the Sixth National Conference on Artificial Intelligence (AAAI-87), pages 677-682, Seattle, WA, 1987.
[Gre69] C. Green. Application of theorem proving to problem solving. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 219-239. Morgan Kaufmann, 1969.
[GRS96] W. Gilks, S. Richardson, and D. Spiegelhalter. MCMC Methods in Practice. CRC Press, 1996.
[Hoe01] J. Hoey. Hierarchical unsupervised learning of facial expression categories. In ICCV Workshop on Detection and Recognition of Events in Video, Vancouver, BC, 2001.
[HP91] R. Hartley and F. Pipitone. Experiments with the subsumption architecture. In Proceedings of the IEEE Conference on Robotics and Automation (ICRA), pages 1652-1658, 1991.
[HPH00] O. Hodson, C. Perkins, and V. Hardman. Skew detection and compensation for internet audio applications. In Proceedings of the IEEE International Conference on Multimedia and Expo, New York, NY, July 2000.
[HV00] K. Han and M. Veloso. Automated robot behavior recognition applied to robotic soccer. In J.
Hollerbach and D. Koditschek, editors, Robotics Research: the Ninth International Symposium, pages 199-204. Springer-Verlag, London, 2000. Also in the Proceedings of the IJCAI-99 Workshop on Team Behaviors and Plan Recognition.
[HZ00] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, UK, 2000.
[IB98] M. Isard and A. Blake. Condensation: conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1):5-28, 1998.
[Int99] S. S. Intille. Visual Recognition of Multi-Agent Action. PhD thesis, Massachusetts Institute of Technology, September 1999.
[KA94] S.-S. Kuo and O. E. Agazzi. Keyword spotting in poorly printed documents using pseudo 2-d hidden markov models. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI-94), 16(8):842-848, August 1994.
[KAK+97] H. Kitano, M. Asada, Y. Kuniyoshi, I. Noda, and E. Osawa. Robocup: The robot world cup initiative. In W. Lewis Johnson and B. Hayes-Roth, editors, Proceedings of the First International Conference on Autonomous Agents (Agents'97), pages 340-347, New York, 1997. ACM Press.
[Kal60] R. E. Kalman. A new approach to linear filtering and prediction problems. ASME Journal of Basic Engineering, 82(D):35-45, 1960.
[KBM98] D. Kortenkamp, R. P. Bonasso, and R. Murphy, editors. Artificial Intelligence and Mobile Robots: Case Studies of Successful Robot Systems. AAAI Press, March 1998.
[KR90] L. P. Kaelbling and S. J. Rosenschein. Action and planning in embedded agents. In P. Maes, editor, Designing Autonomous Agents, pages 35-48. The MIT Press, 1990.
[Lau95] S. L. Lauritzen. The em algorithm for graphical association models with missing data. Computational Statistics and Data Analysis, 19:191-201, 1995.
[LCN90] H. J. Levesque, P. R. Cohen, and J. H. Nunes. On acting together. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pages 94-99, Boston, MA, 1990.
[LRL+97] H. J.
Levesque, R. Reiter, Y. Lesperance, F. Lin, and R. B. Scherl. GOLOG: a logic programming language for dynamic domains. Journal of Logic Programming, 31(1-3):59-83, 1997.
[Mac93] A. K. Mackworth. On seeing robots. In A. Basu and X. Li, editors, Computer Vision: Systems, Theory, and Applications, pages 1-13. World Scientific Press, Singapore, 1993.
[McC79] J. McCarthy. Ascribing mental qualities to machines. In M. Ringle, editor, Philosophical Perspectives in Artificial Intelligence. Humanities Press, Atlantic Highlands, NJ, 1979.
[MH69] J. McCarthy and P. J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence 4. Edinburgh University Press, 1969.
[MM02] J. D. Montgomery and A. K. Mackworth. Adaptive synchronisation for a robocup agent. In G. A. Kaminka, P. U. Lima, and R. Rojas, editors, RoboCup-2002: Robot Soccer World Cup VI, pages 128-143, Fukuoka, Japan, 2002.
[Mon02] J. D. Montgomery. Ubcdynamo02: A team of robot soccer players. In G. A. Kaminka, P. U. Lima, and R. Rojas, editors, RoboCup-2002: Robot Soccer World Cup VI, page 552, Fukuoka, Japan, 2002.
[Mor84] H. P. Moravec. Locomotion, vision and intelligence. In Brady and Paul, editors, Proceedings of the First International Symposium on Robotics Research, pages 215-224. MIT Press, 1984.
[MP01] K. Murphy and M. Paskin. Linear time inference in hierarchical hmms. In Proceedings of Neural Information Processing Systems (NIPS-2001), 2001.
[Mur02] K. Murphy. Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, University of California at Berkeley, Computer Science Division, July 2002.
[NH98] R. M. Neal and G. E. Hinton. A new view of the em algorithm that justifies incremental and other variants. In M. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer Academic Publishers, 1998.
[NH99] V. Nefian and M. H. Hayes. Face recognition using an embedded hmm.
In IEEE International Conference on Audio- and Video-based Biometric Person Authentication, pages 19-24, March 1999.
[Nil80] N. J. Nilsson. Principles of Artificial Intelligence. Morgan Kaufmann, San Mateo, CA, 1980.
[Nod95] I. Noda. Soccer server: a simulator for robocup. JSAI AI-Symposium 95: special session on RoboCup, December 1995.
[OPS99] R. Ostrovsky and B. Patt-Shamir. Optimal and efficient clock synchronization under drifting clocks. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC '99), Atlanta, GA, 1999.
[Pea88] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Francisco, CA, 1988.
[PFH99] V. Pavlovic, B. Frey, and T. S. Huang. Variational learning in mixed-state dynamic graphical models. In Proceedings of Uncertainty in Artificial Intelligence, Stockholm, Sweden, 1999.
[PMG98] D. Poole, A. Mackworth, and R. Goebel. Computational Intelligence: a logical approach. Oxford University Press, 1998.
[Pou00] S. Pourazin. Pardis. In M. Veloso, E. Pagello, and H. Kitano, editors, RoboCup-99: Robot Soccer World Cup III, pages 614-617. Springer-Verlag, Berlin, 2000.
[PRCM99] V. Pavlovic, J. M. Rehg, T.-J. Cham, and K. Murphy. A dynamic bayesian network approach to figure tracking using learned dynamical models. Kerkyra, Greece, 1999.
[PRM01] V. Pavlovic, J. M. Rehg, and J. MacCormick. Learning switching linear models of human motion. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS) 2000, pages 981-987. MIT Press, Denver, CO, 2001.
[Rab89] L. R. Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, February 1989.
[RG91] A. S. Rao and M. P. Georgeff. Modeling rational agents within a BDI-architecture. In R. Fikes and E. Sandewall, editors, Proceedings of Knowledge Representation and Reasoning (KR&R-91), pages 473-484, San Mateo, CA, 1991.
Morgan Kaufmann Publishers.
[RH98] C. Rasmussen and G. D. Hager. Joint probabilistic techniques for tracking multi-part objects. In Proceedings of Computer Vision and Pattern Recognition, pages 16-21, Santa Barbara, CA, June 1998.
[RN95] S. J. Russell and P. Norvig. Artificial Intelligence: a modern approach. Prentice-Hall, 1995.
[RW89] S. J. Russell and E. Wefald. Principles of metareasoning. In R. J. Brachman, H. J. Levesque, and R. Reiter, editors, KR'89: Principles of Knowledge Representation and Reasoning, pages 400-411. Morgan Kaufmann, San Mateo, California, 1989.
[RW91] S. J. Russell and E. H. Wefald. Do the right thing: studies in limited rationality. MIT Press, Cambridge, MA, 1991.
[SAB+02] P. Stone, M. Asada, M. Bowling, B. Hengst, I. Noda, M. Riedmiller, R. Sutton, and M. Veloso. The robocup special interest group on multiagent learning, 2002.
[Sea90] J. R. Searle. Collective intentions and actions, chapter 19. MIT Press, 1990.
[Sha78] M. I. Shamos. Computational Geometry. PhD thesis, Yale University, New Haven, CT, 1978. (UMI #7819047).
[Sha86] R. D. Shachter. Intelligent probabilistic inference. In J. F. Lemmer and L. N. Kanal, editors, Uncertainty in Artificial Intelligence, pages 371-382. 1986.
[Sha90] R. D. Shachter. Evidence absorption and propagation through evidence reversals. In M. Henrion, R. D. Shachter, J. F. Lemmer, and L. N. Kanal, editors, Uncertainty in Artificial Intelligence, volume 5, pages 173-190. 1990.
[Sim82] H. A. Simon. Models of bounded rationality, volume 2. MIT Press, Cambridge, MA, 1982.
[SM94] M. Sahota and A. K. Mackworth. Can situated robots play soccer? In Proceedings of the Canadian Artificial Intelligence Conference (AI-94), pages 249-254, 1994.
[Smy97] P. Smyth. Clustering sequences with hidden markov models. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9, page 648. The MIT Press, 1997.
[SPS99] R. S. Sutton, D. Precup, and S. Singh.
Between mdps and semi-mdps: a framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181-211, 1999.
[SS00] B. Schappel and F. Schulz. Mainz rolling brains 2000. In P. Stone, T. Balch, and G. Kraetzschmar, editors, RoboCup 2000: Robot Soccer World Cup IV, pages 497-500. Springer-Verlag, Berlin, 2000.
[SS02] P. Stone and R. S. Sutton. Keepaway soccer: a machine learning testbed. In A. Birk, S. Coradeschi, and S. Tadokoro, editors, RoboCup-2001: Robot Soccer World Cup V. Springer-Verlag, Berlin, 2002.
[SVR98] P. Stone, M. Veloso, and P. Riley. The cmunited-98 champion simulator team (extended version). In M. Asada and H. Kitano, editors, RoboCup-98: Robot Soccer World Cup II. Springer-Verlag, Berlin, 1998.
[Thi95] B. Thiesson. Accelerated quantification of bayesian networks with incomplete data. In Proceedings of the First International Conference on Knowledge Discovery and Data Mining (KDD-95), pages 306-311. AAAI Press, 1995.
[WJ95] M. J. Wooldridge and N. R. Jennings, editors. Intelligent Agents: Theories, Architectures, and Languages, volume 890 of Lecture Notes in Artificial Intelligence. Springer-Verlag, January 1995. Proceedings of the ECAI-94 Workshop on Agent Theories, Architectures, and Languages, Amsterdam.
[WT90] G. Wei and M. Tanner. A monte-carlo implementation of the em algorithm and the poor man's data augmentation algorithm. Journal of the American Statistical Association, 85:699-704, 1990.
[ZCLC00] B. Zhang, X. Chen, G. Liu, and Q. Cai. Agent architecture: A survey of robocup-99 simulation teams. In Proceedings of the Third World Conference on Intelligent Control and Automation, Hefei, P. R. China, 2000.
[Zha94] Y. Zhang. A Foundation for the Design and Analysis of Robotic Systems and Behaviours. PhD thesis, University of British Columbia, September 1994.
[Zha98] Y. Zhang. A constraint-based approach to real-time cooperative multiagent systems: A soccer-playing robot team.
Master's thesis, University of British Columbia, September 1998.
[ZM95] Y. Zhang and A. K. Mackworth. Constraint nets: A semantic model for hybrid dynamic systems. Theoretical Computer Science, 138(1):211-239, 1995. Special Issue on Hybrid Systems.
[ZM99] Y. Zhang and A. K. Mackworth. Modeling and analysis of hybrid control systems: An elevator case study. In H. Levesque and F. Pirri, editors, Logical Foundations for Cognitive Agents, pages 370-396. Springer, Berlin, 1999.
[ZR96] S. Zilberstein and S. J. Russell. Optimal composition of real-time systems. Artificial Intelligence, 82(1-2):181-213, 1996.

Equation Derivations

This appendix goes through derivations for some equations used in this thesis, and details exactly how the mathematics used in the thesis relates to equations described in the SoccerServer manual.

Rearrangement of the Equations of Dynamics

In the SoccerServer manual, the equations of dynamics are given in the following form [CFH+02]:

    x' = x + v + a + r        (7.1)
    v' = α (v + a + r)        (7.2)

A little mathematical wrangling can produce equations that seem more consistent with traditional physics, yet are mathematically equivalent to the dynamics as presented in the manual. This process, which involves using a new interpretation of velocity, was used to derive the equations of dynamics used in this thesis.

This thesis uses the variables x̃, ṽ, and (1/m)f̃ to denote an object's position, velocity, and applied acceleration respectively. These variables are related to the manual variables in the following way:

    x̃ = x                    (7.3)
    ṽ = v + a + r            (7.4)
    (1/m) f̃ = a + r          (7.5)

It is rather straightforward to transform the equations of dynamics into this variable space. Starting with Equation 7.1,

    x̃' = x'                  (7.6)
       = x + v + a + r       (7.7)
       = x̃ + ṽ               (7.8)

Next, we determine how the new velocity evolves based on Equation 7.2, using a drag coefficient μ = 1 − α instead of the decay parameter α.
    ṽ' = v' + a' + r'                  (7.9)
       = α (v + a + r) + a' + r'       (7.10)
       = (1 − μ) ṽ + (1/m) f̃'          (7.11)

It is important to note that the noise term, r, has been absorbed into the applied force — random noise applied to the object's trajectory is viewed as an exogenous force applied to the object by the environment.

Modelling the RSL Visual Sensor

The derivation of probability distributions that model the visual sensor are based on the following probabilistic interpretation of the ceiling operator:

    Pr(A = a | ⌈A⌉ = ⌈a⌉) = { 1  if ⌈a⌉ ∈ a + [0, 1),
                              0  otherwise.                  (7.12)

Given this interpretation, we can replace the ceiling operator with a random variable whose distribution is governed by Equation 7.12. For example, Equation 5.11 follows directly from this procedure. Deriving the distribution of possible ranges is the most complicated and is done in detail below as an example:

    ‖y‖ = (1/10) ⌈10 exp{q ⌈(1/q) ln r⌉}⌉    (7.13)
        = exp{q ⌈(1/q) ln r⌉} + A            (7.14)
        = exp{ln r + B} + A                  (7.15)
        = r e^B + A                          (7.16)

A and B represent unique random variables that have no relation to any other parameter discussed in this thesis. Their distribution follows directly from Equation 7.12:

    Pr(A = a | ‖y‖) = { 10   if a ∈ (−1/20, 1/20],
                        0    otherwise.                      (7.17)

    Pr(B = b | ‖y‖) = { 1/q  if b ∈ (−q/2, q/2],
                        0    otherwise.                      (7.18)

Given these two hypothesized random variables (A and B) the distribution for the actual range is determined as follows:

    Pr(R = r | Y = y)
      = ∫ da ∫ de^b Pr(e^B = e^b, A = a | ‖y‖) Pr(r | A = a, e^B = e^b, ‖y‖)      (7.19)
      = ∫ da ∫ de^b (1/q) e^{−b} Pr(A = a | ‖y‖) Pr(r | A = a, e^B = e^b, ‖y‖)    (7.20)
      = ∫ de^b (10/q) e^{−b} Pr(A = ‖y‖ − e^b r | ‖y‖)                            (7.21)
      = ∫ de^b (10/q) e^{−b},  over e^b ∈ [ max{e^{−q/2}, (‖y‖ − 1/20)/r},
                                            min{e^{q/2}, (‖y‖ + 1/20)/r} ]        (7.22)
      = (10/q) ln ( min{e^{q/2}, (‖y‖ + 1/20)/r} / max{e^{−q/2}, (‖y‖ − 1/20)/r} )    (7.23)
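The rearranged equations of dynamics can be exercised in a short simulation. The sketch below steps x' = x + v, v' = (1 − μ)v + f/m, and checks the step updates against a closed-form T-step position prediction of the kind derived later in this appendix; the friction value and unit mass are illustrative assumptions, not SoccerServer defaults.

```python
def step(x, v, f, mu=0.06, m=1.0):
    """One simulation tick of the rearranged dynamics:
    x' = x + v,  v' = (1 - mu) * v + f / m."""
    return x + v, (1.0 - mu) * v + f / m

def predict(x, v, f, T, mu=0.06, m=1.0):
    """Closed-form T-step position prediction under a constant applied
    force f, obtained by unrolling step() and collecting geometric terms."""
    x_T = x + v * sum((1.0 - mu) ** t for t in range(T))
    x_T += (f / m) * sum((T - t - 1) * (1.0 - mu) ** t for t in range(T - 1))
    return x_T

# The step-by-step simulation and the closed form agree.
x, v = 0.0, 1.0
for T in (1, 2, 5, 10):
    xs, vs = x, v
    for _ in range(T):
        xs, vs = step(xs, vs, f=0.05)
    assert abs(xs - predict(x, v, f=0.05, T=T)) < 1e-9
```

The agreement check mirrors how the pattern in the trajectory predictions can be verified numerically before being used for multi-step planning.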
Starting with the equations of dynamics in the form xt+i = xt + vt = xt + (l- n)vt-i + — ft (7.24) m the state transition model can then be expanded by hypothesizing about the object's previous velocity and current applied force: P r ( X t + i | X t ) Y o , t ) = / Pr(Vt^\Xt,Y0..t) [ P r ( F t | X t , Y 0 : t , V t _ i ) x JVt-i JFt P r ( X t + 1 = x t + i | X t ) Y 0 : t , V t - i , F f ) (7.25) 116 Pr(F t = / t | X t = xt, Y 0 : t , V t _ i = (1 - rf-^xt+i - xt m 1 — ft))x P r ( V t _ i = (1 - l i ) - \ x t + l -xt- - / t ) | X t = x t , Y 0 : t ) d/t (7.26) It is likely that the probability of the object's decayed previous velocity will be known. For example, the agent's own distribution is perceived using Equa-tion 2.20. The applied force probability, however, is more intricate. Computing Optimal Trajectories Computing an optimal trajectory involves consideration of how an object's mo-mentum will evolve into the future. Fortunately, we have accurate equations of dynamics, so this can be done well. For a player, we assume that it will follow the optimal strategy of turning an appropriate amount and then linearly accelerating until it reaches its goal. Accordingly, we have the following predictions: Observing the above predictions, a clear pattern emerges, giving us the fol-lowing prediction equations: S" = S+v + (l - LI)V + Ef i x"> = x + v- (1 + 1 - A i + ( 1 - M ) 2 ) + t? x' = X + V / • ^ a + i - M + i ) (l-u,)v + ±f ( l - M ) ( ( l - M ) t 7 + i / ) + i / v-(l-pf + / • i ( l + ( l - / i ) + ( l - M ) 2 ) x{t + T) T-j-l f (7.27) (7.28) 117 Glossary of Variables World state variables for the RSL The notation for the world state variables follows the following design: x, v, 9, and 4> represent an object's position, velocity, heading, and head direction respectively. The specific object addressed is indicated as a superscript to that base variable. 
For example, the RSL world state is composed of the following variables:

x^a, X^a  The agent's own position, and a random variable over all possible agent positions.
x^pi, X^pi  The position of player i, and a random variable over all possible positions for that player.
x^b, X^b  The ball's position, and a random variable over all possible ball positions.
v^a, V^a  The agent's own velocity, and a random variable over all possible agent velocities.
v^pi, V^pi  The velocity of player i, and a random variable over all possible velocities for that player.
v^b, V^b  The ball's velocity, and a random variable over all possible ball velocities.
θ^a, Θ^a  The agent's own heading, and a random variable over all possible agent headings.
θ^pi, Θ^pi  The heading of player i, and a random variable over all possible headings for player i.
φ^a, Φ^a  The agent's own head direction (relative to its body), and a random variable over all possible agent head directions.
φ^pi, Φ^pi  The head direction of player i, and a random variable over all possible head directions for player i.
x, X  The conglomeration of all of the above variables (and random variables) into the world state vector.

When dynamics need be explicit, a subscript to the above variables denotes the time step that the variable applies to.

Observations of the world state

In general, an observation of a particular variable is denoted using the variable y with the observed variable placed in superscript. For example, y^{φ^a} is an observation of the agent's own head orientation, and y^{v^p2} is an observation of player 2's velocity.

Variables modelling agent behaviours

The following variables are used to model the behaviours of other agents (i.e., not the observer) within an environment. The model, described in detail in Chapter 4, characterizes a group of p agents acting in an environment.

A^q  A random variable addressing the action taken by observed agent q.
This is a multi-nomial variable encompassing the various types of action the agent may perform (for example, Dash, Turn, and Kick in the RSL). ‖A‖ denotes the number of actions modelled, and the domain is the same for all observed agents.

A_i(b)  The modelled agent's action policy, stochastically providing an agent's action, given its current intention being i and current beliefs being b. All observed agents have the same policy. See Section 4.4.
B^q  A random variable addressing the beliefs of an observed agent q. The domain of B^q is the same as the world state, and the same for each observed agent.
β  A parameter that specifies the dependence of an observed agent's belief, B^q, on the observer's belief. See Equation 4.1.
I^q  A random variable addressing the individual intentions of an observed agent q. This is a multi-nomial variable whose domain size, ‖I‖, is the same for each observed agent and denotes the number of distinct propositions the observed agents may intend.
I_j(b)  Represents all agents' individual intention commitment policy. Specifically, it stochastically provides the next individual intention, I^q, given the agent's current belief being b, and the group's joint intention being j. However, a new intention can only be adopted if the previous intention is satisfied or abandoned. See Section 4.2.1.
S^q  A random variable addressing the satisfaction or abandonment of agent q's current individual intention. If S^q = ⊤, then agent q's intention has been satisfied or abandoned. Otherwise, it persists.
S_i(b)  Represents, indirectly, the sets of propositions that constitute each intention. Specifically, it stochastically determines whether an agent's individual intention is satisfied or abandoned if it has intention i and beliefs b.
J  A random variable addressing the joint intentions of a group of observed agents. This is a multi-nomial variable whose domain size, ‖J‖, denotes the number of distinct propositions the observed group may intend.
J(b_{1:p})  Represents the group's joint intention commitment policy. Specifically, it stochastically provides the next joint intention, J, given the group's current joint belief being b_{1:p}. However, a new joint intention can only be adopted if the previous joint intention is satisfied or abandoned. See Section 4.2.1.
T  A random variable addressing the satisfaction or abandonment of the group of agents' current joint intention. If T = ⊤, then the group's joint intention has been satisfied or abandoned. Otherwise, it persists.
T_j(b_{1:p})  Represents, indirectly, the sets of propositions that constitute each joint intention. Specifically, it stochastically determines whether the group's joint intention is satisfied or abandoned if it has joint intention j and joint beliefs b_{1:p}.
Q  In general, this random variable denotes the set of hidden random variables contained within a particular model. For the model of Chapter 4, Q ≝ {J, I^{1:p}, A^{1:p}, X, B^{1:p}, S^{1:p}, T}.
C, Ψ  A separation of the hidden random variables in the model of Chapter 4 for the purpose of describing properties of an inference algorithm. C_t ≝ {J_t, I_t^{1:p}, A_t^{1:p}, X_t}. Ψ_t ≝ {T_t, S_t^{1:p}, B_t^{1:p}}. See Section 4.5.
π  A parameter describing the initial probability distribution over the hidden random variables of the model.

Adaptive resource scheduling

Most of the variables in this area deal with various time periods related to the internal and external clocks. These details are described mostly in Section 5.1.

Δ, Δ_max  A random variable that addresses the amount of delay in a message delivered across the communication channel. The message may be a command from the agent to the environment, or stimuli from the environment to the agent. Δ_max is a parameter that specifies the largest possible delay.
t*, t̂*  The optimal time to command action within the agent's cycle with respect to the external clock, and the agent's current estimate of the optimal external time to command action, respectively.
τ, τ̂  The temporal offset between the internal clock and the external clock, and the agent's current estimate of the temporal offset between these two clocks, respectively.
t_prop^i  The perception time of the ith previous proprioception with respect to the internal clock.
n_e  The number of action commands that have been effected. See Section 5.1.3.
P, C  Sets of pending and confirmed action commands, respectively.

SoccerServer parameters

See [Nod95, CFH+02] for a more complete listing of SoccerServer parameters.

μ  The coefficient of friction for a particular object. See Equation 2.18, for example.
q  A quantization factor used to corrupt stimuli using Equation 2.21.
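The scheduling variables above describe a clock-synchronisation loop: estimate the internal-to-external offset τ, then command action at the internal time corresponding to t*. The sketch below is a generic, Cristian-style illustration [Cri89] of how τ̂ and the next command time might be computed; the minimum-filter estimator and both function names are assumptions of this sketch, not the thesis's actual algorithm [MM02].

```python
def estimate_offset(samples):
    """Estimate the internal-to-external clock offset tau-hat from
    (internal_receive_time, external_send_time) pairs.  Taking the minimum
    difference discounts variable channel delay, in the spirit of
    Cristian-style synchronisation; an illustrative estimator only."""
    return min(t_int - t_ext for t_int, t_ext in samples)

def next_command_time(t_star_ext, tau_hat, cycle, now_int):
    """Map the estimated optimal external command time t-hat* onto the
    internal clock, rolling forward whole cycles until it lies in the future."""
    t = t_star_ext + tau_hat
    while t <= now_int:
        t += cycle
    return t
```

For example, with samples whose true offset is 5.0 seconds plus small non-negative delays, the minimum-difference estimate stays within the smallest observed delay of the true offset, and the resulting command time lands on the next cycle boundary after the current internal time.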

