UBC Theses and Dissertations
Adaptive battlespace situation assessment using a hierarchy of reconfigurable Bayesian networks Mirmoeini, Farnoush 2005-12-31



Adaptive Battlespace Situation Assessment Using a Hierarchy of Reconfigurable Bayesian Networks

by

Farnoush Mirmoeini

B.A.Sc., University of British Columbia, 2003

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

The University of British Columbia
September 2005

© Farnoush Mirmoeini, 2005

Abstract

Situation assessment is the task of summarizing low-level sensor data in a battlefield environment to produce hypotheses suitable for use in command and control. In this thesis a novel algorithm is devised for adaptive multi-stage situation assessment using a hierarchy of Bayesian networks that are reconfigured on two timescales. Network Centric Warfare concepts are used in designing the situation assessment system. The formulation and algorithms presented are suitable for dynamic battlespace situation changes. Furthermore, algorithms are provided to model the battlespace conditions as a stochastic feedback system that uses the hypotheses generated by the Bayesian networks to make decisions. Numerical examples are provided to demonstrate the effectiveness of these algorithms.
Contents

Abstract
Contents
List of Tables
List of Figures
List of Acronyms
Acknowledgements
1 Introduction and Preview
  1.1 Introduction
  1.2 Thesis Overview and Contributions
  1.3 Outline
2 Background and Literature Survey
  2.1 Situation Assessment
  2.2 Bayesian Networks
  2.3 Network Centric Warfare
  2.4 Related Work
3 A Hierarchical Multi-Timescale Situation Assessment System Using Bayesian Networks
  3.1 A Hierarchical Multi-Timescale Architecture for Situation Assessment System
    3.1.1 Sensor System and Integrated Tracker
    3.1.2 Hierarchical Bayesian Networks for Situation Assessment
  3.2 Multi-Timescale Reconfiguration of the Bayesian Networks for Adaptive Situation Assessment
    3.2.1 Reconfiguration of the Lower Level Bayesian Networks
    3.2.2 Reconfiguration of the Higher Level Bayesian Network
  3.3 Dynamic Situation Assessment with Smart Targets
    3.3.1 Situation Assessment Based Decision-Making in Battlespace
    3.3.2 Modeling the Response of Smart Targets to Decisions
    3.3.3 A Stochastic Feedback Algorithm to Model the Battlespace Dynamics
    3.3.4 Markovian Analysis of Battlespace Dynamics
4 Numerical Examples
  4.1 Numerical Examples for Reconfiguration of the Situation Assessment System
  4.2 Numerical Studies for the Decision Making System and Battlespace Evolution
5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work
Bibliography
Appendices
Appendix A The Junction Tree Algorithm

List of Tables

3.1 Higher and Lower Level Action Sets
3.2 An example of the lookup table used in determining the variables that change after an action taken by friendly forces
3.3 An example of the lookup table used in determining the variables that change after an action taken by enemy forces
4.1 Mean Duration of the Battle and its simulated variance for various Starting States

List of Figures

1.1 Structure of Situation Assessment System together with Feedback
3.1 Hierarchical architecture of the adaptive situation assessment system with sensors, integrated tracking system and two levels of Bayesian networks
3.2 Network Centric Warfare information structure with three levels of information sharing
3.3 Three lower level Bayesian networks
3.4 Structure of Higher Level Bayesian Network in Different Modes (P denotes Protagonist and A denotes Antagonist)
3.5 Structure of adaptive situation assessment system together with stochastic feedback and decision making systems
3.6 Interaction of an Abstract Situation Assessment and Decision Systems
4.1 Likelihood of Lower level Bayesian Network in parametrical reconfiguration
4.2 Likelihood of the three different structures of the higher level Bayesian network in structural reconfiguration

List of Acronyms

SA      Situation Assessment
BN      Bayesian Network
MLE     Maximum Likelihood Estimation
NCW     Network Centric Warfare
Infostructure   Information Structure
EM      Expectation Maximization
C2      Command and Control
CPD     Conditional Probability Distribution
CPT     Conditional Probability Table
DAG     Directed Acyclic Graph
VE      Variable Elimination
JPN     Joint Planning Network
JDN     Joint Data Network
JCTN    Joint Composite Tracking Network
RWR     Radar Warning Receiver
ELINT   Electronic Intelligence
IR      Infrared
ESM     Electronic Support Measure
JDL     Joint Directors of Laboratories
IFF     Identification of Friend or Foe
EA      Electronic Attack
MDL     Minimal Description Length

Acknowledgements

I am grateful to Prof. Krishnamurthy for introducing me to such an interesting branch of applied mathematics and engineering, which, most probably, I would continue to work on for the rest of my career.
I am also grateful to him for helping me learn the process of scientific writing through patiently going over numerous revisions of the papers that I wrote.

The support for this work was partially provided by Defence Research and Development Canada. I would like to thank the Countermeasure Techniques group, especially Ms. Christina O'Regan, for their valuable comments and for giving me the opportunity to work on their projects.

My parents have let me make my own decisions since I was very young, and I am grateful for that. I thank my mother for always encouraging me to use my full potential and my father for encouraging me to be a well-rounded person. I thank my beloved Tissa for everything; things that I never feel I have enough space to write about.

FARNOUSH MIRMOEINI

The University of British Columbia
September 2005

Chapter 1

Introduction and Preview

This chapter gives an introduction to our work in solving the problem of situation assessment in airborne conflicts.

1.1 Introduction

Situation assessment (SA) is the highest level of abstraction in understanding the conditions of a battlespace environment. It is the ongoing process of inferring relevant information about the forces of concern in a military situation. It is, in other words, the task of inferring "what is going on" or "what is happening" from data collected by sensors and processed by lower level data fusion and target tracking systems in a battlefield environment. We define a situation as the general state of the battlespace in the combination of circumstances and conditions at a given time. A situation consists of entities, attributes and relationships between these entities. Situation assessment, or situation awareness, is the task of identifying all entities within a situation and their relationships and gaining knowledge and understanding about the battlespace.
It is also the task of assigning significance and priorities to these entities and assessing how well each side is doing in a battlespace scenario. Situation assessment involves the task of making probabilistic inferences on the data collected by sensors.

Advances in electronics and information systems, together with advances in sensor networks and Network Centric Warfare, have facilitated the collection of large amounts of sensor data in a short amount of time in a battlefield. Moreover, the collected data is prone to contain errors and missing values due to sensor malfunction, the enemy forces' deceptive tactics and resource sharing (i.e. a sensor is not always dedicated to observing a certain quantity). The information from hundreds of sensors cannot be reviewed and processed by human operators at the same rate as it is collected; therefore, information systems are needed to summarize and fuse this incomplete and uncertain data into a number of comprehensible hypotheses that military decision makers can act upon. Moreover, as the battlespace situation changes with time, the assessment of the battlespace situation needs to be dynamically adapted to take these changes into account. Hence, an adaptive situation assessment system is needed that can be easily reconfigured as the battlespace situation changes. The goal of a situation assessment system is, therefore, to summarize and fuse the lower level data and generate up-to-date hypotheses about what situation the battlespace is in.

A computational technique that is suitable for situation assessment should be capable of processing different types of uncertain, incomplete and possibly conflicting data. It has to be able to reflect the dependence relationships amongst such data and be able to make inferences on it. Probabilistic reasoning using Bayesian networks is an artificial intelligence technique that satisfies all these requirements.
Bayesian networks (BNs) are an artificial intelligence technique for learning and reasoning with random variables, and they are particularly useful in situation assessment. They are a graphical representation of probability distributions that can combine information and make probabilistic inferences, and they are able to handle causality and uncertainty in data using Bayesian probability theory. Bayesian networks provide a formal method of reasoning over uncertain and partially missing information. A situation assessment system should be able to make inferences on uncertain and incomplete information rapidly and economically, and efficient algorithms for making inferences in Bayesian networks exist. Furthermore, a situation assessment system needs to be adapted to account for changes that occur in the battlespace, and algorithms for efficiently learning Bayesian networks using maximum likelihood estimation (MLE) exist. Bayesian networks are therefore the tool of choice for an adaptive situation assessment system.

1.2 Thesis Overview and Contributions

The main contributions of this work are summarized below:

• A novel hierarchical Bayesian network based situation assessment system (Figure 1.1) is developed that can be reconfigured and adapted to the changes that occur in the battlespace over the duration of the conflict. The Bayesian networks used in situation assessment are hierarchical and are dynamically reconfigured on two timescales to cope with the changes that occur as the battle proceeds. An Expectation-Maximization (EM) type algorithm is used for the reconfiguration of their parameters and structure. This reconfiguration occurs on two different timescales that correspond to two different levels of abstraction. We also show that there is a need for reconfigurable Bayesian networks in situation assessment.

Figure 1.1: Structure of Situation Assessment System together with Feedback

The lower level Bayesian networks evolve on a fast timescale (typically a few seconds) and make low-level hypotheses about the battlefield from data provided by an integrated target tracking system. These lower level Bayesian networks have fixed connections and are reconfigured parametrically on a faster timescale than the higher level network. The higher level Bayesian network represents higher level command and control decision variables (such as enemy resources) that do not have known dependence relationships, and the dependence amongst them changes with time. The higher level Bayesian network is reconfigured both parametrically and structurally, but on a slower timescale (typically minutes). This results in a hierarchical reconfigurable Bayesian network that is able to process data at different levels of abstraction and produce high-level hypotheses about what is happening in the battlespace and how well each side is doing. These hypotheses can be used in decision making as well as in simulations. The situation assessment system presented here uses a hierarchy of Bayesian networks for estimating hidden variables and making inferences. This situation assessment system uses a Network Centric Warfare (NCW) system, in which robust networks for information sharing exist.
NCW uses robustly networked forces in order to achieve improved information sharing capabilities and thus a higher level of situational awareness [1]. These networks have a variable quality of service that affects the level of information sharing in the networks. The situation assessment system provided here also fits in the hierarchy of information networks that form the information structure (infostructure) of the Network Centric warfighting enterprise described in [1].

We extend the situation assessment system by designing a stochastic dynamic model of a battlespace. This model consists of a situation assessment based decision making system and a Markovian model for friendly and enemy forces. The targets are assumed to be "smart" [14]; that is to say, they respond to the actions of the decision maker. The decision making system mimics a Command and Control (C2) system. The effects of these actions on the enemy are measured by sensors, the situation is assessed again and new hypotheses are generated. This process allows us to analyze the battlespace dynamics off-line and predict the probability of winning an engagement and also the expected time to complete an engagement. This decision system has also been used to show the effectiveness of our situation assessment system, and can be replaced with a human decision-maker or other more advanced automatic decision making systems. The architecture provided in this thesis can be used in the simulation of airborne conflicts, and also to construct a library of suitable modules of action under different battlespace circumstances.

1.3 Outline

This thesis is organized in five chapters. In the next chapter we present a literature survey on situation assessment and Bayesian networks. We first talk about the concept of situation assessment and what it entails according to various researchers and then state our own definition of situation assessment.
In the second section of that chapter we introduce Bayesian networks briefly, and in the third section we discuss Network Centric Warfare. At the end of the chapter we review the previous work done in the area of situation assessment using Bayesian networks.

In Chapter 3 we explain the architecture of the hierarchical multi-timescale situation assessment system, provide information about each abstraction level and discuss how Bayesian networks are used in the situation assessment system. We then discuss the reconfiguration algorithms that adapt different levels of the situation assessment system to cope with changes that occur in the battlespace over time. Furthermore, we describe the decision-making system used in this research, and also the mathematical analysis of the situation assessment and decision-making system.

We present some numerical results and examples of our work in Chapter 4. These numerical results show that there is a need for reconfiguration in a situation assessment system, as well as providing numerical examples of our algorithms. Finally, Chapter 5 contains some important conclusions of this research and also guidelines for future work.

Chapter 2

Background and Literature Survey

In this chapter we present background information about several important concepts discussed in this thesis. We start by introducing the concept of situation assessment and stating its elements and properties. We then proceed to defining Bayesian networks and discuss the procedures for making inference on them and also for learning them from data. Section 2.3 introduces Network Centric Warfare and its relationship to our work, and Section 2.4 gives a summary of related work done in the area of situation assessment using Bayesian networks.

2.1 Situation Assessment

In the previous chapter we introduced the concept of situation assessment and stated a working definition of it.
However, it should be noted that there is not universal agreement on the nature of situation assessment and what it entails. Walz and Llinas [24] observed that:

"the SA process is relatively ill defined and many elements enter into a situation. In general, however, we are attempting to perform a contextual analysis of (data fusion) level 1 products (position and identity of individual entities). This next level is thus concerned with force deployments and associated events and activities, knowledge derived from some type of pattern analysis applied to the Level 1 data. Moreover, this knowledge is coupled to a contextual background or setting that may involve both the physical and sociopolitical environments... The SA process therefore is concerned with what is happening and what events or activities are going to happen... it is focused on the behavioral aspects within the area of interest ... In SA force deployments are considered in the context of environmental effects (e.g. terrain and weather)... (SA) can also be thought of as a means to estimate the enemy battle plan; that is, what the enemy is doing (activities) and attempting to achieve (intent, goals)."

Blackman and Popoli [3] observe that some authors assign different identities to situation and threat assessment. Walz and Llinas [24] also state that:

"...whereas situation assessment established a view of activities, events, maneuvers, locations and organizational aspects of force elements and from that estimates what is happening or going to happen, whereas threat assessment estimates the degree of severity with which engagements are to occur."

Blackman and Popoli [3] further state that:

"The possible scope of situation assessment is enormous... situation awareness serves as a primary input into sensor management ... SA provides this contextual information (varying tactical requirements) for the sensor management system ...
SA serves to recognize patterns of behavior in tracked objects ... SA designs must strive to relieve the workload of detailed control while preserving the ultimate control and authority of the operator ... a more overall role of SA is to act as a decision aid for the operator."

They go on to state that feedback from a situation awareness capability to the underlying sensor and data fusion system would afford "far reaching" benefits:

"Situation Assessment (SA) can assist in target acquisition performance by recognizing potential multi-element enemy tactics ... SA can help cue the sensor management system to look for an as yet undetected likely participant ... SA can help recognize the relative unimportance of maintaining individual tracks ... SA could then aid the tracking data association process by authorizing a group track and interpreting conflicting spatially unresolvable ID signatures as a tactical identifier for the group as a whole. SA can recognize the importance of establishing the strength of a potentially attacking formation and cue the sensor management for a raid count assessment."

There is, however, a lack of a total system approach to the problem of situation assessment. In our research, we have tried to come up with a definition of situation assessment that encompasses as many of the most abstract quantities of battlespace conditions as possible. This definition treats the battlespace situation as a quantity (though unobservable) that can be inferred from other observable quantities in a battlespace environment. This definition of situation assessment has led us to a situation assessment system that deals with these quantities in a hierarchical manner. This hierarchical architecture enables us to reconfigure different levels of abstraction on different timescales. We state our definition of situation assessment here once more: We define a situation as the general state of the battlespace in the combination of circumstances and conditions at a given time.
A situation consists of entities, attributes and relationships between these entities. Situation assessment or situation awareness is the task of identifying all entities within a situation and their relationships and gaining knowledge and understanding about the battlespace. It is also the task of assigning significance and priorities to these entities and assessing how well each side is doing in a battlespace scenario.

2.2 Bayesian Networks

In the previous section we explained why Bayesian networks are useful in situation assessment. We now explain more about Bayesian networks and their properties and elements. Bayesian networks are a powerful tool for making inferences from inconsistent and incomplete sources of data. They are a graphical representation of probability distributions that can be learned from data. Their capability to be learned from incomplete and possibly conflicting sources of data, together with their capability to be reconfigured as the environment changes, makes them an ideal candidate for situation assessment.

A Bayesian network (BN) is a model. It reflects the states of some part of a world that is being modeled and it describes how those states are related by probabilities. All the possible states of the model represent all the possible worlds that can exist, that is, all the possible ways that the parts or states can be configured. Typically, some states will tend to occur more frequently when other states are present. For example, if an unidentified aircraft is using fire control radar, it is more likely a fighter aircraft than a commercial carrier.

Bayesian networks are networks of relationships. They encode the probabilistic relationships among several random variables. They consist of nodes and directed links. The nodes of a Bayesian network are random variables, which can be discrete or continuous (though we deal solely with discrete nodes in this work).
The direction of the link arrows roughly corresponds to "causality": the nodes higher up in the diagram tend to influence those below rather than, or at least more so than, the other way around. More precisely, the direction of an arrow represents the conditional probability of inferring the state of a particular node given the states of its parent nodes. In addition to the graph structure, it is necessary to specify the parameters of the model. For a directed model, we must specify the Conditional Probability Distribution (CPD) at each node. If the variables are discrete, this can be represented as a conditional probability table (CPT), which lists the probability that the child node takes on each of its different values for each combination of values of its parents. We can now present a mathematically rigorous definition of Bayesian networks:

A Bayesian network is a pair B = <G, Θ>. G is a directed acyclic graph (DAG) that encodes the dependence relationships, in the sense that the nodes represent elements of U and the directed connections represent the conditional probability of inferring the state of a particular node given the states of its parent nodes Π_{X_i}. G also encodes the conditional independence assumption that each node X_i is conditionally independent of the rest of the network given its direct parents Π_{X_i}. Θ represents the set of parameters that quantify the network: for each node X_i it contains the conditional probability table (CPT), which in turn contains the conditional probability of the node given its parents. Parameters are of the form θ_{x_i | Π_{x_i}} = P(x_i | Π_{x_i}) for each possible value of X_i and Π_{X_i}. Therefore, a Bayesian network B defines a joint probability distribution over U that can be factored as follows:

    P_B(X_1, \ldots, X_n) = \prod_{i=1}^{n} P_B(X_i \mid \Pi_{X_i})    (2.1)

The most common task we wish to solve using Bayesian networks is probabilistic inference.
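The factorization (2.1) can be made concrete with a toy two-node network. The sketch below is purely illustrative: the node names (Type, Emitter) and all probability values are invented, echoing the fire control radar example earlier in this section.

```python
# Toy illustration of the factorization in (2.1): a two-node network
# Type -> Emitter. All numbers below are invented for illustration.
p_type = {"fighter": 0.3, "commercial": 0.7}           # P(Type), root node

p_emitter_given_type = {                               # CPT P(Emitter | Type)
    "fighter":    {"radar_on": 0.80, "radar_off": 0.20},
    "commercial": {"radar_on": 0.05, "radar_off": 0.95},
}

def joint(t, e):
    """P(Type=t, Emitter=e) = P(t) * P(e | t), the product form of (2.1)."""
    return p_type[t] * p_emitter_given_type[t][e]

# The factored joint sums to 1 over all configurations.
total = sum(joint(t, e) for t in p_type for e in ("radar_on", "radar_off"))

# Bayes rule recovers the posterior: observing fire control radar makes
# "fighter" far more likely than "commercial".
evidence = sum(joint(t, "radar_on") for t in p_type)
posterior_fighter = joint("fighter", "radar_on") / evidence
```

Even this two-node case shows the economy of the representation: the joint distribution is never stored explicitly; it is always recovered from the prior and the CPT.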
Inference involves calculating the conditional probability of a set of nodes given the known state of another set of nodes in a Bayesian network. The underlying principle of the process of inference is Bayes rule, which gives a way to calculate an unknown conditional probability based on other known conditional probabilities. Let B_1, B_2, \ldots, B_n be a partition of a sample space S, so that \bigcup_{i=1}^{n} B_i = S and B_i \cap B_j = \emptyset for all i \ne j. Suppose that event A occurs; we would like to know the probability of event B_k, that is, P(B_k \mid A). According to Bayes rule we have [18]:

    P(B_k \mid A) = \frac{P(A \mid B_k) P(B_k)}{\sum_{j=1}^{n} P(A \mid B_j) P(B_j)}    (2.2)

Bayes rule is highly useful in making inferences in Bayesian networks, and this is the reason they are named after Bayes. The simplest algorithm for making inference in Bayesian networks is the Variable Elimination (VE) algorithm, which uses Bayes rule directly to calculate conditional probabilities. Other, more efficient algorithms exist for making inference in Bayesian networks; one of the most efficient is the Junction Tree algorithm, which is explained in the Appendix.

Bayesian networks, as mentioned earlier in this section, have the capability to be learned and updated to model the current state of the world. Learning is one of the most important problems in Bayesian network theory and involves the estimation of the probability tables and of the existence of connections from data. Learning Bayesian networks has two aspects: parametrical learning and structural learning. Parametrical learning deals with determining the parameters Θ, or the conditional probability tables (CPTs), from data. Structural learning deals with learning the causal relationships that exist among the random variables that comprise the nodes of the Bayesian network, and also determining which random variables are useful as nodes in a Bayesian network.
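For complete discrete data, parametrical learning by maximum likelihood reduces to normalized counting: the MLE of a CPT entry P(x | parents) is the count N(x, parents) divided by N(parents). The sketch below illustrates this on an invented toy dataset with hypothetical variable values; it assumes complete data, unlike the EM setting needed when observations are missing.

```python
from collections import Counter

# Parametrical learning sketch: with complete discrete data, the maximum
# likelihood estimate of a CPT entry P(child | parent) is simply
# N(parent, child) / N(parent). The (parent, child) records are invented.
data = [
    ("fighter", "radar_on"), ("fighter", "radar_on"), ("fighter", "radar_off"),
    ("commercial", "radar_off"), ("commercial", "radar_off"), ("commercial", "radar_on"),
]

pair_counts = Counter(data)                     # N(parent value, child value)
parent_counts = Counter(t for t, _ in data)     # N(parent value)

def mle_cpt(parent, child):
    """MLE of theta_{child|parent} = P(Emitter=child | Type=parent)."""
    return pair_counts[(parent, child)] / parent_counts[parent]
```

Because the estimate is a ratio of counts, each CPT row automatically sums to one, as a conditional distribution must.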
We will explain Bayesian network learning algorithms in the next chapter, where we use them to reconfigure the Bayesian networks in the situation assessment system.

2.3 Network Centric Warfare

Network Centric Warfare (NCW) is a new scheme introduced by the US Armed Forces' Joint Vision 2010. Recent developments in information technology have caused a new conceptual scheme to be proposed in order to achieve information superiority. Information superiority is a state that is achieved when a competitive advantage is attained from a superior information position [1]. Network Centric Warfare uses robustly networked forces in order to achieve improved information sharing capabilities and thus shared situational awareness. We use the concept of Network Centric Warfare in our research by assuming that a network of sensors and also a network for higher level information sharing exist.

2.4 Related Work

A number of papers discuss using Bayesian networks in situation assessment. Kott et al. present a general overview of the situation assessment problem and discuss various research methods that can be used to approach the problem [13]. Bladon et al. advocate the usability of graphical models in situation assessment and show that they satisfy various requirements such as robustness [4]. Several papers present the use of Bayesian networks in situation and threat assessment. Das et al. discuss the merits of Bayesian network technology as compared to other reasoning methods, introduce several scenarios and construct Bayesian networks for them [7]. Das and Lawless [8] also propose a truth maintenance system to check for information consistency in Bayesian networks when they are used in situation assessment. Nguyen discusses the application of Bayesian networks in threat assessment and breaks down threat assessment into capability and intent assessment [20]. He then introduces a sample network for intent assessment [20].
Okello and Thorns formulate the threat assessment problem and give a simplified but mathematically rigorous approach to it, using both discrete and continuous variables in the Bayesian network they propose [21]. Laskey and Laskey propose a Bayesian network approach to the problem of combat identification [16]. McMichael discusses a statistical approach to situation assessment using Bayesian networks and dynamic programming techniques [19]. Suzic uses Bayesian networks in enemy policy recognition [22].

All these researchers, however, treat situation assessment as a static problem and do not use any kind of reconfiguration. This results in situation assessment systems that are not true representatives of what goes on in a battlespace, since the parameters and structure of their Bayesian networks are always constant. In contrast, we present a hierarchy of Bayesian networks for situation assessment that can be reconfigured on two timescales. The situation assessment system developed in our research also uses the broadest meaning of situation assessment, instead of focusing only on threat assessment or plan recognition. We also use the concepts of Network Centric Warfare in building our situation assessment system.

Chapter 3

A Hierarchical Multi-Timescale Situation Assessment System Using Bayesian Networks

In this chapter we present the hierarchical multi-timescale situation assessment system we developed. We first explain the structure of this situation assessment system and then describe how the Bayesian networks in it can be reconfigured. In the third section of this chapter we present a stochastic feedback system that makes decisions in the battlespace.
Figure 3.1: Hierarchical architecture of the adaptive situation assessment system with sensors, integrated tracking system and two levels of Bayesian networks

3.1 A Hierarchical Multi-Timescale Architecture for Situation Assessment System

Consider a battlefield situation that comprises friendly and enemy forces and their platforms, assets and interactions. At each time instant, sensors record measurements from the battlefield. The data collected by sensors is plentiful but usually low in information content; furthermore, this data cannot be used directly in a situation assessment system because it is noisy and does not contain all quantities that represent the battlespace situation (since some quantities are not directly measurable and some are not observable all the time). It therefore needs further processing before it can be used in decision making.

The adaptive situation assessment system consists of five functionality blocks, as outlined in Figure 3.1. The function of this hierarchical adaptive situation assessment system is to assess the situation by evaluating the likelihood of different hypotheses. The lowermost block is the sensor system, which collects data using both passive (e.g. infrared) and active sensors (e.g. radar). The second block is the integrated tracking system (integrated tracker), which processes the sensor readings. The integrated tracker is mainly responsible for processing the incoming noisy sensor data and feeding it to the Bayesian networks for inference.
We will explain the integrated tracker fully in the next section. There are two blocks of Bayesian networks, all of which perform adaptive situation assessment. The reason we have two different levels of Bayesian networks is that these two levels operate on different timescales, and therefore they cannot be integrated into one large Bayesian network. For this situation assessment system, we introduce a multi-timescale model where each block processes data on a faster timescale than the ones above it. We call these timescales τ_1, τ_2, τ_3 and τ_4 (τ_1 ≪ τ_2 ≪ τ_3 ≪ τ_4), whose inverses correspond, respectively, to the frequency at which the sensors input data to the integrated tracker, the frequency at which the integrated tracker outputs samples to the lower level Bayesian networks, the rate at which the lower level Bayesian networks feed inferences to the higher level network, and the rate at which the situation changes. These timescales are shown in Figure 3.1.

[Figure 3.2: Network Centric Warfare information structure with three levels of information sharing]

The function of the situation assessment system is to produce concrete and tangible hypotheses that can be used in command and decision making. These hypotheses are derived by summarizing lower level data from sensors and the integrated tracking system. The Bayesian networks are responsible for estimating the states of hidden variables as well as making inferences from data provided by sensors and the integrated tracking system. These Bayesian networks are reconfigured periodically in order to provide hypotheses that best reflect the current state of the battlespace. The following describes the components of this situation assessment system in detail.

3.1.1 Sensor System and Integrated Tracker

Sensors provide data about objects automatically or under direction.
They can provide many kinds of attribute (such as type) and kinematic measurements (such as velocity and altitude) about a target. Sensors usually perform basic signal processing tasks to isolate desired information, using prior knowledge about the statistics of the target energy and the interference energy to detect the presence of a target and its attribute and kinematic measurements [3]. We assume a variety of active and passive sensors, such as radar, sonar, radar warning receivers (RWRs), electronic intelligence (ELINT) receivers, infrared (IR) sensors, electronic support measure (ESM) systems, etc., are available to us. The incoming sensor readings are fed into the integrated tracker. The integrated tracker represents all the systems used to process the incoming data samples fed by the sensors and to make them suitable for use in the Bayesian networks. These tasks include filtering noise from the data; synchronizing the data, since different sensors output data at different frequencies; data association, to determine which piece of data belongs to which object; object identification; target tracking, to establish the historic and future positions of an object; and level 1 data fusion in the Joint Directors of Laboratories (JDL) data fusion scheme [24]. Since the focus of our work is on adapting the Bayesian network component of the assessment system, and because in a real-world scenario sensors, trackers and data fusion systems are available, in this study we only model the output of the integrated tracker and make an abstract model of the lower layer of the adaptive situation assessment system. This abstraction is shown in Figure 3.1 by the dotted rectangle. To model the output of the integrated tracker we need to simulate kinematic and attribute data.
We model the kinematic data by discretizing the state equation in a white noise acceleration model, where the corresponding target state vector z is [2]

z = [ξ, ξ̇]^T    (3.1)

where ξ is the target's position, ξ̇ is its velocity, and we assume all higher derivatives are zero. The target's time state equation is therefore

z(k + 1) = F z(k) + G u(k) + v(k)    (3.2)

where v is a zero-mean white noise acceleration sequence with known covariance and u is an unknown input modeling target maneuvers. We then model the target dynamics as a discrete finite-state Markov chain with a small state space. The dynamics of the targets can be modeled as a Markov chain since all the previous states are summarized in the current state of the target, so at each time instant the state of the target depends only on its state at the previous time instant. Markov chains are stochastic processes in which the future of the process is independent of the past given the present; in other words,

P(z(i + 1) | z(i), z(i − 1), ...) = P(z(i + 1) | z(i))    (3.3)

The transition probability matrix of a Markov chain is a square matrix that contains the one-step transition probabilities of the chain. For more information on Markov chains see [23]. To model the attribute data we need to model the sensors' quality of service, which we model as discrete Markov chains as well. The data, however, may not be fully observable in a real-world scenario. We model this by making the corresponding Markov chains only partially observable. Since some of the kinematic and attribute data depend on others, we need to model the output of the integrated tracker as dependent Markov chains and not independent ones.
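As a minimal illustration of the discretized model in Eq. (3.2), the sketch below propagates a one-dimensional target state z = [position, velocity] one sampling interval. The sampling period T and the process-noise intensity q are illustrative values, not taken from the thesis.

```python
import numpy as np

T = 1.0                             # sampling period (assumed)
F = np.array([[1.0, T],
              [0.0, 1.0]])          # propagates [position, velocity]
G = np.array([T ** 2 / 2.0, T])     # gain on the unknown maneuver input u
q = 0.01                            # process-noise variance (assumed)

def step(z, u=0.0, rng=None):
    """Propagate the target state one interval: z(k+1) = F z(k) + G u(k) + v(k)."""
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.normal(scale=np.sqrt(q), size=2)   # zero-mean white noise v(k)
    return F @ z + G * u + v

z0 = np.array([0.0, 10.0])   # start at the origin with 10 m/s velocity
z1 = step(z0)                # position advances by roughly T * velocity
```

Running `step` repeatedly yields a sample trajectory whose increments match the white-noise acceleration model.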
These Markov chains are dependent in the same manner as the variables that form the nodes of the Bayesian networks. In the lower level Bayesian networks these dependence relationships are fixed and never change, so we generate samples from dependent Markov chains whose dependence relationships never change.

We assume a robust hierarchy of information-sharing networks exists. These networks are shown in Figure 3.2. This sharing of information occurs at different levels, where each level has an associated quality of service. Information sharing at the sensor level is done via the Sensor Network or the Joint Composite Tracking Network (JCTN). As shown in the figure, these networks work on the same timescales as the levels of our situation assessment system. We denote the quality of service of the JCTN level by P_JCTN. The next level of information sharing is the Joint Data Network (JDN), which we assume has a variable quality of service P_JDN. The quality of service of the JDN and JCTN, P_JDN and P_JCTN, are modeled as discrete random variables according to which the data generated by the integrated tracker should be weighted. The quality of service of the JCTN affects the correctness of information provided by the integrated tracking system: if P_JCTN is high, the information is more reliable, and if P_JCTN is low, the information provided by the integrated tracker is likely to contain errors. Furthermore, P_JDN provides the quality of service at the Joint Data Network level. This quality of service is used to evaluate the correctness of data samples at the output of the averaging device. Similar to P_JCTN, a high P_JDN means more reliable data samples and a lower P_JDN indicates more erroneous data. Figure 3.1 shows how the situation assessment system provided here and the Network Centric Warfare infostructure relate.
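A quality-of-service variable such as P_JDN, modeled as a discrete Markov chain as described above, can be sketched as follows. The two states ("high", "low") and the transition probabilities are invented for the example.

```python
import numpy as np

# Two-state quality-of-service Markov chain: state 0 = "high", 1 = "low".
# Transition matrix rows sum to one; the entries are assumed values.
P = np.array([[0.9, 0.1],    # high -> {high, low}
              [0.3, 0.7]])   # low  -> {high, low}

def simulate_qos(steps, start=0, seed=0):
    """Sample a trajectory of the quality-of-service chain."""
    rng = np.random.default_rng(seed)
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choice(2, p=P[state])   # one-step transition
        path.append(state)
    return path

path = simulate_qos(100)   # a length-101 trajectory of 0s and 1s
```

Data samples produced while the chain sits in the "low" state would be weighted down (or flagged as potentially erroneous) before being used for learning.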
We assume a network of sensors, and also a network linking the integrated tracking system (which in itself is a network of various data processing systems, as previously noted) and the sensors, exist. These are included in the Joint Composite Tracking Network. The data samples generated by the integrated tracking system, together with the hierarchy of Bayesian networks, form the Joint Data Network (JDN). The hypotheses generated by the higher-level Bayesian network, and also other planning systems that might use these hypotheses, form the Joint Planning Network. The output of the integrated tracker is a stochastic process that at each time instant contains samples of the random variables used as the lower level Bayesian networks' nodes. These random variables are chosen to be the quantities that represent the battlefield situation best. Depending on the computational resources available and the desired level of detail, we can choose the number of lower level Bayesian networks r and the number of variables in each, n_i, where 1 ≤ i ≤ r. We represent the set of these variables by U_i = {X_1, ..., X_{n_i}}. In our design we have chosen r = 3, with n_1 = 17, n_2 = 8 and n_3 = 6. The Bayesian networks comprising these variables are elaborated in the next section and are shown in Figure 3.3. The sets of variables that comprise these Bayesian networks are called U_1, U_2 and U_3. U_1 consists of the target's intent, class, ID, altitude, heading, range, range rate (speed), air-to-air (AA), air-to-surface (AS) and fire control (FC) radars, communications, identification of friend or foe (IFF) squawking, whether or not a flight plan is submitted that matches the track, whether or not the target is coordinating with other targets, whether or not spoofing and other deceptive tactics are used, and finally the threat level.
The variables in U_2 are the strength of forces, disposition of forces, resources of the antagonist, number of options available to them, firepower, mobility and maneuverability. Finally, U_3 consists of our use of decoys and dummies (DD), our use of diversionary tactics (DT), our use of electronic attack (EA), the antagonist's electronic intelligence (ELINT), concealment, and the amount of information the antagonist has. These variables are usually some form of average taken over all of the antagonist's arms; e.g. firepower would be a weighted average of the firepower of all the arms available to the antagonist. It should be noted that this is merely an example to show how a situation assessment system using Bayesian networks is constructed. Based on the computational resources, the specific battlespace conditions and the desired degree of detail, a designer can choose any set of variables that represents the battlespace situation best. At each instant k, the integrated tracker outputs a data sample x^k, which is a vector containing instances of the variables mentioned above. These data samples are used as training data for reconfiguring the Bayesian networks; furthermore, they can be posted as evidence on the Bayesian networks to make inferences as to what would happen later on in the battlespace if certain conditions are met. The integrated tracker also feeds some data samples to the higher level Bayesian network directly via the averaging device. The reason is that these data samples do not need any processing in the lower level Bayesian networks after the integrated tracker has processed them; therefore they are passed directly to the higher level Bayesian network. The function of the averaging device is to synchronize these data samples (by taking averages) so that they arrive at the higher level Bayesian network at the same rate as the data samples from the lower level Bayesian networks.
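The averaging device described above can be sketched as a simple rate converter: it collapses every batch of fast-rate samples into one slow-rate sample by averaging. The rate ratio used here is illustrative, not from the thesis.

```python
def average_batches(samples, ratio):
    """Average consecutive batches of `ratio` fast-rate samples,
    producing one slow-rate sample per batch."""
    return [sum(samples[i:i + ratio]) / ratio
            for i in range(0, len(samples) - ratio + 1, ratio)]

out = average_batches([1, 2, 3, 4, 5, 6], 3)   # -> [2.0, 5.0]
```

With a ratio of τ_3/τ_2 fast samples per slow sample, the output stream is synchronized with the inferences arriving from the lower level Bayesian networks.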
The function of the integrated tracker, in summary, is to process sensor data and to make it suitable for use in the Bayesian networks. We next discuss how the data samples provided by the integrated tracker are used in the Bayesian networks.

3.1.2 Hierarchical Bayesian Networks for Situation Assessment

The training data and evidence generated by the integrated tracker (see Figure 3.1) are fed into the lower level Bayesian networks. In this section we give an overview of Bayesian networks and how they are used in our adaptive situation assessment system. We use Bayesian networks in adaptive situation assessment since they can represent dependence relationships between uncertain and incomplete information, which is otherwise very hard to capture. Moreover, in a situation assessment system we need to make inferences based on the data provided by sensors, and efficient algorithms for doing so in Bayesian networks exist; furthermore, the inferred probabilities are consistent with Bayesian probability theory. Also, Bayesian networks can be learned from data and tuned if necessary, which is needed in assessing a battlespace situation that is time-varying. Moreover, it is quite easy to extend a Bayesian network by the inclusion of new variables and connections. Bayesian networks are able to handle incomplete data and hidden variables, which is needed in a situation assessment system because not all variables that represent the battlespace situation are observable. Although the integrated tracker processes and summarizes the sensor data, it is not able to make inferences or handle hidden variables, so we need Bayesian networks to perform these tasks. For clarity, we repeat the definition of a Bayesian network here. We formulate a generic Bayesian network (at the higher or lower level) for situation assessment as follows.
Consider the set of n random variables¹ U = {X_1, ..., X_n}, where each X_i represents a variable output by the integrated tracker. As stated in the previous chapter, a Bayesian network is a pair B = <G, Θ>. G is a directed acyclic graph (DAG) that encodes the dependence relationships, in the sense that the nodes represent elements of U and the directed connections represent the conditional probability of inferring the state of a particular node given the states of its parent nodes. G also encodes the conditional independence assumption that each node X_i is conditionally independent of the rest of the network given its direct parents. Θ represents the set of parameters that quantify the network: for each node X_i it contains the conditional probability table (CPT), which in turn contains the conditional probability of the node given its parents. Parameters are of the form θ_{x_i|Π_{x_i}} = P(x_i | Π_{x_i}) for each possible value of x_i and Π_{x_i}. Therefore, a Bayesian network B defines a joint probability distribution over U that can be factored as follows:

P_B(X_1, ..., X_n) = ∏_{i=1}^{n} P_B(X_i | Π_{X_i})    (3.4)

Various algorithms exist to make inferences in a Bayesian network. In this thesis we use the junction tree algorithm (see appendix). It makes inferences by constructing a tree of cliques from the Bayesian network's DAG and carrying out a message passing algorithm on this tree [12]. This algorithm is fully explained in the Appendix.

¹ Capital letters, such as X, Y, Z, denote variable names, and lowercase letters, x, y, z, denote specific states of these variables. Sets of variables are denoted by boldface letters, X, Y, Z.

[Figure 3.3: Three lower level Bayesian networks. Panel (c) shows the network used to assess threat, with nodes Intent, Range, Comms, IFF and Threat.]

Lower Level Bayesian Networks

The training data and evidence generated by the integrated tracker (see Figure 3.1) are fed into the lower level Bayesian networks.
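The factorization of Eq. (3.4) can be illustrated with a toy two-node network X1 → X2 of binary variables. The CPT entries below are invented for the example; a real lower level network has many more nodes.

```python
p_x1 = {0: 0.6, 1: 0.4}                    # P(X1)
p_x2_given_x1 = {0: {0: 0.8, 1: 0.2},      # P(X2 | X1 = 0)
                 1: {0: 0.3, 1: 0.7}}      # P(X2 | X1 = 1)

def joint(x1, x2):
    """P(X1, X2) = P(X1) * P(X2 | X1): the product over nodes in Eq. (3.4)."""
    return p_x1[x1] * p_x2_given_x1[x1][x2]

# the factored joint distribution sums to one over all configurations
total = sum(joint(a, b) for a in (0, 1) for b in (0, 1))
```

Each node contributes one factor, P(X_i | Π_{X_i}), so the joint over all configurations is fully determined by the CPTs.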
In our design we use a fixed structure for these networks, shown in Figure 3.3. The variables chosen to represent these Bayesian networks are quantities with known and fixed dependence relationships. The parameters Θ of these networks, however, can change, and we need to adapt these parameters to changes that occur in the battlespace. We compute these parameters using the MLE learning algorithms introduced in the next section. The purpose of the lower level Bayesian networks is to infer the variables used in the higher level network. Since it is not possible to know the state of these variables from the processing devices inside the integrated tracker, we use the lower level Bayesian networks to determine their states. This is done as follows: we estimate the state of the hidden variables using an MLE algorithm. We first learn the parameters of the lower level Bayesian networks using the EM algorithm from the data provided by the integrated tracker. We then make inferences about the hidden variables in the lower level Bayesian networks, based on the samples we have from the observed nodes and the current CPTs of the lower level Bayesian networks. This results in a probability vector containing the probability of the hidden variable being in each state it can take. We then pick the highest probability and assign the corresponding state to the hidden variable, i.e. we choose the index j for which p_j is the largest and set the value of X_i equal to x_j. Now we use these data, together with the data fed by the integrated tracker via the averaging device, to learn the parameters and structure of the higher level Bayesian network. In this way we have all the data we need to reconfigure and learn both the higher and lower level Bayesian networks in the adaptive situation assessment system.
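The hidden-variable assignment step just described reduces to an argmax over the inferred posterior vector. The example posterior over three threat levels is invented.

```python
def assign_state(posterior):
    """Return the index j for which p_j is largest."""
    return max(range(len(posterior)), key=lambda j: posterior[j])

posterior = [0.2, 0.5, 0.3]        # e.g. P(threat = low/medium/high)
state = assign_state(posterior)    # -> 1, i.e. "medium"
```

The selected state is then passed upward as if it were an observed value for that node when training the higher level network.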
Higher-Level Bayesian Networks

The higher level of the situation assessment system consists of a Bayesian network that represents the most abstract quantities in the battlefield. The dependence relationships (existence of arcs in the corresponding Bayesian network) among these variables, as well as the degrees of these dependencies (parameters in the corresponding Bayesian network), are not fixed and change with time. These changes, however, happen very slowly, with frequency λ = 1/τ_4 (τ_3 ≪ τ_4). We assume there are three different states in a battlespace scenario: offence, in which the friendly forces are engaged in an offensive action; defence, in which the friendly forces are engaged in a defensive action; and passive, in which neither the friendly forces nor the enemy forces are engaged in an active conflict. The higher level Bayesian network's structure is a direct function of the state of the battlespace, and therefore it changes when the state of the battlespace changes. The three possible structures for this Bayesian network are shown in Figure 3.4. In these networks, P and A indicate friendly forces (protagonist) and enemy forces (antagonist) respectively.

[Figure 3.4: Structure of the Higher Level Bayesian Network in Different Modes — (a) offence condition, (b) defence condition, (c) passive condition. (P denotes Protagonist and A denotes Antagonist)]

The random variables that represent the nodes in this Bayesian network are the friendly and enemy forces' potential damage, the threat posed to them by the other party, the amount of information available to them, their amount of resistance, their resources, and finally the situation. This amounts to n = 11 nodes. All these nodes are discrete-valued and partially observable, except for the node situation, which is completely hidden.
The variable situation contains information about how well the friendly forces are doing in the battlespace. Our goal at this stage is to estimate the state of the battlespace situation; in other words, we need to determine how well we are doing. The state of situation, together with the information available from inferring other variables as desired, gives a thorough assessment of the battlespace situation and enables us to generate relevant hypotheses about the battlespace conditions. The training data for some of these nodes is inferred from the lower level Bayesian networks. The data for the others (except the node situation) is fed from the integrated tracker through the averaging device. The averaging device computes the average of each incoming batch of data fed to it; its purpose is to make sure the data samples fed to the higher level Bayesian network arrive at the same frequency. Since the state of the battlespace is time-varying, we need to adapt both the structure and the parameters of the higher level Bayesian network. We explain how this adaptation is done in the next section.

3.2 Multi-Timescale Reconfiguration of the Bayesian Networks for Adaptive Situation Assessment

In this section we present the MLE algorithms used in learning and reconfiguration of the situation assessment system. We first present the parametrical re-estimation algorithm in the context of a Bayesian network at the lower level, and then explain the reconfiguration algorithm used at the higher level network. The algorithm for re-estimating the parameters is the same in the higher and lower level Bayesian networks; the algorithm to determine the correct structure is only executed at the higher level.

3.2.1 Reconfiguration of the Lower Level Bayesian Networks

At each instant k, the integrated tracker outputs a data sample x^k, which is a vector of dimension n.
As data samples x^k from the integrated tracker accumulate, the likelihood of the current model drops from its maximum, since we calculate the likelihood of the model given the current data but the model's parameters were maximized with respect to a previous set of data. After a while, therefore, we need to change Θ to obtain a model with an acceptable fit. This is done by learning the parameters of the Bayesian network periodically using the Expectation-Maximization (EM) algorithm. The EM algorithm is an iterative optimization approach that can be used to estimate some unknown parameter Θ given data [9]. As every data batch of size N arrives from the integrated tracker, we run the MLE algorithm explained below to recalculate Θ. We then use the new network B = <G, Θ> (where G is fixed) as the current model until the next N data samples accumulate. We use Algorithm 1 to re-estimate the parameters of the Bayesian network.

Algorithm 1 Parametrical Re-estimation Algorithm
  Set Θ to be the initial parameter set.
  Read data case x^k.
  If k mod N = 0
    Run the EM algorithm over the last N data cases to get new Θ
    Output Θ

Next we explain how the parameter Θ can be updated using the EM algorithm. Given a training set D = {x^1, ..., x^N} of N data samples, we would like to find the network B that best matches the data in D. The log-likelihood of B given D is

L(B|D) = Σ_{k=1}^{N} log P_B(x^k)    (3.5)

since G is fixed. Since we have missing values and hidden variables in our system, we need to compute the maximum likelihood approximation for the network parameters Θ using some iterative approach. Similar to Friedman [10], Chickering and Heckerman [6] and Lauritzen [17], we employ the EM algorithm to find a local maximum. It is worthwhile to introduce the concept of sufficient statistics [6] before we proceed to explaining the re-estimation algorithm. Let N_X(x) be the number of cases where X = x; we call N_X the sufficient statistics for X.
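Before detailing the computation, here is a toy illustration of the EM update at the heart of Algorithm 1: a single binary node X whose value is sometimes missing (None). A missing sample contributes its posterior probability, which here is simply the current parameter value, to the expected count of X = 1. The data are invented; with values missing at random, EM converges to the relative frequency among the observed cases.

```python
data = [1, 1, 0, None, 1, None, 0, 1]   # invented samples; None = missing

def em(theta, iters=100):
    for _ in range(iters):
        # E-step: expected sufficient statistic E[N_{X=1}]; each missing
        # sample contributes its posterior probability of being 1 (= theta)
        expected_n1 = sum(theta if x is None else x for x in data)
        # M-step: re-estimate theta from the expected count
        theta = expected_n1 / len(data)
    return theta

theta_hat = em(0.5)   # converges to 4/6, the observed relative frequency
```

Each iteration improves (or leaves unchanged) the likelihood, matching the convergence guarantee cited for the EM algorithm below.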
When X_i and all its parents are observed, the computation is the trivial addition of zeros and ones. Otherwise we need to use a Bayesian network inference algorithm to compute the probabilities. In this research the junction tree algorithm [12] has been used; for more information on this algorithm see the Appendix. E_{P(x|D,Θ,B)}(N_{x_i,Π_{x_i}}) denotes the expected sufficient statistics given the current state of the parameters, the model and the data.

If all data points x^k in the training set D are complete (i.e. they describe the values of all variables in each sample), it can be shown [11] that the likelihood is maximized when

θ_{x_i|Π_{x_i}} = N_{x_i,Π_{x_i}} / N_{Π_{x_i}}    (3.6)

In the case of incomplete data, however, we need to calculate the parameters Θ using the EM algorithm. The EM algorithm enables us to estimate the most likely state of the hidden variables based on the states of the observable variables. We first explain what the EM algorithm is, and then proceed to describe how it is used in calculating the parameters of a Bayesian network from incomplete data samples, using expected sufficient statistics instead of real sufficient statistics. The EM algorithm is a technique for finding maximum likelihood estimates from incomplete data. Consider a probability density function p(x|Θ) that is based on a set of parameters Θ. Further, consider a data set X of size n which consists of data samples {x_1, x_2, ..., x_n} from this density. In the maximum likelihood estimation problem, our goal is to find the parameters that maximize a likelihood function.
The maximum likelihood parameter estimate is defined as

Θ* = argmax_Θ L(Θ|X)    (3.7)

where Θ is the parameter, Θ* is the maximum likelihood estimate of the parameter, which we wish to find, and L is the likelihood of the parameters given the data, defined as

L(Θ|X) = p(X|Θ) = ∏_{i=1}^{n} p(x_i|Θ)    (3.8)

Usually, however, instead of directly maximizing L(Θ|X) we maximize log(L(Θ|X)), since using logarithms makes working with the equations easier. Depending on the form of p, different approaches to finding the maximum likelihood estimate can be taken. The EM algorithm is one such technique. It is used for data sets that are incomplete. It was first introduced by Dempster [9] and is a general way of estimating the maximum likelihood parameters for data sets that are incomplete or have missing values. We assume the observed data is an incomplete data set X sampled from an underlying density p(x|Θ). We define Z to be the complete data set that contains X and also the values missing from it, Y. In other words, Z = (X, Y). The joint p.d.f. for Z would therefore be

p(Z|Θ) = p((X, Y)|Θ) = p(Y|X, Θ) p(X|Θ)    (3.9)

The equation above is a result of the chain rule of probability. We can now define the complete data likelihood to be

L(Θ|Z) = L(Θ|X, Y) = p(X, Y|Θ)    (3.10)

We now explain the EM algorithm, which consists of two steps [9]: the Expectation or E-step and the Maximization or M-step. We first explain the expectation step and then proceed to the maximization step. In the expectation step, we calculate the expected value of the complete (auxiliary) likelihood with respect to the unknown data given the observed data. We define the complete (auxiliary) likelihood as

Q(Θ, Θ^(k−1)) = E{log(p(X, Y|Θ)) | X, Θ^(k−1)}    (3.11)

where Θ^(k−1) is the current estimate of the parameters and Θ is the new set of parameters. The M-step involves maximizing the expected auxiliary likelihood over all values of Θ.
In mathematical terms we have:

Θ^(k) = argmax_Θ Q(Θ, Θ^(k−1))    (3.12)

The EM algorithm is an iterative algorithm, and these steps, namely the expectation and maximization steps, are repeated until the likelihood converges to a local maximum. This algorithm is guaranteed to converge to a local maximum, which may or may not coincide with the global maximum. If some of the variables that comprise the nodes of the Bayesian networks are not observable at all times (or if they are completely hidden), we need to compute the expected sufficient statistics of the unobserved variables. This is done in the expectation step of the EM algorithm. To compute the expected sufficient statistics we compute

E_{P(x|D,Θ,B)}(N_{x_i,Π_{x_i}}) = Σ_{k=1}^{N} P(x_i, Π_{x_i} | x^k, Θ, B)    (3.13)

where x^k is a data sample. To compute this we use the junction tree algorithm explained in the Appendix to make fast and efficient inferences. We then calculate the parameters using the expected sufficient statistics in place of the real sufficient statistics in the usual formula (3.6) to find the MLE of the parameters. This is the maximization step of the EM algorithm. The maximum likelihood estimate of the parameters can therefore be calculated via the following formula [11]:

θ_{x_i|Π_{x_i}} = E_{P(x|D,Θ,B)}(N_{x_i,Π_{x_i}}) / E_{P(x|D,Θ,B)}(N_{Π_{x_i}})    (3.14)

It can be shown that this sequence of expectation and maximization iterations is guaranteed to converge to a local maximum [9].

3.2.2 Reconfiguration of the Higher Level Bayesian Network

We now explain how we adapt the higher level Bayesian network to cope with the changes in the battlespace. As mentioned at the beginning of the section, the algorithm for re-estimation of the parameters of this network is the same as the one for the lower level. Therefore, we proceed directly to describing the structural reconfiguration algorithm.
To decide which structure of the higher level network (out of the three shown in Figure 3.4) best represents the battlespace, we evaluate the minimal description length (MDL) score of all three networks and pick the one that results in the highest score. The MDL score is calculated using the following formula [15]:

score(B|D) = L(B|D) − (log N / 2) #(B)    (3.15)

where #(B) is the number of parameters in the network and N is the size of the training set. It should be noted that the size of the training set N has to be large in order for the MDL scoring to produce a meaningful result; we chose N > 100 in our numerical studies. Overall, the following algorithm can be used to reconfigure the higher level Bayesian network, where N is the size of the training sets (or batches) on which we perform parametrical and structural reconfiguration. We call this Algorithm 2.

Algorithm 2 Structural Re-estimation Algorithm
  Set Θ and G to be the initial parameters and structure respectively.
  Read data case x^k.
  If k mod N = 0
    Run the EM algorithm over the last N data cases to get new Θ
    Evaluate the MDL scores of all possible structures and set G to be the one with maximum score
    Output Θ and G

3.3 Dynamic Situation Assessment with Smart Targets

In this section we model the battlespace as a stochastic feedback system in which decisions are made based on the situation assessment reports produced by the Bayesian networks.

[Figure 3.5: Structure of the adaptive situation assessment system together with the stochastic feedback and decision making systems. Hypotheses flow from the higher-level Bayesian networks to a higher-level decision maker, whose action feeds a lower-level decision maker; the lower-level action acts on the targets and battlespace environment, closing the loop through the sensor system and integrated tracking system.]
These decisions are therefore constrained by the inferences that the Bayesian networks in the situation assessment system generate. Our main goal is to analyze the dynamics of a simulated battlespace that uses hypotheses generated by the situation assessment system to choose the next action to take. We simulate the actions taken by enemy forces using the concept of smart targets: targets whose behavior can change based on the actions we take. We examine the average duration of the battle and also the probability of winning for each side. This modeling can help in training before battle and in determining the best strategy under different circumstances, by building a library of all possible courses the engagement can take and finding the optimal courses of action that would help the friendly forces win the engagement in the shortest amount of time. We first describe how hierarchical decisions are made, then proceed to discuss the effect of these actions and also the effect of the behavior of smart targets on the battlespace environment. We also present an algorithm for modeling the dynamics of the battlespace, and finally we examine the expected duration of the engagement and the probability of winning for each side.

3.3.1 Situation Assessment Based Decision-Making in Battlespace

In this section we describe the process of decision-making in a simulated battlespace. These decisions are based on the situation assessment reports produced by the Bayesian networks. The hypotheses generated by the situation assessment system (see Figure 3.5) are fed to the higher level decision making system, and a high-level decision is made. This decision, together with the state of variables from the lower level Bayesian networks, is input to the lower level decision system, which makes a decision. The lower level decision, together with the actions taken by the smart targets, affects the battlespace.
The changes in the battlespace are detected by the sensors and input to the integrated tracker to be summarized and prepared for the next stage of situation assessment.

We first describe how hypotheses are constructed and how decisions are made using the hypotheses generated by the situation assessment system. Bayesian networks are responsible for making inferences on the data provided by the integrated tracking system. At each instant of time, they output a vector containing the states of all variables used as their nodes. To generate hypotheses, some (or all) of the variables that best describe the battlespace situation are picked. Each combination of their states is then formed and called a hypothesis. Alternatively, different combinations of states could be merged into a single hypothesis, but the approach that assigns one hypothesis to each combination of states is the most general case.

We define a hypothesis h_k (where k is the time index) to be a vector consisting of these variables, i.e.

h_k = (situation, potential damage, threat, strength, information)   (3.16)

The total number of different hypotheses is therefore equal to the product of the numbers of states these variables can take. If situation takes m_1 states, potential damage takes m_2 states, threat takes m_3 states, strength takes m_4 states and information takes m_5 states, the total number of hypotheses m would be

m = ∏_{i=1}^{5} m_i   (3.17)

When a hypothesis has been chosen, it is deterministically mapped to a set of actions. There are two sets of actions: one corresponds to the variables at the lower level decision system and the other to the higher level. The higher level decision is based on the states of the variables in the higher level Bayesian network. The decisions at the lower level are based on the states of the variables in the lower level Bayesian networks together with the decision made at the higher level.
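As a sanity check on (3.17), the size of the hypothesis space and the index of a particular state combination can be computed with a short sketch. The state counts m_1 = 4 and m_2 = … = m_5 = 3 are the ones used in the numerical examples of Chapter 4; the mixed-radix indexing convention is our own illustrative choice, not specified in the text:

```python
from math import prod

def num_hypotheses(state_counts):
    # Total number of hypotheses, m = m_1 * m_2 * ... * m_5  (eq. 3.17)
    return prod(state_counts)

def hypothesis_index(states, state_counts):
    # Map one combination of variable states to a unique hypothesis index
    # using mixed-radix encoding (an illustrative convention).
    idx = 0
    for s, base in zip(states, state_counts):
        idx = idx * base + s
    return idx

counts = [4, 3, 3, 3, 3]  # situation, potential damage, threat, strength, information
print(num_hypotheses(counts))                      # 324, as used in Section 4.2
print(hypothesis_index([3, 2, 2, 2, 2], counts))   # 323, the largest index (m - 1)
```

Any invertible encoding would do here; mixed-radix is used only because it makes the one-to-one correspondence between state combinations and hypothesis indices explicit.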
We map these m hypotheses deterministically to the set of higher level actions. By a deterministic map we mean there is no randomization in the mapping: a given hypothesis is always mapped to the same action. The higher level action set consists of four actions. The higher and lower level actions are summarized in Table 3.1.

The higher level action set is, in turn, mapped to the lower level action set. The lower level action set consists of actions that are less abstract and deal with more tangible, physical quantities; it consists of fourteen actions. Each action in the higher level action set can be translated to several actions at the lower level. For example, the higher level action Attack can be followed by any of six actions at the lower level (see Table 3.1 for more detail). Lower level actions are therefore chosen based on the higher level decision and the states of variables in the lower level Bayesian networks. These actions are less abstract than the actions at the higher level and directly affect the targets and the battlespace environment. Only one lower level action is taken at each decision epoch.

Lower level decisions affect the targets, their responses and other physical entities in the environment. They also trigger a response from the enemy forces, which in turn changes the physical attributes and entities in the battlespace environment. After the enemy forces respond, we again collect data samples through the integrated tracker and assess the situation again (Figure 3.5).
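A minimal sketch of this two-stage deterministic mapping, using the action names of Table 3.1. The particular hypothesis-to-action rule shown here is hypothetical — the thesis only requires that the map be deterministic:

```python
# Higher level action set A_H mapped to lower level action set A_L (Table 3.1).
LOWER_ACTIONS = {
    "Attack": ["front-door penetration", "back-door penetration",
               "anti-radiation missile", "particle beam",
               "high-energy laser", "laser zapper"],
    "Defend": ["launch decoys", "jamming", "hide", "diversionary tactics"],
    "Gather Information": ["reconnaissance", "surveillance", "damage assessment"],
    "Do Nothing": ["do nothing"],
}

def higher_level_action(hypothesis_idx, m=324):
    # Hypothetical deterministic rule: partition the m hypotheses
    # into four equal bands, one per higher level action.
    bands = ["Attack", "Defend", "Gather Information", "Do Nothing"]
    return bands[min(3, hypothesis_idx * 4 // m)]

def lower_level_action(high_action, lower_state_idx):
    # Deterministic choice among the candidates allowed by the higher
    # level action, driven by the lower level network state.
    candidates = LOWER_ACTIONS[high_action]
    return candidates[lower_state_idx % len(candidates)]

a_h = higher_level_action(10)
print(a_h, "->", lower_level_action(a_h, 2))
```

Because both maps are deterministic, the only randomness in the closed loop enters through the effect of the chosen action on the battlespace, as Figure 3.6 indicates.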
Table 3.1: Higher and Lower Level Action Sets

  Higher level action set A_H    Lower level action set A_L
  Attack                         front-door penetration
                                 back-door penetration
                                 anti-radiation missile
                                 particle beam
                                 high-energy laser
                                 laser zapper
  Defend                         launch decoys
                                 jamming
                                 hide
                                 diversionary tactics
  Gather Information             reconnaissance
                                 surveillance
                                 damage assessment
  Do Nothing                     do nothing

[Figure 3.6: Interaction of an abstract situation assessment system and decision system.]

3.3.2 Modeling the Response of Smart Targets to Decisions

In this section we describe how the decisions we take and the actions of the smart targets affect the battlespace. We first explain how our decisions affect the battlespace and then how the smart targets' behavior changes it. Figure 3.6 shows an abstraction of the situation assessment system (leftmost box) and the decision system (rightmost box). This structure results in a stochastic feedback system whose randomness is caused by the effect of the actions taken by the friendly and enemy forces on the battlespace. As shown in the figure, the only probabilistic mapping is between actions and targets; the remaining mappings are deterministic.

A decision affects the battlespace attributes V_i with probabilities p_i, where the p_i depend on various quantities such as environmental conditions, the effectiveness of the response, and so on. V_ij denotes variable V_i in Bayesian network j, where 1 ≤ j ≤ r and r is the number of lower level Bayesian networks (in our case r = 3), and 1 ≤ i ≤ n_j, where n_j is the number of nodes in Bayesian network j. p_ijk denotes the probability of V_ij going to state s_ijk (that is, state k of variable V_ij). We now give an example of a part of the lookup table (see Table 3.2). We assume we take lower level decision a_l, which affects variables V_23 and V_51 with 3 and 2 states, respectively.
The table shows that the action a_l causes V_23 to go to state 1 with probability 0.3, to state 2 with probability 0.2 and to state 3 with probability 0.5. It should be noted that V_23 is already in one of these states, since it can only take one of these three states. The full table goes on to list all the actions, the variables affected by them and the probabilities of their new states; to save space, we only give an example of how the table is constructed instead of including the entire table. It should also be noted that the numerical values in Tables 3.2 and 3.3 are for illustration purposes only.

Table 3.2: An example of the lookup table used in determining the variables that change after an action taken by friendly forces

  Action   Variable affected   New state s_ijk   Probability p_ijk
  a_l      V_23                1                 0.3
                               2                 0.2
                               3                 0.5
           V_51                1                 0.2
                               2                 0.8

We now model the response of the enemy forces, which we assume are smart targets. We expand the definition of smart targets from [14] (which defines them as targets capable of hiding from sensors) to include targets whose behavior changes with the actions we take. The response of these smart targets is modeled as completely random, and the policy that governs their response is not modeled. The reason is that if there are policies governing the smart targets' behavior, they are not known to us, so we can view their decisions as if generated by a completely random source. A similar table (see Table 3.3) is constructed to reflect the effect of the enemy's actions on the battlespace environment. It is similar to Table 3.2; the only difference is that we are not aware of the enemy's decision and the course of reasoning behind it. The only thing we observe is the effect of the enemy forces' decision on the battlespace variables.
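Sampling the new states of the affected variables from such a table can be sketched as follows. This is a minimal sketch; the table contents are the illustrative values of Table 3.2, and the dictionary layout is an assumption of this sketch:

```python
import random

# Lookup table: action -> {variable: [(new_state, probability), ...]}
# Values are the illustrative entries of Table 3.2.
LOOKUP = {
    "a_l": {
        "V23": [(1, 0.3), (2, 0.2), (3, 0.5)],
        "V51": [(1, 0.2), (2, 0.8)],
    }
}

def apply_action(action, rng=random):
    """Sample the new state of every variable affected by `action`."""
    new_states = {}
    for var, outcomes in LOOKUP[action].items():
        states = [s for s, _ in outcomes]
        probs = [p for _, p in outcomes]
        new_states[var] = rng.choices(states, weights=probs, k=1)[0]
    return new_states

random.seed(0)
print(apply_action("a_l"))
```

The same routine, driven by a second table, serves for the enemy side: only the observed effect on the battlespace variables is drawn, never the enemy's internal decision.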
For the purpose of simulation, we generate these responses probabilistically, but we neither infer the enemy action nor simulate their decision-making system and their perception of the battlespace.

Table 3.3: An example of the lookup table used in determining the variables that change after an action taken by enemy forces

  Variable affected   New state s_ijk   Probability p_ijk
  V_23                1                 0.3
                      2                 0.2
                      3                 0.5
  V_51                1                 0.2
                      2                 0.8

This assumption is justified since in a real theatre we only see the outcome of the enemy forces' actions and are not aware of how they make their decisions or how they perceive the battlespace. These lower level decisions affect the environment probabilistically. The effect of these decisions on the battlespace environment is detected by the sensors and other information-gathering sources and processed by the systems summarized in the integrated tracking system.

We now model the effects of the actions that the enemy and friendly forces take on the battlespace situation. We introduce a simple algorithm for extracting information from the lookup table. This algorithm chooses the most likely hypothesis and then chooses the best higher and lower level actions based on it. It then uses the lookup table explained previously to choose the affected variables and their new states. It also extracts the probability of destroying an enemy aircraft from the table.

Without loss of generality, we assume that the responses from the friendly and enemy forces happen at even intervals. So if in total N data samples are needed from the integrated tracker, the first N/2 come from the decision the friendly forces have made and its effect on the battlespace environment, and the last N/2 are generated by the response that the enemy forces have given to the action taken by the friendly forces.
These N data samples are fed into the lower level Bayesian networks by the integrated tracker. We assume all N data samples are equally valuable: the ones generated at an earlier time are given the same weight as the ones generated at a later time.

Algorithm 3 Algorithm for Using the Lookup Tables
  Read the states of the hypothesis variables from the output of the higher level Bayesian network.
  Determine the most probable hypothesis using the states of the hypothesis variables.
  Choose the high level decision from the high level action set A_H (Table 3.1) based on the hypothesis.
  Choose the low level decision from the low level action set A_L (Table 3.1) based on the high level action and the states of variables in the lower level Bayesian networks.
  Find which battlespace variables are affected by the chosen action, together with their new states and probabilities, from lookup Table 3.2.
  Do the same for the effect generated by the enemy forces, from Table 3.3.

The friendly and enemy forces' actions also affect the numbers of friendly and enemy aircraft. We assume that α_k is the number of friendly aircraft and β_k is the number of enemy aircraft. These quantities are Markov chains that depend on the hypotheses and on each other, and are explained more thoroughly in the next section. The transition probability matrices of these Markov chains are stored and do not change. Every time a decision is made, these Markov chains are iterated once to give the updated values of the numbers of aircraft α_k and β_k.

3.3.3 A Stochastic Feedback Algorithm to Model the Battlespace Dynamics

We now present the algorithm we devised to model battlespace dynamics as a stochastic feedback system, which was described in the previous section. Using this feedback system we can model the responses of both parties in the battlespace and explore various courses of action.
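The feedback loop of this section can be sketched in skeleton form as follows. This is a minimal sketch with stub components: the Bayesian networks, lookup tables and aircraft-count Markov chains are abstracted into illustrative success probabilities, and all names and values are assumptions of the sketch rather than quantities from the thesis:

```python
import random

def simulate_battle(alpha0, beta0, max_epochs=200, rng=random):
    """Skeleton of the stochastic feedback loop (cf. Algorithm 4).

    alpha0/beta0 are the initial friendly/enemy aircraft counts; the loop
    runs until one side is wiped out, as in the Markov analysis below."""
    alpha, beta = alpha0, beta0
    duration = 0
    while alpha > 0 and beta > 0 and duration < max_epochs:
        duration += 1
        # In the full system, Algorithms 1-3 would run here: reconfigure
        # the networks, infer a hypothesis, and consult the lookup tables.
        if rng.random() < 0.6:                # illustrative: friendly action succeeds
            beta -= 1
        if beta > 0 and rng.random() < 0.4:   # illustrative enemy response
            alpha -= 1
    return duration, alpha, beta

random.seed(0)
print(simulate_battle(4, 4))
```

Repeating such runs is the basis for the library of engagement courses mentioned earlier, and for the empirical battle-duration variances reported in Chapter 4.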
From this simulation, we can also predict the outcome of a battle based on the actions taken by the friendly and enemy forces. This information can be stored in a library and used in battlespace simulations and in real-world engagements as a reference. It should be noted that all probability values and variables can be changed without affecting the structure and design of the algorithm itself: the probability values, and the variables used to form the hypotheses or affected by certain decisions, can all be changed according to expert input and tailored to the circumstances of different battlespace conditions. Here β_k and α_k are the numbers of enemy and friendly aircraft, respectively, N is the number of samples in each learning batch used by the lower level Bayesian networks, and M is the batch size used in learning the higher level Bayesian network.

Algorithm 4 enables us to model the battlespace dynamics and the changes that occur as time passes. It further enables us to model the actions taken by the friendly forces and the smart targets and to examine their effects on the battlespace. This will help us explore different courses of action in a battlespace situation and also plan the battle beforehand by exploring what the most probable outcomes of certain actions would be.

Algorithm 4 Algorithm for Modeling the Battlespace
  Get N data samples from the integrated tracker.
  While (β_k > 0 and α_k > 0)
    For i = 1 : M
      Execute Algorithm 1 to find the parameters of the lower level networks.
      Find the MLE for the missing nodes and take the average over the N samples.
    End
    Execute Algorithm 2 to find the parameters and structure of the higher level Bayesian network.
    Find the state of any missing variable and take the average over the M data points.
    Execute Algorithm 3 to get the new states of the variables that change.
    Generate N/2 data samples.
    Reduce the number of enemy aircraft according to the Markov chain.
    To model the enemy response, look up the affected variables and their new states from the lookup table.
    Generate another N/2 samples.
    Reduce the number of friendly aircraft according to the Markov chain.
    Feed the resulting N samples to the lower level Bayesian networks.
  End

3.3.4 Markovian Analysis of Battlespace Dynamics

In this section we present a mathematical model that helps us analyze the dynamics of the simulated battlespace. This model is based on the stochastic feedback system presented in the previous section. We first discuss the model and then proceed to examining battlespace quantities such as the expected duration of the battle and the probability of winning for each side.

We model the feedback system as the vector Markov chain given in (3.18). The reason we can model the behavior of this stochastic feedback system as a Markov chain is as follows. We make decisions based on the inferences made by the Bayesian networks. These Bayesian networks do not add any delay to the data samples provided by the integrated tracking system, and the integrated tracking system in turn does not add any delay to the data provided by the sensors. Furthermore, the actions chosen by the decision systems are on the same timescale as the hypotheses generated by the Bayesian networks. Consequently, the actions in one decision epoch are in phase with the sensor readings, because the integrated tracking system and the Bayesian networks only process the data without delaying it. The effect of the decisions on the battlespace environment, however, is only taken into account in the next decision epoch, so the state of these readings depends
We can say the same for the variables that form the nodes of the Bayesian networks, since they are a deterministic function of the sensor readings, they too are Markovian and their current states depend only on their previous states. In the rest of this thesis, we analyze the behavior of this vector Markov chain. This vector consists of all the variables that form the nodes of the lower and higher level Bayesian network, and therefore the analysis entails all possible combinations of states these variables can take. However, due to the fact that we do not possess computational resources to deal with a transition probability matrix of a very large dimension, we limit our analysis to some and not all variables present in the Bayesian networks. Based on our design we choose variables that have the highest impact on the battlespace environment. These variables can be regular variables that were used in the Bayesian networks in addition to the variables that indicate the number of enemy aircraft (Pk) and the number of friendly aircraft (a ), where k is the k  time index. The variables that comprise the states of the Markov chain are the same variables used in hypothesis generation vector h in (3.16) augmented k  with the two quantities a and Pk- We denote the resulting process Z . k  Z = (h ,a ,p ) fc  k  k  k  (3.18)  The augmented vector process, Z , is also a Markov chain since the states of 55  the two variables a  and B also only depend on their previous values and the  k  k  previous state of the battlespace, Since we include all the information about the battlespace at any time k in the hypothesis  we have the following  equalities:  P{h \h , k+1  a , p , hfc-i, afc-i,  k  k  ...) = P(h \b. , a , B )  k  k+1  k  k  k  P(p \h ,a ,Pk,h ^,a ^i,p _i,...) 
k+1  k  k  k  k  k  (3.19)  k  •••) = P(a +i\h ,&k,Pk)  P(ak+i\^k,Oi ,a ,h -i,ak-i,Ph-i, k  k  = P(p \h ,a ,  k  k+l  k  (3.20)  k  k  p)  (3.21)  k  The first of these equalities hold since at any given time k, all the current information about the battlespace is summarized in the hypothesis, h  k  and  this hypothesis depends on the action taken by the friendly and enemy forces in the previous epoch, which in turn depends on the hypothesis (i.e. the state of battlespace) at time k — 1 and the number of enemy and friendly aircraft. In other words, the past states of this process are summarized in the current state of the process, and therefore, it is a Markov chain. The same reasoning can be made for a  k  and B , and we can conclude that those are Markov chains as k  well. Based on this, the augmented process, Z is also a vector Markov chain. This results in a vector Markov chain with a transition probability matrix of size m x (cv + 1) x (/? + 1) since we need to take include states with 0  0  { 0 , o j } and { 0 , @ } . We next calculate two important statistics from the 0  0  resulting Markov chain, namely the expected duration of the engagement and the probability of winning for each side.  56  Expected Duration of the Engagement The problem now is to estimate the expected time till absorption of this Markov chain. The absorbing states are states in which at or  are zero.  An absorbing state is a state that if the Markov chain enters it, it would remain there forever. The absorbing states in our problem are the states at which either a or j3 are zero. The reason for this is that when a side comk  k  pletely dies out the battles stops. The expected time till absorption, therefore, would be the expected duration of the battle. 
The mean time till absorption of a Markov chain can be calculated as follows:

v_i = 1 + Σ_{j=0}^{s-1} P_{ij} v_j,   for i = 0, 1, ..., s − 1   (3.22)

where v_i is the expected duration of the battle if we start in state i and P is the Markov chain transition probability matrix. We now explain how this is derived [5]. Let {Z_n} be a finite-state Markov chain with states 0, 1, ..., N and transition probability matrix P. From these states, suppose 0, ..., s − 1 are transient (i.e., P^n_{ij} → 0 as n → ∞ for 0 ≤ i, j < s) and states s, ..., N are absorbing (i.e., P_{ii} = 1 for s ≤ i ≤ N). This Markov chain has a transition probability matrix of the canonical form

P = [ Q  S ]
    [ 0  I ]   (3.23)

where 0 is an (N − s + 1) × s matrix of zeros, I is the (N − s + 1) × (N − s + 1) identity matrix and Q_{ij} = P_{ij} for 0 ≤ i, j < s. We define T, the random absorption time, to be

T = min{n ≥ 0 : Z_n ≥ s}   (3.24)

We also define v_i to be the expected time till absorption starting at state i:

v_i = E[T | Z_0 = i]   (3.25)

The expected time till absorption starting at state i is then

v_i = 1 + Σ_{j=0}^{s-1} P_{ij} v_j,   for i = 0, 1, ..., s − 1   (3.26)

Probability of Winning

Another quantity of interest is the probability of winning the battle for each side. It is calculated using the invariant distribution of the Markov chain presented. The invariant distribution of a Markov chain with transition matrix P is calculated using the system of linear equations

π_i = Σ_{k=0}^{N} π_k P_{ki},   for i = 0, 1, ..., N   (3.27)

and

π_0 + π_1 + ... + π_N = 1   (3.28)

The probability of winning for the friendly forces, p_win, can now be calculated as

p_win = Σ_{i : β is zero in state i} π_i   (3.29)

The invariant distribution indicates the probability of finding the process in any of the states after the process had been in operation for a long time.
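Both quantities can be computed directly from the canonical form (3.23): solving (I − Q)v = 1 gives the mean absorption times of (3.26), and B = (I − Q)^{-1} S gives the probability of ending in each absorbing state, whose relevant column sums yield p_win as in (3.29). A minimal sketch on a toy 4-state chain (the matrix is illustrative, not the 8100-state battle chain):

```python
import numpy as np

# Toy chain: states 0, 1 transient; states 2, 3 absorbing (canonical form 3.23).
P = np.array([
    [0.5, 0.2, 0.2, 0.1],   # Q | S rows (transient states)
    [0.1, 0.6, 0.1, 0.2],
    [0.0, 0.0, 1.0, 0.0],   # 0 | I rows (absorbing states)
    [0.0, 0.0, 0.0, 1.0],
])
s = 2                       # number of transient states
Q, S = P[:s, :s], P[:s, s:]

# Mean time till absorption: v = 1 + Q v  =>  (I - Q) v = 1   (eq. 3.26)
v = np.linalg.solve(np.eye(s) - Q, np.ones(s))

# Probability of absorption into each absorbing state: B = (I - Q)^{-1} S.
# Summing the columns of B for the states in which beta = 0 gives p_win.
B = np.linalg.solve(np.eye(s) - Q, S)

print(v)               # expected battle duration from each transient state
print(B.sum(axis=1))   # each row sums to 1: absorption is certain
```

For the battle chain itself the same two linear solves apply, with Q indexed by the states in which both α_k and β_k are positive.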
Since the Markov chain in our problem has both transient and absorbing states, the probability of the process being in a transient state after a long time is by definition zero, and only the probabilities of the absorbing states are nonzero. All the states in which α_k and β_k are both positive are transient, and the states in which α_k or β_k is zero are absorbing, so the invariant distribution is equivalent to the distribution of probabilities of the battlespace being in each of the absorbing states. Based on this, summing over the states in which α_k is positive gives the probability of winning for the friendly forces: we know that either α_k or β_k must be zero in an absorbing state, so if α_k is positive then β_k is zero, and the sum of these probabilities is the probability that α_k remains positive, that is, the probability of winning for the friendly forces.

Chapter 4

Numerical Examples

In this chapter we present numerical results that illustrate the effectiveness of our approach to adaptive situation assessment. We first discuss numerical results for the situation assessment part, and then present numerical studies for the decision-making and Markov analysis sections.

4.1 Numerical Examples for Reconfiguration of the Situation Assessment System

Our aim in this section is to explain the test bed that was developed to test the situation assessment system. To build the test bed we needed to generate the data samples at the integrated tracking system's output (see Figure 3.1). These data samples were generated as dependent Markov chains, and the dependence relationships among them resembled the structure of the lower level Bayesian networks.
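Generating such dependent Markov-chain test data can be sketched as follows. This is a minimal sketch: a single parent→child dependency stands in for a lower level network structure, and all transition values are illustrative assumptions:

```python
import random

def gen_samples(n, rng=random):
    """Generate n samples of (parent, child): the parent evolves as a
    Markov chain and the child depends on the current parent state."""
    # Illustrative parent transition matrix over states {0, 1}.
    P_parent = [[0.8, 0.2], [0.3, 0.7]]
    # Illustrative conditional distribution P(child | parent).
    P_child = [[0.9, 0.1], [0.2, 0.8]]
    parent, samples = 0, []
    for _ in range(n):
        parent = rng.choices([0, 1], weights=P_parent[parent], k=1)[0]
        child = rng.choices([0, 1], weights=P_child[parent], k=1)[0]
        samples.append((parent, child))
    return samples

random.seed(0)
batch = gen_samples(100)   # one batch of size N = 100
print(batch[:5])
```

Samples drawn this way are serially dependent (through the parent chain) while preserving the parent→child dependence that the lower level networks are meant to recover.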
This assumption is justified since we assume the dependence relationships among the Bayesian networks' nodes are fixed and reflect the true relationships among these quantities in the real world.

[Figure 4.1: Likelihood of a lower level Bayesian network in parametrical reconfiguration. (a) Likelihood of the parameters of the Bayesian network model to assess threat as each batch of size N = 100 is learned. (b) Likelihood of the parameters of the same model as each new data sample comes in from the integrated tracking system.]

[Figure 4.2: Likelihood (MDL score) of the three different structures (defensive, offensive, passive) of the higher level Bayesian network in structural reconfiguration.]

We also added noise to these data samples according to P_JCTN and P_JDN. We modeled P_JCTN and P_JDN, the quality of service of the Joint Composite Tracking Network and the quality of service of the Joint Data Network, respectively, as discrete random variables with states {1, 2, 3, 4, 5}. A higher value of P_JCTN means less noise, and a lower value means noisier data samples at the output of the integrated tracker. Similarly, P_JDN governs the quality of service of the output of the averaging device. In the first example, in which we test the reconfiguration algorithms for the lower level Bayesian networks, we assume both sample rates are approximately 100; the ratio between them is not of much importance, since we made an abstract model of the sensor and integrated tracking systems and do not model their interaction (see Figure 3.1).
We assume the nodes resources, threat and information are hidden in the lower level Bayesian networks and the node situation is hidden in the higher level Bayesian network (Figures 3.3 and 3.4, respectively). We chose N, the size of the batch of data samples used in reconfiguration, to be 100 for both the higher and lower level networks. We also assume P_JDN (page 22), the quality of service of the Joint Data Network, to be a discrete random variable with states {1, 2, 3, 4, 5}. We added a further amount of noise to each data batch (of size N) based on the value of P_JDN. This quality of service value is constant within each batch of N samples, but changes over batches.

We use Algorithm 1 to reconfigure the parameters of the lower level Bayesian networks. Figure 4.1(a) shows the likelihood of one of the lower level Bayesian networks used to assess threat (Figure 3.3) as each batch of size N = 100 arrives from the integrated tracker and the parameters are re-estimated. The likelihood of the model remains relatively constant throughout a given situation (although the model parameters Θ themselves might change). As the situation changes, however, the likelihood changes significantly as each batch is learned, until it again reaches a steady value. Figure 4.1(b) illustrates the likelihood of the same model between two learning periods as each new data sample arrives. We can see that the likelihood drops rapidly from its maximum and would continue to decrease if we did not re-learn the parameters. Similar curves result when this algorithm is executed on any of the higher or lower level Bayesian networks.

To test the structural reconfiguration algorithm for the higher level Bayesian network of the situation assessment system, we generated data for different battlespace scenarios.
These scenarios were defensive, offensive or passive, and our aim in this example was to choose the correct scenario for the battlespace. To generate each scenario, we took samples from the corresponding Bayesian network (from the three higher level network structures shown in Figure 3.4) and then executed the reconfiguration algorithm of Section 3.2.1. In this example, the batch size N is 100. We simulated data for a defensive scenario that evolved into a passive state, after which it became an offensive scenario and then returned to the defensive state. Figure 4.2 shows the MDL score of all three possible structures of the higher level Bayesian network. As shown in the figure, the algorithm always picks the correct state of the battlefield.

4.2 Numerical Studies for the Decision Making System and Battlespace Evolution

We now present numerical examples for the stochastic feedback system and Algorithm 4. We assume initially that α_0 and β_0 are both equal to 4, i.e., each side has four aircraft. We also assign m_1 = 4 and m_2 = m_3 = m_4 = m_5 = 3, which gives a total number of hypotheses of m = 324. The states are
The average duration of the battle together with its variance for different initial states is listed in the table. The variance is calculated over 50 runs. As shown in the table, the mean duration of the battle is maximum when the friendly forces start at a moderately "good" state and smallest when they start in a "bad" state. We also calculate the probability of winning for each side, which is the invariant distribution of the Markov chains. It turns out that the probability of winning (3.29) for friendly forces p i is 0.784 and the W n  probability of winning for enemy forces is 0.216 regardless of what state we start in.  65  Table 4.1: Mean Duration of the Battle and its simulated variance for various Starting States  Initial State Z (4,2,2,2,2,4,4) (0,0,0,0,0,4,4) (2,1,1,1,1,4,4)  Numerical Results Calculated Mean Duration 9.23 6.72 13.12  66  Simulated Variance 1.28 0.97 2.11  Chapter 5 Conclusion and  future work  In this section we present some important conclusions to this work and also give a possible roadmap for future research.  5.1  Conclusion  In this paper we have presented a situation assessment system that is capable of adapting itself to the changes that occur in the battlespace environment. We have used Bayesian Networks in this situation assessment system and developed novel algorithms for their learning, reconfiguration and adaptation to changes in battlespace environment as time passes. This situation assessment system uses the concept of Network Centric Warfare and is consistent with the Network Centric infostructure in [1]. We have also designed a stochastic feedback system which makes decision that are constrained by inferences made by the situation assessment  67  system. These decisions are made on two levels of abstraction using the hypotheses provided by the Bayesian networks. This feedback system provides a dynamic model of the battlespace that is capable of tuning and also predicting the outcomes of different actions. 
Furthermore, we have introduced a Markov chain analysis of the battle and provided numerical examples to show the effectiveness of our approach in battlespace modeling and decision making.

5.2 Future Work

Following the investigations described in this thesis, a number of projects could be taken up. These include:

1. Using game-theoretic concepts in designing the decision making systems: Game theory enables us to optimize a cost (reward) function by choosing the sequence of actions of both sides. To do this, we need to define a suitable cost function and use game theory to find the optimal course of action. It would also be interesting to see whether there would be absorbing states other than the states in which either the enemy or the friendly forces are completely wiped out.

2. Designing a more intelligent decision making system: At present, the hypotheses are mapped to actions deterministically. A way could be devised to do this probabilistically and dynamically, so that a given hypothesis is not necessarily followed by a fixed action.

3. Designing a control system that helps us minimize the mean duration of the battle while keeping the probability of winning for the friendly forces above 0.5. This, in conjunction with a probabilistic map of hypotheses to decisions, could give us a more intelligent decision making system that works towards the goal of winning the battle in the shortest amount of time.

Bibliography

[1] D. S. Alberts, J. J. Garstka, and F. P. Stein. Network Centric Warfare: Developing and Leveraging Information Superiority. CCRP, Washington, DC, 2000.

[2] Y. Bar-Shalom and X. Li. Estimation and Tracking: Principles, Techniques and Software. Artech House Publishers, Norwood, MA, 1993.

[3] S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems. Artech House Publishers, Norwood, MA, 1999.

[4] P. P. Bladon, R. J. Hall, and W. A. Wright. Situation assessment using graphical models. In Proc.
Fifth International Conference on Information Fusion, pages 886-893, Annapolis, MD, July 2002.

[5] P. Bremaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer, New York, NY, 1998.

[6] D. M. Chickering and D. Heckerman. Efficient approximations for the marginal likelihood of Bayesian networks with hidden variables. Machine Learning, 29(2-3):181-212, 1997.

[7] S. Das, R. Grey, and P. Gonsalves. Situation assessment via Bayesian belief networks. In Proc. Fifth International Conference on Information Fusion, pages 664-671, Annapolis, MD, July 2002.

[8] S. Das and D. Lawless. Trustworthy situation assessment via belief networks. In Proc. Fifth International Conference on Information Fusion, pages 543-549, Annapolis, MD, July 2002.

[9] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38, 1977.

[10] N. Friedman. Learning Bayesian networks in the presence of missing values and hidden variables. In Proc. Fourteenth International Conference on Machine Learning (ICML), pages 125-133, Nashville, TN, July 1997.

[11] D. Heckerman. A tutorial on learning with Bayesian networks. Technical Report MSR-TR-95-06, Microsoft Research, Redmond, WA, June 1996.

[12] C. Huang and A. Darwiche. Inference in belief networks: A procedural guide. International Journal of Approximate Reasoning, 15:225-263, 1996.

[13] A. Kott, M. Pollack, and B. Krogh. The situation awareness problem: Toward a research agenda. In Proc. DARPA-JFACC Symposium on Advances in Enterprise Control, San Diego, CA, November 1999.

[14] C. M. Kreucher, D. Blatt, A. O. Hero III, and K. Kastella. Adaptive multi-modality sensor scheduling for detection and tracking of smart targets. In Proc. 2004 Defense Applications of Signal Processing (DASP) Workshop, November 2004.

[15] W. Lam and F. Bacchus. Learning Bayesian belief networks: An approach based on the MDL principle.
Computational Intelligence, 10:269-293, 1994.

[16] G. Laskey and K. Laskey. Combat identification with Bayesian networks. In Proc. Command and Control Research and Technology Symposium, 2002.

[17] S. L. Lauritzen. The EM algorithm for graphical association models with missing data. Computational Statistics and Data Analysis, 1:191-201, 1995.

[18] A. Leon-Garcia. Probability and Random Processes for Electrical Engineering. Prentice Hall, 1993.

[19] D. McMichael. A statistical approach to situation assessment. In Proc. Second International Conference on Information Fusion, Sunnyvale, CA, July 1999.

[20] X. T. Nguyen. Threat assessment in tactical airborne environment. In Proc. Fifth International Conference on Information Fusion, pages 1300-1307, Annapolis, MD, July 2002.

[21] N. Okello and G. Thoms. Threat assessment using Bayesian networks. In Proc. Sixth International Conference on Information Fusion, pages 1102-1109, Cairns, Australia, July 2003.

[22] R. Suzic. Representation and recognition of uncertain enemy policies using statistical models. In Proc. of the NATO RTO Symposium on Military Data and Information Fusion, Prague, Czech Republic, October 2003.

[23] H. M. Taylor and S. Karlin. An Introduction to Stochastic Modeling. Academic Press, London, UK, 1998.

[24] E. Waltz and J. Llinas. Multisensor Data Fusion. Artech House Publishers, Norwood, MA, 1990.

Appendix A

The Junction Tree Algorithm

The junction tree algorithm is a highly efficient algorithm for making inferences in Bayesian networks. We use it in our simulations to make inferences in the Bayesian networks, and also in the expectation step of the EM algorithm (13) in Algorithm 1. The problem with variable elimination algorithms is that every time an inference needs to be made, the sums and products must be evaluated again, which is inefficient.
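To make this inefficiency concrete, consider a brute-force marginal query on a toy three-node chain; the network, variable names and CPT numbers below are made up for illustration and are not from the thesis. Every call re-evaluates the full sum of products, whereas a junction tree computes all marginals with a single round of propagation.

```python
from itertools import product

# Toy chain A -> B -> C with binary variables; the CPT numbers are made up.
p_A = {0: 0.5, 1: 0.5}
p_B_given_A = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.8}  # (a, b) -> P(B=b | A=a)
p_C_given_B = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.9}  # (b, c) -> P(C=c | B=b)

def query(var, value):
    """Brute-force marginal P(var = value): redoes the whole sum of products."""
    total = 0.0
    for a, b, c in product((0, 1), repeat=3):
        if {"A": a, "B": b, "C": c}[var] == value:
            total += p_A[a] * p_B_given_A[(a, b)] * p_C_given_B[(b, c)]
    return total

# Each call repeats all the arithmetic; a junction tree would instead
# answer every marginal query after one propagation pass.
```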
The junction tree algorithm allows us to make simultaneous inferences on a Bayesian network using a secondary structure. A junction tree T for a graph G has the following properties:

• Each node in T is a cluster (nonempty set) of variables.

• There is exactly one path between each pair of clusters.

• For any two clusters A and B in T, all clusters on the unique path between A and B also contain A ∩ B.

• Each clique C of G is included in at least one of the clusters.

Now that we have defined junction trees, we can proceed to the junction tree algorithm itself. The algorithm consists of three phases: construction, initialization and propagation.

The construction step of the algorithm builds the junction tree from G, the graphical structure of the Bayesian network. This is done as follows:

1. Construct the moral graph by adding undirected arcs between all pairs of co-parents, i.e. for nodes A, B and C where A ∈ π(C) and B ∈ π(C), add an edge between A and B (hence "moralizing" the graph), and then remove all directionality from the edges, so that the new graph, G_m, is an undirected graph.

2. Triangulate the graph G_m by adding edges so that every cycle of length four or larger contains an edge that connects two non-adjacent nodes in the cycle. Triangulation can be done in any arbitrary way; the goal is simply to obtain a triangulated graph. We call this triangulated graph G_T.

3. Identify the cliques in G_T. Cliques of a graph are defined as its maximal complete subgraphs. A complete graph is a graph in which an edge exists between every pair of its nodes.

4. Form the junction tree by taking the cliques obtained in the previous step as nodes and adding an edge between two nodes whenever the intersection of the corresponding cliques is nonempty. These edges are labeled by the members of the intersection. Therefore, all nodes in the junction tree are labeled by sets of nodes in the original Bayesian network.
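As an illustration of step 1, the moralization of a directed graph can be sketched in a few lines of Python. This is a toy sketch, not the thesis implementation; the dict-of-parent-sets graph representation is our own choice for the example.

```python
from itertools import combinations

def moralize(parents):
    """Moral graph of a DAG given as {node: set of parents}.

    Marries all co-parents of each node, then drops edge directions;
    returns an undirected graph as {node: set of neighbours}.
    """
    adj = {v: set() for v in parents}

    def connect(a, b):
        adj[a].add(b)
        adj[b].add(a)

    for child, pa in parents.items():
        for p in pa:
            connect(child, p)              # keep each original arc, undirected
        for a, b in combinations(sorted(pa), 2):
            connect(a, b)                  # "marry" the co-parents

    return adj

# Classic v-structure A -> C <- B: moralization adds the edge A - B.
g = moralize({"A": set(), "B": set(), "C": {"A", "B"}})
```

After moralization, a triangulation pass and clique identification would follow as in steps 2-4.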
If the resulting graph contains cycles, delete enough edges to eliminate these cycles. The graph that remains is the junction tree. A number of papers deal with more systematic procedures for deleting edges and for triangulation, and with building optimal junction trees; for more information see [12].

After the junction tree is constructed, the next task is to introduce the numerical components of the junction tree. This step is called initialization. It is done using the following steps:

1. For each cluster X, construct a table φ_X(x) and initialize it to 1.

2. Assign each variable V to a cluster X that contains V and all of its parents (the triangulation procedure guarantees that there is such a clique). Let

φ_X = φ_X · P(V | Π_V)    (A.1)

where by Π_V we mean the parents of V in the original Bayesian network.

The third and final step of this procedure is message passing between cliques. After we initialize the junction tree, we perform global propagation to achieve global network consistency. We first describe the procedure for passing a single message. Consider two adjacent cliques C and D, joined by separator S, with associated potentials φ_C, φ_D and φ_S. Assign

φ_S^old = φ_S    (A.2)

and then update φ_S as

φ_S = Σ_{C \ S} φ_C    (A.3)

Now we assign a new table to D, as follows:

φ_D = φ_D · (φ_S / φ_S^old)    (A.4)

To perform global propagation, we start at an arbitrary clique C and execute two phases of passing n − 1 messages each in a junction tree with n cliques. The global propagation algorithm is as follows:

1. Choose a clique C.

2. Unmark all cliques. Call Collect-Evidence(C).

3. Unmark all cliques. Call Distribute-Evidence(C).

Collect-Evidence(C)

1. Mark C.

2. Call Collect-Evidence recursively on all of C's unmarked neighboring cliques.

3. Pass a message from C to the clique which invoked Collect-Evidence(C).

Distribute-Evidence(C)

1. Mark C.

2. For each unmarked clique D neighboring C, pass a message from C to D.

3.
For each unmarked clique D neighboring C, call Distribute-Evidence recursively.

The result of this algorithm is that each clique is able to pass information to the other cliques, maintaining a seamless flow of information. Once the consistent junction tree is available, we can marginalize for any desired variable; in other words, we can find P(V) for each desired node V in the original Bayesian network. This is done as follows:

1. Identify a clique C that contains V.

2. Calculate P(V) by marginalizing φ_C according to the following formula:

P(V) = Σ_{C \ {V}} φ_C    (A.5)
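The single-message update (A.2)-(A.4) and the final marginalization (A.5) can be sketched in Python as follows. This is an illustrative toy with binary variables and potentials stored as dictionaries, not the thesis implementation; the two-clique tree and its table values are hypothetical.

```python
from itertools import product

class Potential:
    """A table over a tuple of binary variables (values 0 or 1)."""

    def __init__(self, variables, table=None):
        self.variables = tuple(variables)
        if table is None:
            # Tables start at 1, as in the initialization step.
            table = {v: 1.0 for v in product((0, 1), repeat=len(self.variables))}
        self.table = table

    def marginalize(self, keep):
        """Sum the table down onto the variables in `keep` (used for A.3 and A.5)."""
        idx = [self.variables.index(v) for v in keep]
        out = {v: 0.0 for v in product((0, 1), repeat=len(idx))}
        for assignment, value in self.table.items():
            out[tuple(assignment[i] for i in idx)] += value
        return Potential(keep, out)

def pass_message(phi_C, phi_S, phi_D):
    """One message from clique C to clique D through separator S (A.2-A.4)."""
    old_S = dict(phi_S.table)                                # A.2: save old separator table
    phi_S.table = phi_C.marginalize(phi_S.variables).table   # A.3: project phi_C onto S
    idx = [phi_D.variables.index(v) for v in phi_S.variables]
    for assignment in phi_D.table:                           # A.4: rescale phi_D by new/old
        key = tuple(assignment[i] for i in idx)
        ratio = 0.0 if old_S[key] == 0 else phi_S.table[key] / old_S[key]
        phi_D.table[assignment] *= ratio

# Toy chain A -> B -> C: cliques {A,B} and {B,C}, separator {B}.
phi_AB = Potential(("A", "B"), {(0, 0): 0.4, (0, 1): 0.1,
                                (1, 0): 0.1, (1, 1): 0.4})   # holds P(A)P(B|A)
phi_B = Potential(("B",))                                    # separator, initialized to 1
phi_BC = Potential(("B", "C"), {(0, 0): 0.7, (0, 1): 0.3,
                                (1, 0): 0.1, (1, 1): 0.9})   # holds P(C|B)
pass_message(phi_AB, phi_B, phi_BC)                          # phi_BC now holds P(B, C)
p_C = phi_BC.marginalize(("C",))                             # equation A.5: P(C)
```

In a full implementation this single-message absorb would be invoked by the Collect-Evidence and Distribute-Evidence recursions above; here one message already makes the two-clique tree consistent.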
