{"@context":{"@language":"en","Affiliation":"http:\/\/vivoweb.org\/ontology\/core#departmentOrSchool","AggregatedSourceRepository":"http:\/\/www.europeana.eu\/schemas\/edm\/dataProvider","Citation":"https:\/\/open.library.ubc.ca\/terms#identifierCitation","Contributor":"http:\/\/purl.org\/dc\/terms\/contributor","Creator":"http:\/\/purl.org\/dc\/terms\/creator","DateAvailable":"http:\/\/purl.org\/dc\/terms\/issued","DateIssued":"http:\/\/purl.org\/dc\/terms\/issued","Description":"http:\/\/purl.org\/dc\/terms\/description","DigitalResourceOriginalRecord":"http:\/\/www.europeana.eu\/schemas\/edm\/aggregatedCHO","FullText":"http:\/\/www.w3.org\/2009\/08\/skos-reference\/skos.html#note","Genre":"http:\/\/www.europeana.eu\/schemas\/edm\/hasType","IsShownAt":"http:\/\/www.europeana.eu\/schemas\/edm\/isShownAt","Language":"http:\/\/purl.org\/dc\/terms\/language","Notes":"http:\/\/www.w3.org\/2009\/08\/skos-reference\/skos.html#note","PeerReviewStatus":"https:\/\/open.library.ubc.ca\/terms#peerReviewStatus","Provider":"http:\/\/www.europeana.eu\/schemas\/edm\/provider","Rights":"http:\/\/purl.org\/dc\/terms\/rights","RightsURI":"https:\/\/open.library.ubc.ca\/terms#rightsURI","ScholarlyLevel":"https:\/\/open.library.ubc.ca\/terms#scholarLevel","Title":"http:\/\/purl.org\/dc\/terms\/title","Type":"http:\/\/purl.org\/dc\/terms\/type","URI":"https:\/\/open.library.ubc.ca\/terms#identifierURI","SortDate":"http:\/\/purl.org\/dc\/terms\/date"},"Affiliation":[{"@value":"Non UBC","@language":"en"}],"AggregatedSourceRepository":[{"@value":"DSpace","@language":"en"}],"Citation":[{"@value":"Haukaas, T. (Ed.) (2015). 
Proceedings of the 12th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP12), Vancouver, Canada, July 12-15.","@language":"en"}],"Contributor":[{"@value":"International Conference on Applications of Statistics and Probability (12th : 2015 : Vancouver, B.C.)","@language":"en"}],"Creator":[{"@value":"Barker, Kash","@language":"en"},{"@value":"Nicholson, Charles D.","@language":"en"},{"@value":"Ramirez-Marquez, Jose E.","@language":"en"}],"DateAvailable":[{"@value":"2015-05-15T20:27:32Z","@language":"en"}],"DateIssued":[{"@value":"2015-07","@language":"en"}],"Description":[{"@value":"Network resilience to a disruption is generally considered to be a function of the initial impact of the disruption (the network\u2019s vulnerability) and the trajectory of recovery after the disruption (the network\u2019s recoverability). In the context of network resilience, this work develops and compares several flow-based importance measures to prioritize network edges for the implementation of preparedness options. For a particular preparedness option and particular geographically located disruption, we compare the different importance measures in their resulting network vulnerability, as well as network resilience for a general recovery strategy.
Results suggest that a weighted flow capacity rate, which accounts for both (i) the contribution of an edge to maximum network flow and (ii) the extent to which the edge is a bottleneck in the network, shows most promise across four instances of varying network sizes and densities.","@language":"en"}],"DigitalResourceOriginalRecord":[{"@value":"https:\/\/circle.library.ubc.ca\/rest\/handle\/2429\/53196?expand=metadata","@language":"en"}],"FullText":[{"@value":"12th International Conference on Applications of Statistics and Probability in Civil Engineering, ICASP12 Vancouver, Canada, July 12-15, 2015 Vulnerability Importance Measures Toward Resilience-Based Network Design Kash Barker Assistant Professor, Industrial and Systems Engineering, University of Oklahoma, Norman, OK, USA Charles D. Nicholson Assistant Professor, Industrial and Systems Engineering, University of Oklahoma, Norman, OK, USA Jose E. Ramirez-Marquez Associate Professor, Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, USA ABSTRACT: Network resilience to a disruption is generally considered to be a function of the initial impact of the disruption (the network\u2019s vulnerability) and the trajectory of recovery after the disruption (the network\u2019s recoverability). In the context of network resilience, this work develops and compares several flow-based importance measures to prioritize network edges for the implementation of preparedness options. For a particular preparedness option and particular geographically located disruption, we compare the different importance measures in their resulting network vulnerability, as well as network resilience for a general recovery strategy. Results suggest that a weighted flow capacity rate, which accounts for both (i) the contribution of an edge to maximum network flow and (ii) the extent to which the edge is a bottleneck in the network, shows most promise across four instances of varying network sizes and densities.
Resilience, broadly defined as the ability to stave off the effects of a disruption and subsequently return to a desired state, has been studied across a number of fields, including engineering (Hollnagel et al. 2006, Ouyang and Duenas-Osorio 2012) and risk contexts (Haimes 2009, Aven 2011), to name a few. Resilience has appeared increasingly in the literature (Park et al. 2013), reflecting the need to prepare for the inevitability of disruptions. Figure 1: Graphical depiction of network performance, φ(t), over time (adapted from Henry and Ramirez-Marquez (2012)). Figure 1 illustrates three dimensions of resilience: reliability, vulnerability, and recoverability. The network service function φ(t) describes the behavior or performance of the network at time t (e.g., φ(t) could describe traffic flow or delay for a highway network). Prior to disruption e_j, the ability of the network to meet performance expectations is described by its reliability, often considered to be the likelihood of connectivity of the network. Research in the area of recoverability is related to understanding the ability and speed of networks to recover after a disruptive event, similar in concept to rapidity in the “resilience triangle” literature in civil infrastructure (Bruneau et al. 2003). Emphasis in this paper is placed on the vulnerability dimension: the ability of e_j to adversely impact network performance is a function of the network’s vulnerability (Nagurney and Qiang 2008, Zio et al. 2008, Zhang et al. 2011), similar in concept to robustness in the “resilience triangle” literature.
Haimes (2006) broadly offers that the states of the system are described by a state vector, suggesting that vulnerability is multifaceted (i.e., certain aspects of a system may be adversely affected by certain events and not others). Our work adopts this qualitative perspective, though we assume that the vulnerabilities found in the different aspects of the network can still be measured by changes in a single network service function φ(t). As such, Jonsson et al. (2008) define vulnerability appropriately for our work as the magnitude of damage given the occurrence of a particular disruptive event. Networks have been characterized in two broad categories with respect to how their vulnerability is analyzed (Mishkovski et al. 2011): (i) those that involve “structural robustness,” or how networks behave after the removal of a set of nodes or links based only on topological features, and (ii) those that involve “dynamic robustness,” or how networks behave after the removal of a set of nodes or links given load redistribution leading to potential cascading failures. With respect to Figure 1, in networks that are primarily described by structural robustness (e.g., inland waterway, railway), t_e and t_d would coincide such that network performance drops immediately as disruption e_j occurs. The performance of networks exhibiting dynamic robustness would dissipate over time after a disruption due to cascading effects (e.g., electric power networks), such that t_d is subsequent to t_e. This paper focuses on networks described by structural robustness. Emphasis is placed on vulnerability in the larger context of network resilience. Resilience is defined here as the time-dependent ratio of recovery over loss; we adopt the notation Я for resilience (Whitson and Ramirez-Marquez 2009), as R is commonly reserved for reliability.
Similar in concept to the resilience triangle, we make use of the resilience paradigm provided in Figure 1, and we quantify resilience with Eq. (1) (Pant et al. 2014, Baroud et al. 2014, Barker et al. 2013, Henry and Ramirez-Marquez 2012). φ(t_0) is the “as-planned” performance level of the network, t_d is the point in time after the disruption where network performance is at its most disrupted level, and recovery of the network occurs between times t_s and t_f. Я_φ(t|e_j) = [φ(t|e_j) - φ(t_d|e_j)] \/ [φ(t_0) - φ(t_d|e_j)] (1) 1. QUANTIFYING NETWORK VULNERABILITY A common approach to quantifying network vulnerability is with graph invariants (e.g., connectivity, diameter, betweenness centrality) as deterministic measures (Boesch et al. 2009). We focus on tangible metrics of network behavior in the form of a flow-based service function, φ(t), rather than graph-theoretic measures of performance. For this work, we choose the all-node-pairs average maximum flow for φ, calculated by finding the maximum flow from a source node s to a sink node t, then exhausting all (s, t) pairs across the network and averaging the maximum flow over all pairs. This work considers geographically based physical networks with capacitated and symmetric arcs. Examples include transportation networks in which hourly traffic limits on a roadway or weight restrictions on bridges constrain traffic flow. We consider a class of disruptive events that impair the capacity of one or more edges in the network.
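The resilience ratio of Eq. (1) is simple arithmetic on three performance levels; a minimal sketch follows (the function and variable names are illustrative, not the paper's):

```python
def resilience(phi_t, phi_0, phi_d):
    """Time-dependent resilience ratio of Eq. (1): recovery over loss.
    phi_t: performance at time t during recovery,
    phi_0: as-planned performance phi(t_0),
    phi_d: performance at the most disrupted point phi(t_d)."""
    return (phi_t - phi_d) / (phi_0 - phi_d)

# Example: performance drops from 100 to 40; by time t it has recovered to 70,
# i.e., half of the lost performance has been restored.
print(resilience(70, 100, 40))  # 0.5
```

The ratio is 0 at the most disrupted point (no recovery yet) and 1 once the as-planned level is restored.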
To prioritize preemptive efforts to reduce network-wide vulnerability, we develop a variety of edge-specific, flow-based metrics to identify the most important edges. Edges deemed the most important can be reinforced or otherwise protected prior to any event to reduce network vulnerability, or can be candidates for expedited recovery (though we focus on the vulnerability, and not the recoverability, aspect of network resilience in this work). In this section we provide details concerning various candidate edge importance measures relating to network vulnerability. 1.1. Notation Let G = (V, E) denote a directed graph where V is a set of n vertices (also called nodes) and E ⊆ V × V is a set of m directed edges (also called arcs or links). For (i, j) ∈ E, the initial vertex i is called the tail and the terminal vertex j is called the head. Let c_ij and x_ij denote the capacity and flow on edge (i, j) ∈ E, respectively. A directed path P from a source node s to a target node t is a finite, alternating sequence of vertices and one or more edges starting at node s and ending at node t, P = {s, (s, v_1), v_1, (v_1, v_2), v_2, …, (v_k, t), t}, where all of the odd elements are distinct nodes in V and the even elements are directed edges in E. All nodes other than s and t are referred to as internal nodes. The length of path P is the number of edges it contains.
The maximum capacity of a path is equal to the minimum capacity of all edges in the path; that is, the max capacity of path P equals min_{(i,j)∈P} c_ij. The s-t max flow problem utilizes a subset of all possible paths between s and t to route a maximum amount of a commodity from s to t without exceeding the capacity of any edge. 1.2. Proposed Importance Measures Several importance measures for components of graphs have previously been offered. A frequent theme in these measures is the notion of centrality (Anthonisse 1971, Freeman 1977). Edge betweenness, for example, of (i, j) ∈ E is a function of the number of shortest paths between nodes s and t which include edge (i, j). The edge betweenness centrality of (i, j) is the sum of its edge betweenness over all s-t pairs. Newman (2004) introduced a modified edge centrality that does not restrict the metric to only shortest paths between s and t but stochastically includes other paths. In our work we introduce or otherwise consider several flow-based and topological measures relating to max flow paths within a graph. 1.2.1. All Pairs Max Flow Edge Count The first importance measure is inspired by the basic edge betweenness centrality concept. However, instead of shortest paths, we consider max flow paths. The all pairs max flow edge count is the total number of times a given edge is utilized in all s-t pairs max flow problems. The intuition is that if an edge is used more often than others in contributing to maximum flow, then a disruption that impacts its capacity is likely to have a significant impact on network performance φ.
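The paper does not specify which max flow algorithm is used; one standard choice is Edmonds-Karp, which repeatedly pushes flow along shortest augmenting paths in the residual graph. A self-contained sketch under that assumption (all names are illustrative):

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp s-t max flow. cap: dict-of-dicts of edge capacities."""
    # Residual capacities, with reverse edges initialized to zero.
    res = defaultdict(lambda: defaultdict(float))
    for u in cap:
        for v, c in cap[u].items():
            res[u][v] += c
            res[v][u] += 0.0
    flow = 0.0
    while True:
        # BFS for a shortest s-t path with positive residual capacity.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in res[u]:
                if v not in parent and res[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path remains
        # Push the bottleneck amount along the path and update residuals.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

# Toy example: two disjoint s-t paths of capacities 3 and 2.
caps = {'s': {'a': 3, 'b': 2}, 'a': {'t': 3}, 'b': {'t': 2}, 't': {}}
print(max_flow(caps, 's', 't'))  # 5.0
```

The all-pairs average service function φ described above would then average `max_flow` over every ordered (s, t) pair.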
Let μ_st(i, j) = 1 if edge (i, j) is used in a given s-t max flow problem and 0 otherwise. We define the first candidate for edge importance based on the raw max flow edge tally divided by the total number of s-t pairs, as shown in Eq. (2). If multiple paths share a minimally capacitated edge, there will be multiple paths that contribute the same value to a given s-t max flow problem; we arbitrarily choose among the shortest of these otherwise equally capacitated paths. I_(i,j)^{MF count} = (1\/(n(n-1))) Σ_{s,t∈V} μ_st(i, j) (2) 1.2.2. Min Cutset Count An s-t cut on a graph is a partitioning of nodes into two disjoint sets S and T such that s ∈ S and t ∈ T. The s-t cutset is the set of edges which have a tail in S but terminate in T. The capacity of an s-t cut is equal to the sum of the capacities of the s-t cutset. The min cut of a graph is the s-t cut with minimal capacity. According to the max-flow min-cut theorem, the s-t max flow is equal to its min cut. If an edge (i, j) is a member of the min cutset for an s-t pair, then it is a bottleneck for the corresponding max flow problem. Furthermore, if (i, j) is damaged and its capacity reduced, then the max flow value is also reduced. The edge importance measure I_(i,j)^{cutset} is the total number of times edge (i, j) is a member of the min cutset over all s-t pairs.
This is represented arithmetically in Eq. (3), where δ_st(i, j) = 1 if edge (i, j) is a member of the s-t min cutset and 0 otherwise. If multiple equivalent minimum cutsets exist, we choose one arbitrarily. I_(i,j)^{cutset} = (1\/(n(n-1))) Σ_{s,t∈V} δ_st(i, j) (3) 1.2.3. Edge Flow Centrality Another variation from the literature useful for this work is a node centrality measure based on max flow introduced by Freeman et al. (1991). Freeman’s measure is derived from the total flow passing through node i when max flow ω_st is routed from s to t for all s, t ∈ V. A simple revision of the metric provides an importance based on the total volume of flow on an edge. Specifically, the edge flow centrality of (i, j) ∈ E is defined as the sum of flow on (i, j) over all possible s-t pair max flow problems divided by the sum of all pairs max flows, shown in Eq. (4), where ω_st(i, j) is the flow on (i, j) when the max flow ω_st is routed from s to t. I_(i,j)^{flow} = Σ_{s,t∈V} ω_st(i, j) \/ Σ_{s,t∈V} ω_st (4) 1.2.4.
Flow Capacity Rate and Weighted Flow Capacity Rate The flow capacity rate (FCR) quantifies how close a given edge is to becoming a potential bottleneck based on flow amount and capacity. If an edge is significantly underutilized with respect to its capacity, then it is inherently robust to disruptions that reduce capacity. If, however, ω_st(i, j) ≈ c_ij, then damage to (i, j) is more likely to affect network performance. The edge flow capacity rate is the average ratio of edge flow to edge capacity over all s-t pair max flow problems, shown in Eq. (5). I_(i,j)^{FCR} = (1\/(n(n-1))) Σ_{s,t∈V} ω_st(i, j)\/c_ij (5) An edge with a high flow capacity rate is more likely to become a bottleneck than an edge with a lower value, but the expected impact to the overall network performance should also be a function of the expected contribution of the given edge. A weighted flow capacity rate (WFCR) can be computed by weighting each term in Eq. (5) by the edge flow volume. I_(i,j)^{WFCR} = (1\/(n(n-1))) Σ_{s,t∈V} I_(i,j)^{flow} · ω_st(i, j)\/c_ij (6) 1.2.5. One-at-a-Time Damage Impact We consider a baseline empirical measure based on a direct computation of the impact to network performance when a given edge is damaged. The one-at-a-time damage impact importance measure is the average percent change across all s-t max flow problems when (i, j) has its capacity reduced by 50%. This is shown in Eq.
(7), where ω′_st,ij is the max flow from s to t when the capacity of (i, j) is set equal to 0.5c_ij. I_(i,j)^{impact} = (1\/(n(n-1))) Σ_{s,t∈V} (ω_st - ω′_st,ij)\/ω_st (7) 2. ILLUSTRATIVE EXAMPLES The network instances in our empirical analysis are generated from a random geometric graph structure with bi-directional and capacitated edges. A bi-directional edge is composed of two symmetric directed arcs. The random geometric graph algorithm randomly positions nodes within a two-dimensional area, and edges are added between all nodes that are within a specified distance. To create a connected network, after the algorithm terminates, if two or more disconnected components exist, two nodes are selected at random, each from a different disconnected component, and an edge is added between them; this process continues until the network is connected. All edges are then randomly assigned capacities according to a continuous uniform distribution on [100, 1000]. We simulate two sizes of networks, small and large, and a low-density and high-density version of each. The four network instances are depicted in Figure 2. Figures 2a and 2b depict the lower and higher density instances of the smaller network, which both contain 20 nodes. The small graph with lower density (SGLD) instance has 20 bi-directional edges. The small graph with higher density (SGHD) contains 53 bi-directional edges. Figures 2c and 2d portray the larger network size, which consists of 70 nodes.
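The random geometric graph construction just described can be sketched as follows; the unit-square placement, the radius parameter, and the details of the component-joining step are assumptions where the paper is silent:

```python
import itertools
import math
import random

def random_geometric_graph(n, radius, cap_range=(100, 1000), seed=None):
    """Place n nodes uniformly in the unit square; connect pairs within
    `radius`; join disconnected components with random edges until the graph
    is connected; assign U(100, 1000) capacities to each bi-directional edge."""
    rng = random.Random(seed)
    pos = {v: (rng.random(), rng.random()) for v in range(n)}
    edges = {p for p in itertools.combinations(range(n), 2)
             if math.dist(pos[p[0]], pos[p[1]]) <= radius}

    def components():
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v); adj[v].add(u)
        seen, comps = set(), []
        for v in range(n):
            if v not in seen:
                stack, comp = [v], set()
                while stack:
                    u = stack.pop()
                    if u not in comp:
                        comp.add(u); stack.extend(adj[u])
                seen |= comp; comps.append(comp)
        return comps

    # Repeatedly add an edge between two different components until connected.
    comps = components()
    while len(comps) > 1:
        u = rng.choice(sorted(comps[0])); v = rng.choice(sorted(comps[1]))
        edges.add(tuple(sorted((u, v))))
        comps = components()

    caps = {e: rng.uniform(*cap_range) for e in edges}
    return pos, caps
```

Each undirected edge here stands for the two symmetric directed arcs of the paper's bi-directional edges.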
The low density (LGLD) instance has 128 bi-directional edges and the higher density (LGHD) instance contains 791. Lower density graphs have less inherent redundancy and may be more vulnerable to disruptions. (a) SGLD (b) SGHD (c) LGLD (d) LGHD Figure 2: Four network design instances for vulnerability analysis. The simulated disruptions impact capacities of edges depending on the distance from the epicenter of the disruptive event. Four concentric circles of discrete damage levels are centered about the epicenter. The four damage levels reduce edge capacities by 80%, 60%, 40%, and 20%, with the most damage located at the smallest circle at the epicenter. Any edge that intersects one of these disruptive event circles sustains damage, and the damage sustained is associated with the smallest of the concentric circles intersected (i.e., the largest related damage value). The circles, from smallest to largest, cover 10%, 20%, 30%, and 40% of the network region, such that each individual damage level corresponds to 10% of the total area. Figure 3 depicts an example of the four damage circles superimposed on one network instance. This approach to disrupting the network is similar to what could be expected with an explosion or possibly an earthquake (depending on the geographical scale). Figure 3: Concentric circles of damage from a simulated disruptive event. We perform 100 independent simulations on all four network instances. In each simulation the epicenter of the disruptive event is randomly located. Network performance φ is computed at the time of maximum disruption to evaluate the vulnerability of the network without any preventive measures. The approach to disrupting the network will affect vulnerability results.
However, as we are simulating several such disruptions, we expect the vulnerability results to be somewhat general. For each candidate importance measure, we employ an associated improvement policy to strengthen certain edges. The top 15% of edges, ranked according to importance, are selected for strengthening. To evaluate the effectiveness of the rankings, we compare the average disrupted network performance across all simulated disruptions for each of the improvement policies as well as a “random selection” improvement policy, where 15% of edges are randomly selected for hardening. An improved edge does not sustain as much damage as it would otherwise if affected in a disruption. Specifically, an edge that would suffer an 80% reduction in capacity incurs only a 40% reduction if hardened; an edge which would sustain 60% damage experiences only 20%. The lower damage impact zones do not affect strengthened edges. Table 1 summarizes the disruptive event simulation scenario. To recover the network we assume a straightforward process in which the capacities of all links are recovered in parallel at 8% of their original edge capacity per time unit. Since the most damaged edges must recover 80% of their capacity, there will be a total of 10 time steps per simulated disruption. Networks are recovered to their original performance level. Performance is computed at every time step during the recovery to compare how the strengthening measures impact the recovery curve. Note that the focus of this work is how different edge prioritization policies can impact vulnerability and subsequently resilience. To understand resilience, recovery must be accounted for, but it is done so in a general way here. Table 1: Disruptive event scenario damage details.
Circle 1: 5% of total area; edge damage 80% (no hardening) or 40% (hardening). Circle 2: 10%; 60% or 20%. Circle 3: 15%; 40% or 0%. Circle 4: 20%; 20% or 0%. 2.1. Initial Results Some measures displayed moderately positive linear correlations in all four instances (e.g., max flow count with edge flow centrality or flow capacity rate), suggesting that two measures drew similar conclusions as to edge vulnerability. For the purposes of this study, the priority rankings of the importance measures are more important than the values themselves. The Kendall-tau metric quantifies the strength of the monotonic relationship between two importance measures and is reported in Tables 2 through 5. For the large network instances and the higher density instances, the weighted flow capacity rate measure has the strongest association with the impact measure. Two importance measures without a strong correlation provide different dimensions of information about the network; it is possible such orthogonality could be exploited to better inform improvement decisions. Edge flow centrality and cutset count on the high density instances, for example, appear to provide independent information. Table 2: Kendall-tau measure of association for edge importance measures: SGLD instance. cutset vs. MF count: -0.10; flow vs. (MF count, cutset): 0.73, -0.15; FCR vs. (MF count, cutset, flow): 0.47, 0.40, 0.36; WFCR vs. (MF count, cutset, flow, FCR): 0.19, 0.71, 0.17, 0.58; impact vs. (MF count, cutset, flow, FCR, WFCR): 0.02, 0.56, 0.01, 0.46, 0.44. Table 3: Kendall-tau measure of association for edge importance measures: SGHD instance.
cutset vs. MF count: 0.03; flow vs. (MF count, cutset): 0.43, -0.01; FCR vs. (MF count, cutset, flow): 0.56, 0.17, 0.10; WFCR vs. (MF count, cutset, flow, FCR): 0.28, 0.62, 0.38, 0.21; impact vs. (MF count, cutset, flow, FCR, WFCR): -0.01, 0.50, 0.20, -0.13, 0.57. Table 4: Kendall-tau measure of association for edge importance measures: LGLD instance. cutset vs. MF count: 0.22; flow vs. (MF count, cutset): 0.78, 0.19; FCR vs. (MF count, cutset, flow): 0.78, 0.35, 0.64; WFCR vs. (MF count, cutset, flow, FCR): 0.56, 0.63, 0.57, 0.62; impact vs. (MF count, cutset, flow, FCR, WFCR): 0.38, 0.41, 0.41, 0.35, 0.51. Table 5: Kendall-tau measure of association for edge importance measures: LGHD instance. cutset vs. MF count: 0.01; flow vs. (MF count, cutset): 0.40, -0.03; FCR vs. (MF count, cutset, flow): 0.74, 0.05, 0.18; WFCR vs. (MF count, cutset, flow, FCR): 0.25, 0.54, 0.43, 0.13; impact vs. (MF count, cutset, flow, FCR, WFCR): 0.01, 0.60, 0.28, -0.09, 0.74. The six preparedness policies, as well as the “do nothing” and random policies, are tested for 100 simulated disruptive events.
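The Kendall tau values reported in Tables 2 through 5 measure rank agreement between two importance measures. A minimal O(n²) sketch of the tau-a variant follows; the paper does not state which tie-handling variant was used, and the tau-b correction found in most statistics packages would adjust the denominator for ties:

```python
def kendall_tau(a, b):
    """Naive Kendall tau-a: (concordant - discordant) pairs over all pairs.
    Tied pairs count as neither concordant nor discordant here."""
    n = len(a)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

# Identical rankings give +1; fully reversed rankings give -1.
print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0
```

Values near zero, as between edge flow centrality and cutset count on the high-density instances, indicate the two rankings carry largely independent information.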
The first policy, the “do nothing” policy, referred to as “none” in the subsequent figures, selects no edges for hardening. The second policy randomly selects 15% of edges to harden. The third through the eighth policies are based on the importance measures max flow count, cutset count, edge flow centrality, flow capacity rate, weighted flow capacity rate, and one-at-a-time impact, respectively. Mean network vulnerability percentages at time t_d are listed in Table 6. The maximum vulnerability occurring during the most disruptive of the 100 simulated events is also listed. The least vulnerable network performance means and worst-case disruptions are in bold. Table 6: Vulnerability by preparedness policy (mean, max). SGLD: None 34.6%, 61.2%; Random 32.4%, 55.9%; MF Count 29.6%, 54.3%; Cutset Count 28.9%, 52.7%; Flow Centrality 29.6%, 54.3%; FCR 31.3%, 60.2%; WFCR 28.9%, 52.7%; Impact 29.6%, 57.6%. SGHD: None 30.0%, 61.7%; Random 27.2%, 56.8%; MF Count 24.3%, 52.4%; Cutset Count 24.3%, 51.1%; Flow Centrality 24.8%, 48.7%; FCR 24.8%, 52.7%; WFCR 23.6%, 50.4%; Impact 24.2%, 51.4%. LGLD: None 27.6%, 49.5%; Random 26.0%, 47.6%; MF Count 17.7%, 33.2%; Cutset Count 18.1%, 38.1%; Flow Centrality 16.8%, 28.6%; FCR 15.9%, 28.8%; WFCR 15.6%, 28.9%; Impact 15.6%, 29.6%. LGHD: None 27.6%, 49.5%; Random 26.0%, 47.6%; MF Count 17.7%, 33.2%; Cutset Count 18.1%, 38.1%; Flow Centrality 16.8%, 28.6%; FCR 15.9%, 28.8%; WFCR 15.6%, 28.9%; Impact 15.6%, 29.6%. The network types prove to exhibit different inherent strengths with regard to vulnerability to disruptions. The two larger graphs have more edges and thus a higher likelihood of redundant paths. The worst-case vulnerability without edge hardening for the larger graphs is less than 50%, whereas the two smaller instances have maximum vulnerability observations exceeding 60% performance drops.
The percentage point range of maximum vulnerability varies considerably by instance type: 8.5%, 13.0%, 20.9%, and 5.5% for SGLD, SGHD, LGLD, and LGHD, respectively. The graph type which benefits most from an improvement policy is the LGLD instance, whereas the graph which benefits the least is the LGHD instance. Accordingly, the most effective preparedness policies on average (and for the worst-case disruptions) change based on network type. The cutset count ranking policy is a top performer in the SGLD instance but the weakest among the competing importance metrics in the LGLD instance. The WFCR importance ranking generates the most consistently effective improvement strategy, yielding the least vulnerable network on average in all four instances. 3. CONCLUSIONS Many previous network disruption studies focus on graph-theoretic, topological measures to identify network components that may have an adverse impact on network connectivity if they are disrupted. We focus directly on network performance and develop component importance measures that provide a more tangible representation of how network flows are disrupted, and we address disrupted flow in the broader context of network resilience, combining vulnerability and recoverability. Initial results suggest that adopting a preparedness policy based on the weighted flow capacity rate importance measure results in networks with the least vulnerability across network instances. This measure accounts for both (i) the amount of flow across an edge relative to the network max flow and (ii) the capacity of the arc, thereby capturing both criticality to max flow and capacity. Perhaps these two dimensions combine to identify edges that produce a robust network. Future work remains in drawing broader conclusions about relationships among the importance measures across a larger and more varied set of generated network instances.
Initial results suggest that the measures may complement each other in identifying edges for hardening. This is not surprising, as some of the measures focus more on network topology (e.g., cutset count) while others focus almost entirely on network performance (e.g., max flow edge count). Finally, the damage model used to generate the disruptive event can affect the results; future work will therefore explore how robust the importance measures are to the damage model and network configuration, particularly in the context of tactical (recovery) and strategic (design) decision making.