International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP) (12th : 2015)

Efficient computational models for the optimal representation of correlated regional hazard
Christou, Vasileios; Bocchini, Paolo
Jul 31, 2015


12th International Conference on Applications of Statistics and Probability in Civil Engineering, ICASP12, Vancouver, Canada, July 12-15, 2015

Efficient Computational Models for the Optimal Representation of Correlated Regional Hazard

Vasileios Christou
Research Assistant, Department of Civil and Environmental Engineering, ATLSS Engineering Research Center, Lehigh University, Bethlehem, PA, USA

Paolo Bocchini
Assistant Professor, Department of Civil and Environmental Engineering, ATLSS Engineering Research Center, Lehigh University, Bethlehem, PA, USA

ABSTRACT: In this paper, a methodology is presented for the generation of an optimal set of maps representing the intensity of a natural disaster over a region. In regional hazard and loss analysis, maps like these are commonly used to compute the probability of exceeding certain levels of intensity at all sites, while also providing information on the correlation among the intensity at any pair of sites. The information on the spatial correlation between two locations is of utmost importance for the accurate disaster performance assessment of lifeline components and of distributed systems. However, traditional hazard maps (such as those provided by USGS) do not provide this essential information, but only the probability of exceedance of a specific intensity at the various sites, considered individually. Therefore, many researchers have attempted to address this problem and incorporate correlation in their models, mainly with two basic approaches. The first approach includes analytic or computational methodologies to assess directly the correlation; the second approach is adopted by techniques for the selection of a representative set of intensity maps, often referred to as "regional hazard-consistent maps". The methodology presented herein, which branches out from the previous two approaches, considers the intensity maps as random fields.
By adopting this abstract perspective, the new methodology is particularly appropriate for a multi-hazard approach, and it can take advantage of tools for the optimal sampling of multi-dimensional stochastic functions. These tools ensure that the weighted ensemble of generated samples (i.e., intensity maps) tends to match all the probabilistic properties of the field, including the correlation. In fact, the samples generated by the proposed methodology fully capture the marginal hazard at each location and the correlated regional hazard. After the technique is presented, an application is provided for the case of seismic ground motion intensity maps.

1. INTRODUCTION

During the last decades, the increased attention to the socio-economic impacts of extreme events has led structural engineers to slowly shift focus from individual structures to spatially distributed systems and entire communities. Problems involving network reliability, direct structural loss estimation, and lifeline resilience are characterized by large uncertainties and strong correlation among the various physical quantities and locations. Many hazard models for various types of disasters can successfully capture the uncertainty, but do not consider the spatial correlation. However, it has been shown that underestimating the importance of the regional correlation may introduce significant inaccuracy in the assessment of the socio-economic effects. For example, Lee and Kiremidjian (2007) argued that seismic risk models that do not consider ground motion and damage correlation underestimate system risk and, as a consequence, high-cost economic decisions may end up being nonconservative. Similarly, Bocchini and Frangopol (2011) showed that the assumption of totally uncorrelated bridge damage states in a network reliability analysis leads to large nonconservative errors on the network performance. Moreover, Crowley and Bommer (2006) demonstrated that assuming perfect spatial correlation leads to errors on the opposite side and overestimates the loss exceedance curves when a probabilistic seismic hazard analysis (PSHA) is applied.

So, it has been established that the studies on these problems should be performed at the regional scale, and that the models used should take into account accurate information on the spatial correlation. This is particularly true for the correlation of the intensity measure (IM) representing the severity of a natural disaster at various locations. However, this information is not included in the most popular hazard maps, such as those provided by the United States Geological Survey (USGS) for seismic hazard, and the National Oceanic and Atmospheric Administration (NOAA) for weather-related hazards. Traditional hazard maps provide information only on the marginal hazard at each site, treated individually.
The techniques used to generate these maps take into account all the possible events of different intensity, occurring at different sites and with different probability, and integrate them to determine the probability of exceeding any level of a representative intensity measure (e.g., peak ground acceleration for earthquakes; wind speed for hurricanes). The drawback, however, is that the information on the fact that each event affects concurrently multiple locations is not embedded in the maps.

Therefore, the scientific community has tried to address these issues with two different approaches. The first group of researchers developed techniques which tried to assess the regional correlation in a direct fashion. In particular, a subgroup of this family used analytic techniques (Moghtaderi-Zadeh and Kiureghian, 1983; Gardoni et al., 2003), whereas some others exploited computational approaches (Bocchini and Frangopol, 2011). Even though most of these techniques are elegant and easy to implement, providing closed-form solutions, they usually have to make substantial simplifications and assumptions, which may not be realistic. The second group of scientists utilizes simulation-based techniques to adequately correlate the IM at various sites. This line of research led to the popular "hazard-consistent" techniques for regional intensity maps, extensively reviewed by Han and Davidson (2012) and Vaziri et al. (2012). The basic idea is to select a reduced set from a large suite of historical and/or synthetic maps, generated by selected scenario events (e.g., earthquakes). These models select and weight the maps of the reduced set in a way to match the hazard at each individual site (provided by USGS, for example), as closely as possible (hence the name "hazard-consistent").

Figure 1: Hazard maps provide the marginal hazard at all points, P(IM_A ≥ x) and P(IM_B ≥ x), but not their correlation corr(IM_A, IM_B).

This paper proposes to approach the problem with a different perspective.
The first basic idea is to consider the IM of an extreme event over a region as a two-dimensional random field. Then, an effective technique called "Functional Quantization" (Luschgy and Pagès, 2002; Miranda and Bocchini, 2015) is used to generate a small set of maps that provide an optimal approximation (in the mean-square sense) of the desired random field. This new approach provides a very elegant formulation of the problem and enables a truly multi-hazard paradigm, because it treats in the same way the intensities of all possible disasters. In fact, the methodology requires only an appropriate subroutine for the simulation of IM maps for the considered hazard, such as earthquake, flood, or hurricane.

2. REPRESENTATIVE IM MAPS

For risk, loss, and resilience analyses at the community level, the problems are very complex and simulation-based approaches have arisen as the most popular option. Given the nested layers of uncertainties (e.g., hazard, structural response, recovery phase), the total probability theorem is usually applied to split the problem into simpler tasks. Therefore, to handle the uncertainty in the hazard, the generation of a set of representative IM maps is the first step.

The selection of an appropriate IM (or a vector of IM's) and the generation of maps of its values depend on the type of analysis that is performed. For instance, in seismology it is customary to sample a few random parameters that describe the characteristics of the earthquake event, propagate the seismic effects through the region by means of attenuation functions (which could also include random parameters), and sometimes superimpose random residuals to the resulting map, to account for all the other uncertainties that are not explicitly modeled (see, for instance, Jayaram and Baker, 2010).
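The seismological sampling scheme just described (sample event parameters, propagate with an attenuation function, superimpose spatially correlated residuals) can be sketched in a few lines. Everything below is a toy stand-in assumed for illustration: the parameter distributions, the attenuation law, and the exponential residual correlation are not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_im_map(grid_x, grid_y):
    """Draw one sample intensity map: random event + toy attenuation
    + spatially correlated residuals (all forms are illustrative)."""
    # (i) sample event parameters (hypothetical distributions)
    mag = rng.uniform(5.5, 6.5)
    epi = rng.uniform(0.0, 50.0, size=2)           # epicenter in km
    # (ii) propagate with a toy attenuation law (NOT a published GMPE)
    X, Y = np.meshgrid(grid_x, grid_y)
    dist = np.hypot(X - epi[0], Y - epi[1])
    ln_im = 0.8 * mag - 1.2 * np.log(dist + 10.0)  # median log-intensity
    # (iii) superimpose residuals with exponential spatial correlation
    pts = np.column_stack([X.ravel(), Y.ravel()])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    cov = 0.09 * np.exp(-d / 10.0)                 # sill 0.3^2, range 10 km
    resid = rng.multivariate_normal(np.zeros(len(pts)), cov).reshape(X.shape)
    return np.exp(ln_im + resid)                   # IM map on the grid

grid = np.linspace(0.0, 50.0, 6)                   # coarse 6x6 grid
im_map = sample_im_map(grid, grid)
print(im_map.shape)  # → (6, 6)
```

An ensemble of such maps, one per simulated event, is what the selection and quantization techniques discussed in this paper operate on.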
The most common IM's are the peak ground acceleration, values of the spectral acceleration for certain periods, or a combination of these. Similarly, for hurricanes the generation of IM maps starts with the identification of a genesis point, sampling from a kernel-smoothed probability distribution of the historical genesis points. Then, the hurricane track is propagated over the time steps, usually based on random parameters. Finally, the distribution of the winds is assessed at each time step, using the current location and trajectory of the hurricane, along with another set of deterministic or random parameters. Simulators that follow this scheme have been developed by Emanuel et al. (2006) and by Vickery et al. (2000), among others. In this case, a representative IM could be the maximum wind speed experienced by each site. Similar approaches can be followed for other types of disasters, such as tornadoes or floods. The IM maps generated in this way are then used as input for the subsequent analyses.

As for all probabilistic assessments, Monte Carlo simulation (MCS) can be considered as the benchmark approach. If a sufficiently large number of maps is used, MCS can accurately represent the marginal hazard at each location, as well as the spatial correlation among sites (embedded in the IM maps). However, for practical applications, the number of samples to obtain a good probabilistic representation of the marginal hazard is at least in the order of 10^4, and to capture the correlation this number may have to increase by at least an order of magnitude.
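A minimal sketch of the MCS benchmark just described, estimating both the marginal hazard and the site-to-site correlation from sampled IM values. The two-site lognormal simulator below is a hypothetical stand-in for a full hazard-specific map generator; its shared inter-event term is what induces the spatial correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a hazard-specific map simulator: one IM
# value at each of two sites A and B, with a shared inter-event term.
def simulate_two_sites():
    event = rng.normal(0.0, 0.5)            # shared (inter-event) term
    site = rng.normal(0.0, 0.3, size=2)     # site-specific residuals
    return np.exp(event + site)             # lognormal IM at A and B

n = 20_000
ims = np.array([simulate_two_sites() for _ in range(n)])

# Marginal hazard at each site: P(IM >= x), here for a threshold x = 1.5
p_exceed = (ims >= 1.5).mean(axis=0)

# Spatial correlation between the sites, embedded in the sampled maps
rho = np.corrcoef(ims[:, 0], ims[:, 1])[0, 1]
print(p_exceed, round(float(rho), 2))
```

With the sample counts quoted above (10^4 and more), each sample is a full regional map rather than two scalars, which is what makes plain MCS expensive in practice.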
Therefore, as for many applications, the drawback of MCS is its high computational cost, which makes it impractical in most cases.

Han and Davidson (2012) have reviewed a family of methodologies that address explicitly this issue. They aim at carefully selecting a set of historical or synthetic IM maps and then applying appropriate probabilistic weights, so that the regional hazard is correctly represented even using a small number of samples ("hazard-consistent" methods). The most prominent methodologies of this group are based on importance sampling, k-means clustering, and optimization.

Importance sampling is used to partition the large range of the hazard model parameters in a more controlled way. For example, the magnitude of the event is preferably sampled at higher values (Kiremidjian et al., 2007). This methodology improves the accuracy of a standard MCS, but in many cases it is applied only to some of the parameters, ignoring other relevant ones (e.g., the source of the event).

K-means clustering, as well as other clustering techniques, is used to group the intensity maps in clusters (Jayaram and Baker, 2010). In this case, each map is assigned to the cluster with the closest centroid map, according to the Euclidean norm. Within each iteration, the centroid of each cluster is recalculated as the mean of all the sample maps. Finally, the iterative scheme stops when no more reassignments take place, and a random map is selected from each cluster. This overall approach presents many similarities with the technique that is proposed herein and will be presented in Section 3. However, the proposed technique has been developed starting from a completely different perspective, and its outcome has been proved to be optimal in the mean-square sense (ideal for hazard analysis), which is not the case for previously developed techniques.

A third set of methodologies is based on probabilistic optimization.
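The k-means clustering step attributed above to Jayaram and Baker (2010) can be sketched as follows, on toy flattened vectors standing in for IM maps; the data and dimensions are illustrative, and a random map per cluster would then be selected as the text describes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for flattened IM maps: 150 vectors in R^4 drawn around
# three distinct "scenario" means (all numbers are illustrative).
maps = np.vstack([rng.normal(m, 0.2, size=(50, 4)) for m in (0.0, 1.0, 2.0)])

def kmeans_maps(maps, k, iters=100):
    """Plain k-means with the Euclidean norm, as in the clustering step
    described above."""
    centroids = maps[rng.choice(len(maps), k, replace=False)]
    for _ in range(iters):
        # assign each map to the cluster with the closest centroid
        d = np.linalg.norm(maps[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as the mean of its cluster's maps
        new = np.array([maps[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):   # no more reassignments: stop
            break
        centroids = new
    return centroids, labels

centroids, labels = kmeans_maps(maps, k=3)
print(centroids.shape, len(labels))  # → (3, 4) 150
```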
These optimization-based techniques try to minimize the error between the marginal hazard yielded by a set of selected IM maps and the "exact" values (e.g., those provided by USGS) for a selected grid of points in a region. These optimization models have been applied to different hazards, such as earthquakes (Vaziri et al., 2012) and hurricanes (Apivatanagul et al., 2011).

Overall, these methods yield good results in terms of capturing the marginal hazard with a lower computational cost than MCS. However, they do not address explicitly the way in which the spatial correlation is modeled, which should be the main motivation for the development of most of these methodologies. The implicit assumption is that the use of real or realistic realizations of the IM maps, which individually carry a spatial correlation, will automatically transfer a good representation of the correlation to the weighted ensemble. Instead, the technique proposed in the next section addresses this point explicitly, and optimizes the representation not only of the marginal hazard, but also of the spatial correlation.

3. HAZARD QUANTIZATION METHOD

The Hazard Quantization (HQ) method diverges from the techniques previously described and approaches the regional hazard problem from another perspective. The key difference is that it embraces the nature of IM maps as random fields, which was only sporadically hinted at in the previous literature (see, for instance, Jayaram and Baker, 2009). This approach yields several benefits. First, it allows taking direct advantage of several methodologies that have already been developed for the enhanced representation of generic random fields, and which are backed up by proofs of optimality. Second, it allows a more elegant treatment of the various quantities involved in the problem.
An example of this is the fact that, for the case of earthquakes, there is no distinction between the random parameters that define the earthquake source (often called "scenario parameters") and those that model the inter- and intra-event variability (often referred to as "residuals"). HQ considers all parameters in the same way, without the need of a hierarchy and specialized simulation techniques for each group of them (which can be included, but do not need to be). This in turn yields the third advantage of HQ: its general perspective makes it a perfect paradigm for multi-hazard analysis. All potential causes of disasters can be addressed in the same way, with a consistent and uniform framework, where the only subroutine that is hazard-specific is the one for the generation of individual maps, as described in Section 2.

3.1. Theoretical foundation

Functional Quantization (FQ) is a technique for the optimal representation of random functions with a small number of samples (Luschgy and Pagès, 2002). FQ has strong similarities with other techniques that share the same goal, such as the Stochastic Reduced Order Models (Grigoriu, 2009). What characterizes FQ is its optimality criterion, which is the mean-square convergence of the approximate representation to the actual random function.
This makes it particularly appropriate for hazard analysis in general, and regional IM maps in particular, where convergence is sought on both marginal distribution and correlation.

To take advantage of FQ, the IM map is considered a stochastic function F, which is a bimeasurable random field on a given probability space (Ω, F, P) and is defined as:

F(x, ω) : Ξ × Ω → R    (1)

where Ξ is the (multi-dimensional) space domain and Ω is the sample space.

On the other hand, the random function F_N, which approximates F, is defined by the following equation:

F_N(x, ω) = ∑_{i=1}^{N} f_i(x) · 1_{Ω_i}(ω)    (2)

where the deterministic functions f_i(x) are called "quanta" and 1_{Ω_i} is the indicator function associated with event Ω_i ⊂ Ω:

1_{Ω_i}(ω) = 1 if ω ∈ Ω_i, 0 otherwise    (3)

Theoretically, almost all the probability space Ω is partitioned into a mutually exclusive and collectively exhaustive set {Ω_i}_{i=1}^{N}, and each event Ω_i has an associated probability P(Ω_i) and a quantum f_i, representative of all the sample functions associated with the ω's which belong to the event Ω_i.

The same concept can also be visualized from another perspective. This is the space of square-integrable functions L2(Ξ), where the realizations of F and F_N lie. From this perspective, the L2(Ξ) space is tessellated into {V_i}_{i=1}^{N}, where each tassel V_i collects all the realizations F(ω) with ω ∈ Ω_i. Each point f(x) in the Hilbert space L2(Ξ) can be associated with its pre-image in the probability space, hence by extension a tassel V_i matches an event Ω_i. Thus, when FQ is utilized, operating in both the Hilbert and probability spaces, a tessellation {V_i}_{i=1}^{N} and a corresponding partition {Ω_i}_{i=1}^{N} are induced. Next, the probability P(Ω_i) associated with each event must be computed.
However, thanks to the mentioned relationship between the two spaces, it is possible to compute instead the probability P_F(V_i) = p_i of the corresponding tassel V_i, which is equal to the associated P(Ω_i) (Miranda and Bocchini, 2015). The set of pairs including the quanta f_i(x) and the associated probabilities p_i is called a "quantizer", and it can be used as input for the weighted simulation of the uncertain problem.

From a practical point of view, there are several techniques to compute the quantizer of a certain random function. In particular, one technique is able to generate quantizers also for non-Gaussian, non-stationary, multi-dimensional random fields, which is what is needed for HQ. This technique is called "Functional Quantization by Infinite-Dimensional Centroidal Voronoi Tessellation" (FQ-IDCVT), and it extends the idea of Voronoi Tessellation (VT). VT is a technique for the partitioning of a finite-dimensional Euclidean space R^n into regions {T_i}_{i=1}^{N}, called "Voronoi tassels". Each tassel is an n-dimensional convex polyhedron with a generating point y̌_i ∈ R^n and is defined as:

T_i = { y ∈ R^n | ‖y − y̌_i‖ < ‖y − y̌_j‖ for j = 1, 2, ..., N; j ≠ i }    (4)

where ‖·‖ is the Euclidean norm. According to Equation (4), all the points y ∈ R^n that belong to tassel T_i are closer to the generating point y̌_i than to any other generating point y̌_j, j ≠ i. A special case of VT is the Centroidal Voronoi Tessellation (CVT), where each generating point y̌_i is also the centroid of tassel T_i. A CVT can be computed using Lloyd's Method (Ju et al., 2002).

FQ-IDCVT extends Lloyd's Method to the infinite-dimensional Hilbert space of square-integrable functions L2(Ξ) (for more details on the mathematical derivations, see Miranda and Bocchini, 2015). In the infinite-dimensional space, tassels are defined as follows:

T_i = { F(ω) ∈ L2(Ξ) | ‖F(ω) − f̌_i‖_{L2(Ξ)} < ‖F(ω) − f̌_j‖_{L2(Ξ)} for j = 1, 2, ..., N; j ≠ i }    (5)

where f̌_i is the generating point of tassel T_i and ‖·‖_{L2(Ξ)} is the L2(Ξ) norm.
Equation (5) denotes that all the realizations F(ω) closer to f̌_i than to any other f̌_j, j ≠ i, are clustered in T_i. Note that the tassels generated by the CVT in Equation (5) will be used as the V_i in the FQ sense; in other words, T_i ≡ V_i.

The f̌_i's are determined by the iterative algorithm in Figure 2 (Miranda and Bocchini, 2013) until convergence is met in terms of the following error metric, named "distortion":

Δ({V_i, f_i}_{i=1}^{N}) = ∑_{i=1}^{N} ∫_{V_i} ‖F(ω) − f_i‖²_{L2(Ξ)} dP_F    (6)

Miranda and Bocchini (2015) proved that the minimization of the distortion as defined in Equation (6) ensures that a CVT of L2(Ξ) is obtained, and that it is optimal according to the mean-square criterion. Note that the argument of the norm in Equation (6) imposes convergence of the approximate representation to the random field, not focusing only on the first moment or the marginal distribution.

FQ-IDCVT has been shown to work particularly well against the curse of dimensionality that arises in these problems, and has been demonstrated for Gaussian and non-Gaussian random fields (Christou et al., 2014; Bocchini et al., 2014). Therefore, it is used in the proposed methodology for the simulation of the strongly non-Gaussian, non-stationary, two-dimensional field representing the IM distribution over a region.

3.2. Computational algorithm

Figure 2 shows the flowchart of the FQ-IDCVT algorithm, consisting of four blocks, which are detailed in the following. The first block includes the required input data.
These are: (i) the probabilistic characteristics of the stochastic parameters required to generate an IM map; (ii) the parameter N, that is, the number of sample IM maps that will be used, called "quantizer size", which depends on the computational resources; (iii) the computational parameter N_sim, which is usually in the order of N_sim = 100 · N; (iv) N sample IM maps, which are used as the initial set of quanta.

The second module consists of the quanta identification, and it iterates the following tasks:

• generation of N_sim sample intensity maps;
• computation of the L2(Ξ) distance of IM map realization j from all the quanta {f_i}_{i=1}^{N} in the 2D space;
• clustering of each realization j to the tassel m, where f_m is the quantum with the smallest L2(Ξ) distance from j;
• averaging of the samples in each tassel V_i and updating of the respective generating point f_i.

The third block assesses the probabilistic weights associated with the quanta, and it performs the following steps:

• generation of Np_sim new sample IM maps, where usually Np_sim = 10 · N_sim;
• computation of the L2(Ξ) distances and clustering, as done in the previous block;
• assessment of the probability P(Ω_i) as

P(Ω_i) = p_i = N_i / Np_sim    (7)

where N_i is the number of maps in cluster i.

The last block represents the output quantizer, which is the representative small set of IM maps f_i (i.e., quanta) and the associated weights p_i. The partition of the sample space Ω provided by FQ-IDCVT has been proven to be optimal and practically unaffected by the initial selection of quanta (Miranda and Bocchini, 2015).
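Under strong simplifying assumptions (a toy 2D map simulator, a discrete L2 distance, and a fixed number of sweeps standing in for the convergence check on the distortion of Equation (6)), the four blocks above can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_map():
    """Toy stand-in for the hazard-specific simulator: a random 2D 'IM
    map' with a random amplitude and peak location on an 8x8 grid."""
    g = np.linspace(0.0, 1.0, 8)
    X, Y = np.meshgrid(g, g)
    a, cx, cy = rng.uniform(0.5, 1.5), rng.uniform(), rng.uniform()
    return a * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / 0.1)

N, n_sim, np_sim = 4, 400, 4000     # quantizer size, Nsim, Npsim

def dist2(real, quanta):
    """Squared (discrete) L2 distances of each realization from each quantum."""
    return ((real[:, None] - quanta[None]) ** 2).sum(axis=(2, 3))

# Block 1 (input): initial quanta = N sample IM maps
quanta = np.array([sample_map() for _ in range(N)])

# Block 2 (quanta identification), iterated a fixed number of times here
for _ in range(10):
    real = np.array([sample_map() for _ in range(n_sim)])
    owner = dist2(real, quanta).argmin(axis=1)   # cluster to nearest quantum
    quanta = np.array([real[owner == i].mean(axis=0)
                       if np.any(owner == i) else quanta[i]
                       for i in range(N)])       # update generating points

# Block 3 (probability identification): p_i = N_i / Npsim, Eq. (7)
real = np.array([sample_map() for _ in range(np_sim)])
d = dist2(real, quanta)
p = np.bincount(d.argmin(axis=1), minlength=N) / np_sim

# Block 4 (output quantizer), plus a distortion estimate in the spirit
# of Eq. (6): mean squared distance to the nearest quantum
distortion = d.min(axis=1).mean()
mean_map = np.tensordot(p, quanta, axes=1)       # weighted ensemble mean
print(p.shape, quanta.shape)
```

The pairs (f_i, p_i) form the output quantizer; any downstream statistic then reduces to a weighted sum over the N quanta, as in the mean_map line.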
The algorithm is easy to implement, and the following numerical example shows that the resulting ensemble of IM maps approximates very accurately the exact marginal hazard and the regional correlation.

Figure 2: Flowchart of the FQ-IDCVT algorithm. (1) Input: define the probabilistic characteristics of the intensity map; define N, N_sim, Np_sim; generate the initial quanta of F(x1, x2). (2) Quanta identification, looped until the convergence criterion is satisfied: generate N_sim realizations; compute the distances of each realization j from all quanta {f_i}_{i=1}^{N} in the 2D space; cluster realization j to the tassel m whose quantum f_m has the smallest distance from j; average the samples in V_i and update the generating point f_i. (3) Probability identification: generate Np_sim ≫ N_sim realizations; compute distances and cluster as before; compute the probability masses {P(Ω_i)}_{i=1}^{N}. (4) Output quantizer: quanta and probabilities.

4. APPLICATION

For the demonstration of the proposed methodology, a simplified representation of an earthquake ground motion is considered. Figure 3 shows the region of interest with two predefined faults. The magnitude of the earthquake and the hypocenter depth are assumed to have a triangular distribution with minimum, mode, and maximum equal to [5.5, 6, 6.5] and [2.0, 4.0, 6.0] km, respectively. The fault type is considered strike-slip, and the fault rupture length is determined according to the model adopted by HAZUS-MH (DHS, 2003).
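The event sampling just described can be reproduced directly. The HAZUS-MH rupture-length model itself is not reproduced here; as a stand-in, the sketch below uses the well-known Wells and Coppersmith (1994) surface-rupture-length regression for strike-slip faults, log10(SRL) = -3.55 + 0.74 M.

```python
import numpy as np

rng = np.random.default_rng(6)

# Event parameters as in the case study: triangular distributions
# (min, mode, max) for magnitude and hypocenter depth.
n = 10_000
mag = rng.triangular(5.5, 6.0, 6.5, size=n)
depth_km = rng.triangular(2.0, 4.0, 6.0, size=n)

# Rupture length vs. magnitude, strike-slip case (Wells & Coppersmith
# 1994 regression used as a stand-in for the HAZUS-MH model):
rup_len_km = 10.0 ** (-3.55 + 0.74 * mag)

print(round(float(mag.mean()), 1), round(float(depth_km.mean()), 1))  # → 6.0 4.0
```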
In terms of attenuation, the empirical regression model presented by Abrahamson and Silva (1997) is utilized for the generation of ground shaking maps, which are in terms of spectral acceleration at a period T.

The case study is analyzed using HQ, and the characteristics of the resulting quantizer are compared to the exact values of the autocorrelation and of the marginal probability of exceeding a certain value of Sa(T = 0.1 s), computed by extensive MCS. The FQ-IDCVT parameters considered herein are N = 500, N_sim = 100 · N and Np_sim = 1,000 · N.

Figure 3: Specified faults AB and CD in the region of interest. The blue crosses represent the sample epicenters.

Figure 4 shows the marginal hazard P[Sa(T = 0.1 s) > 0.4 g]. The probabilities obtained by HQ are in close agreement with the exact values. Comparably good results have been obtained also for the other values of spectral acceleration threshold and period T, even at the tails of the marginal distribution.

The autocorrelation of the quantizer has been determined for different 1D stripes of the random field, to be able to plot it (the complete autocorrelation is a 4D function). An example is shown in Figure 5a. Figure 5b illustrates the difference between the ensemble autocorrelation of the quantizer and the exact one. The error on this second-order statistic, which is notoriously difficult to capture, is considerably small, in the order of 0.1%.

5. CONCLUSIONS

A new methodology is presented for the generation of an optimal set of maps representing the intensity of a natural disaster over a region. The proposed approach is rooted in the idea of considering explicitly the IM maps of any hazard as a two-dimensional random field. Adopting this perspective, an advanced tool named FQ-IDCVT is used for the optimal sampling of these two-dimensional random functions.
For highly correlated random fields, such as IM maps for any type of hazard, FQ-IDCVT ensures that the weighted ensemble of the samples tends to match particularly well all the properties of the field, including the correlation.

Figure 4: Comparison of the probability of exceedance P[Sa(T = 0.1 s) > 0.4 g] between the exact values (thick colored lines) and the result obtained from HQ with N = 500 (thin black lines).

6. REFERENCES

Abrahamson, N. A. and Silva, W. J. (1997). "Empirical response spectral attenuation relations for shallow crustal earthquakes." Seismological Research Letters, 68(1), 94-127.

Apivatanagul, P., Davidson, R., Blanton, B., and Nozick, L. (2011). "Long-term regional hurricane hazard analysis for wind and storm surge." Coastal Engineering, 58(6), 499-509.

Bocchini, P. and Frangopol, D. M. (2011). "A stochastic computational framework for the joint transportation network fragility analysis and traffic flow distribution under extreme events." Probabilistic Engineering Mechanics, 26(2), 182-193.

Bocchini, P., Miranda, M. J., and Christou, V. (2014). "Functional quantization of probabilistic life-cycle performance models." Life-Cycle of Structural Systems: Design, Assessment, Maintenance and Management (H. Furuta, D. M. Frangopol, M. Akiyama eds.), Tokyo, Japan, Taylor and Francis, 816-823.

Christou, V., Bocchini, P., and Miranda, M. J. (2014). "Optimal representation of multi-dimensional random fields with a moderate number of samples." Proceedings of CSM7 (Deodatis and Spanos eds.), Santorini, Greece, 1-12.

Crowley, H. and Bommer, J. J. (2006).
"Modelling seismic hazard in earthquake loss models with spatially distributed exposure." Bulletin of Earthquake Engineering, 4, 249-273.

Figure 5: Approximation of the autocorrelation function of Sa(T = 0.1 s) with a quantizer of size N = 500 (a), and difference between the approximated autocorrelation and the exact one (b).

DHS (2003). HAZUS-MH MR4 Earthquake Model Technical Manual. Department of Homeland Security; Emergency Preparedness and Response Directorate; Federal Emergency Management Agency; Mitigation Division. Washington, D.C.

Emanuel, K., Ravela, S., Vivant, E., and Risi, C. (2006). "A statistical deterministic approach to hurricane risk assessment." Bulletin of the American Meteorological Society, 87(3), 299-314.

Gardoni, P., Mosalam, K., and Der Kiureghian, A. (2003). "Probabilistic seismic demand models and fragility estimates for RC bridges." Journal of Earthquake Engineering, 7, 79-106.

Grigoriu, M. (2009). "Reduced order models for random functions. Application to stochastic problems." Applied Mathematical Modelling, 33(1), 161-175.

Han, Y. and Davidson, R. A. (2012). "Probabilistic seismic hazard analysis for spatially distributed infrastructure." Earthquake Engng. Struct. Dyn., 41, 2141-2158.

Jayaram, N. and Baker, J. W. (2009). "Correlation model for spatially distributed ground-motion intensities." 1687-1708.

Jayaram, N. and Baker, J. W. (2010). "Efficient sampling and data reduction techniques for probabilistic seismic lifeline risk assessment." Earthquake Engng. Struct. Dyn., 39, 1109-1131.

Ju, L., Du, Q., and Gunzburger, M. (2002). "Probabilistic methods for centroidal Voronoi tessellations and their parallel implementations." Parallel Computing, 28(10), 1477-1500.

Kiremidjian, A. S., Stergiou, E., and Lee, R.
(2007). "Issues in seismic risk assessment of transportation networks." Earthquake Geotechnical Engineering (K. D. Pitilakis, ed.), Springer Netherlands, 461-480.

Lee, R. G. and Kiremidjian, A. S. (2007). "Uncertainty and correlation in seismic risk assessment of transportation systems." Report No. 2007/05, Pacific Earthquake Engineering Research Center (PEER).

Luschgy, H. and Pagès, G. (2002). "Functional quantization of Gaussian processes." Journal of Functional Analysis, 196(2), 486-531.

Miranda, M. J. and Bocchini, P. (2013). "Functional quantization of stationary Gaussian and non-Gaussian random processes." Safety, Reliability, Risk and Life-Cycle Performance of Structures and Infrastructures (G. Deodatis, B. R. Ellingwood, D. M. Frangopol eds.), Columbia University, New York, NY, CRC Press, Taylor and Francis Group, 2785-2792.

Miranda, M. J. and Bocchini, P. (2015). "A versatile technique for the optimal approximation of random processes by functional quantization." Under review.

Moghtaderi-Zadeh, M. and Kiureghian, A. D. (1983). "Reliability upgrading of lifeline networks for post-earthquake serviceability." Earthquake Engng. Struct. Dyn., 11, 557-566.

Vaziri, P., Davidson, R., Apivatanagul, P., and Nozick, L. (2012). "Identification of optimization-based probabilistic earthquake scenarios for regional loss estimation." J. of Earthquake Engineering, 16, 296-315.

Vickery, P. J., Skerlj, P. F., Steckley, A. C., and Twisdale, L. A. (2000). "Hurricane wind field model for use in hurricane simulations." Journal of Structural Engineering, 126(10), 1203-1221.

