Open Collections

UBC Theses and Dissertations
Risk-based design of horizontal curves with restricted sight distance Ibrahim, Shewkar El-Bassiouni 2011-12-31

Full Text

  RISK-BASED DESIGN OF HORIZONTAL CURVES  WITH RESTRICTED SIGHT DISTANCE   by   Shewkar El-Bassiouni Ibrahim   B.Sc., United Arab Emirates University, 2008     A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF   MASTER OF APPLIED SCIENCE  in  The Faculty of Graduate Studies   (Civil Engineering)      THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)     March 2011    © Shewkar El-Bassiouni Ibrahim, 2011 ii  Abstract Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge on the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the degree of deviation from design standards. In reliability analysis, this risk is represented by the probability of non-compliance (Pnc) defined as the probability that the supply exceeds the demand. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this thesis attempts to incorporate a reliability-based quantitative risk measure in the development of Safety Performance Functions (SPFs).  The thesis considers the design of horizontal curves, where non-compliance occurs whenever the available sight distance (ASD; supply) falls short of the stopping sight distance (SSD; demand). The inputs of SSD are random variables and appropriate probability distributions were assumed for each. A comprehensive database for the Trans-Canada Highway was used to compute the probability of non-compliance (Pnc) for 100 segments of horizontal curves. Several Negative Binomial (NB) Safety Performance Functions (SPFs) were developed and the predicted collisions were found to increase with risk (Pnc) and that the rate of increase varies by severity level. The likelihood ratio test showed that the inclusion of a risk parameter (Pnc) has generated better predictive models that have significantly outperformed the traditional models. Further, a iii  spatial analysis was carried out which showed that the spatial models were successful in overcoming potential model misspecification resulting from incorporating only exposure and Pnc in the SPFs as relevant covariates might have been omitted. The optimization of cross-section design to minimize the risk associated with restricted sight distance was also considered using a multiple objective function that involves new Collision Modification Factors (CMFs) incorporating Pnc. The results indicated that accounting for the random variations due to drivers’ behavior proactively at the design stage would decrease collisions in addition to achieving an overall risk reduction.    iv  Table of Contents Abstract ........................................................................................................................................... ii Table of Contents ........................................................................................................................... iv List of Tables ................................................................................................................................. ix List of Figures ................................................................................................................................. 
x Acknowledgements ........................................................................................................................ xi Dedication ..................................................................................................................................... xii 1 Introduction ............................................................................................................................. 1 1.1 Engineering Approaches to the Road Safety Problem ..................................................... 2 1.2 Current Level of Safety in Design Guidelines ................................................................. 4 1.3 Using Reliability Analysis to Account for Design Uncertainty ....................................... 6 1.4 Problem Statement and Research Objectives ................................................................... 8 1.5 Thesis Structure ................................................................................................................ 9 2 Literature Review.................................................................................................................. 11 2.1 Reliability Analysis ........................................................................................................ 11 v  2.1.1 Development of Reliability Analysis ...................................................................... 13 2.1.2 Reliability Theory ................................................................................................... 14 2.1.3 Component Reliability Problem ............................................................................. 16 2.1.4 Reliability Methods ................................................................................................. 20 2.1.4.1 Synthetic Method (Monte-Carlo Simulation) .................................................. 21 2.1.4.2 Analytic Methods (Taylor-based) .................................................................... 22 2.1.5 Reliability Analysis in Road Design ....................................................................... 29 2.2 Safety Performance Functions (SPFs)............................................................................ 36 2.2.1 Regression Models .................................................................................................. 38 2.2.1.1 Poisson Regression Model ............................................................................... 38 2.2.1.2 Poisson-Gamma Model ................................................................................... 39 2.2.1.3 Poisson-Lognormal Model .............................................................................. 40 2.2.1.4 Enhanced Regression Models .......................................................................... 41 2.2.2 Development of SPFs ............................................................................................. 42 2.2.3 Parameter Estimation Methods ............................................................................... 43 vi  2.2.4 SPFs in Road Design .............................................................................................. 44 2.3 Summary ........................................................................................................................ 47 3 Developing SPFs Incorporating Risk Measures ................................................................... 
49 3.1 Background .................................................................................................................... 49 3.2 Limit-State Function ...................................................................................................... 49 3.2.1 Available Sight Distance......................................................................................... 50 3.2.2 Stopping Sight Distance .......................................................................................... 52 3.3 Data Description ............................................................................................................. 53 3.4 Data Distributions .......................................................................................................... 54 3.5 Quantifying the Risk of Design Non-Compliance ......................................................... 56 3.6 Safety Performance Functions Incorporating Pnc ........................................................... 59 3.7 Summary ........................................................................................................................ 65 4 Spatial Analysis and Pnc ........................................................................................................ 66 4.1 Background .................................................................................................................... 66 4.2 Spatial Poisson Models .................................................................................................. 69 vii  4.2.1 Conditional Auto-Regressive (CAR) Models ......................................................... 70 4.2.2 Multiple Membership (MM) Models ...................................................................... 71 4.2.3 Extended Multiple Membership (EMM) Models ................................................... 72 4.3 Data Description ............................................................................................................. 72 4.4 Full Bayes Methodology ................................................................................................ 73 4.4.1 Prior Distributions ................................................................................................... 73 4.4.2 Posterior Distributions ............................................................................................ 75 4.5 Results and Discussion ................................................................................................... 75 4.6 Summary ........................................................................................................................ 79 5 Cross-Section Risk Optimization .......................................................................................... 81 5.1 Background .................................................................................................................... 81 5.2 Cross-Section Risk Optimization using New CMFs for Restricted Sight Distance on Horizontal Curves ..................................................................................................................... 85 5.2.1 Objective Function .................................................................................................. 86 5.2.2 Optimization Algorithm .......................................................................................... 88 5.2.3 Cross-Section Optimization Results ....................................................................... 
89 viii  5.2.4 Discussion ............................................................................................................... 93 5.3 Summary ........................................................................................................................ 95 6 Summary, Conclusions, Contributions and Future Research ............................................... 96 6.1 Summary and Conclusions ............................................................................................. 96 6.2 Research Contributions .................................................................................................. 98 6.3 Future Research .............................................................................................................. 99 Bibliography ............................................................................................................................... 101 ix  List of Tables Table ‎3.1 Statistical summary of data set (100 horizontal curves) ............................................. 54 Table ‎3.2 The probability distributions for the random input parameters .................................. 54 Table ‎3.3 Prediction models for operating speed ....................................................................... 55 Table ‎3.4 Estimates, standard errors (SE) and t-ratios for NB models incorporating Pnc .......... 61 Table ‎3.5 Goodness-of-fit tests for NB models incorporating Pnc ............................................. 63 Table ‎3.6 Estimates, standard errors (SE) and t-ratios for NB models without Pnc ................... 64 Table ‎3.7 Testing NB models incorporating Pnc vs. models without Pnc ................................... 64 Table ‎4.1 Statistical summary of the entire dataset for spatial analysis of segments ................. 73 Table ‎4.2 Estimates and standard errors (SE) for PLN .............................................................. 77 Table ‎4.3 Estimates and standard errors (SE) for EMM ............................................................ 78    x  List of Figures Figure ‎2.1 Domain of definition and failure limit state of a reliability analysis model ............... 16 Figure ‎2.2 The reliability index .................................................................................................... 18 Figure ‎2.3 The probability of non-compliance ............................................................................ 19 Figure ‎2.4 The relationship between Pnc and   ........................................................................... 19 Figure ‎2.5 Methods to solve limit state function ......................................................................... 20 Figure ‎2.6 Design point found using FORM ............................................................................... 26 Figure ‎2.7 Difference between FORM and SORM ..................................................................... 29 Figure ‎3.1 Lateral clearance for a cross-section .......................................................................... 52 Figure ‎3.2 The cumulative distribution of horizontal curves by Pnc ............................................ 56 Figure ‎3.3 A scatter plot of Pnc vs. radius .................................................................................... 57 Figure ‎3.4 A scatter plot of Pnc vs. operating Speed .................................................................... 
58 Figure ‎3.5 Monte-carlo simulation results ................................................................................... 59 Figure ‎3.6 The relationship between predicted collisions and Pnc ............................................... 62 Figure 5.1 Typical cross-section in a RHS horizontal curve ....................................................... 82 xi  Acknowledgements I would like to extend my gratitude to my thesis supervisor, Prof. Tarek Sayed at the University of British Columbia. His continuous support, encouragement, patience and financial support have made this an enjoyable academic experience. His feedback from our discussions has inspired me to continue giving my best effort and have helped me recognize my true potential. I owe sincere thanks to my professors at the undergraduate level (at the United Arab Emirates University) and at the graduate level (University of British Columbia) for all of their help, advice and support throughout my years of studying. They have enriched my knowledgebase and their love of their topic is what motivated me to start my career in academia so that one day I would be able to inspire new generations.  Special thanks are in order for Dr. Mohamed Elesawey for reading a first draft of this thesis and for his continuous encouragement. I would like to thank my fellow Transportation Engineering colleagues at BITSAFS who have been readily available to assist me in my moments of need.  I am forever grateful to my parents and brother whose support has helped me both morally and financially throughout my years of education. Their prayers, help, love and presence in my life have always helped me aspire to become a better person and achieve greater things in life. I would also like to thank my lovely friend and sister-in-law, Nahla Sherif, for her continuous encouragement and her uplifting attitude throughout our years together.  xii  Dedication                       To my loving parents and brother This thesis is dedicated to my father, Prof. Mohamed Yahia El-Bassiouni, who taught me that the best kind of knowledge is that which is learned for its own sake. It is dedicated to my mother, Esmat Fawzi, whose incredible support taught me that even the largest of tasks can be completed if tackled one step at a time. It is also dedicated to my brother, Dr. Karim El-Basyouny, who has opened up my eyes to other more detail-oriented skills without which I would not have been able to succeed.    1  1 Introduction There has been a considerable increase in the number of vehicles worldwide as a result of the increase in world population and economic activity. Consequently, an increase in the frequency and severity of collisions became an epidemic in the developing and developed countries. Each year, road collisions result in 1.3 million fatalities and 50 million non-fatal collisions worldwide (WHO, 2004). The Canadian Transportation Safety Board released a report detailing the consequence of road collisions in Canada. Each year witnesses a total of 160,000 road collisions with a total of 2,900 fatalities.  Between 1991 and 2000, road collisions were ranked as the third leading cause of death in the United States, trailing closely after cancer and heart disease. In Canada, collisions pose a serious health and safety issue. 
Collisions are the leading cause of death in Canadian Children (American Academy of Pediatrics, 2002, Howard, 2002) and on a list of leading causes of Potential Years of Life Lost, collisions rank seventh (National Cancer Institute of Canada, 2001). Currently, road-related collisions worldwide rank as the tenth leading cause of death accounting for more than 2.1% of all deaths. Unless drastic measures are undertaken, road collisions are predicted to climb to the eighth most common cause of death by 2030 (Mathers and Loncar, 2005). The frequency and severity of collisions is undoubtedly a substantial issue and consume massive financial resources. Collisions pose an economic burden not only on the victims and their families but also on societies and governments as well. Each year, road collisions cost countries 2  up to 4% of their Gross National Product (WHO, 2004). In the US, collision costs are estimated at a staggering US$164 billion (Cambridge Systematics Inc., 2008). In Canada, vehicle collisions cost CAD$62.7 billion on an annual basis incurring costs related to property damage, hospital care, traffic delay, and emergency response (Vodden et al., 2007).  The previous statistics further solidify reasons to mitigate this “rising outbreak” of roadway collisions. These mitigation measures could include developing and implementing innovative techniques to find plausible and feasible solutions to lower these values. Therefore it is not surprising that road safety is currently one of the main areas of focus for governments/states and researchers.  1.1 Engineering Approaches to the Road Safety Problem The road system is represented by three different components: the driver, the vehicle and the road (Sayed et al. 1995). Therefore in order to determine the safety of a road segment, the first step usually includes relating collisions to failure in one or more combinations of each of the three components. Studies have shown that driver error contributes approximately 90-95% of all collisions on the road. The first logical instinct would be to direct the safety initiatives at the driver which should lead to significant reduction in the frequency and/or the severity of collisions. Moreover, while driver behavior does indeed directly influence the occurrence of a collision, there are other underlying factors which indirectly contribute to collision occurrence. Such factors include, but not limited to, the design and layout of the road, vehicle characteristics, and the surrounding driving environment, among other causes. 3  In light of these conclusions, road authorities have focused on improving the safety of the roads. This was achieved by implementing road-based safety initiatives that are broadly categorized into two approaches: the reactive approach and the proactive approach (de Leur and Sayed, 2003). The reactive approach targets existing road facilities whenever a high number of collisions is observed. Although it provides solutions to improve safety, this method requires a significant number of collisions to be recorded before any action is undertaken. Thus leaving road authorities waiting until road collisions claim a high number of fatalities and injuries. An application of this approach is traditional road safety improvement programs which include the identification, diagnosis and remedy of collision prone locations (otherwise known as black spots).  Predominantly, the practice of road safety was limited to carrying out safety analysis and relying on road-side installations. 
For years, this reactive approach has been adopted by researchers to conduct studies and find solutions to road safety problems. However, a new direction of research is the “proactive” approach targeting road safety problems before they occur. In contrast to the reactive approach, the proactive approach attempts to prevent unsafe road conditions by implementing modifications and changes at the planning and design stages.  The success of the reactive and proactive approaches in reducing collision occurrences hinges upon the existence of consistent methods that provide reliable estimates of road safety (Sawalha and Sayed, 2006). There are currently several methods developed to provide reliable estimates of road safety, with Safety Performance Functions (SPFs) being the most prevalent among analysts. SPFs are mathematical models with inherent statistical characteristics which attempt to find 4  relationships between collision counts and a number of roadway characteristics such as traffic volume, horizontal/vertical alignment, etc.  1.2 Current Level of Safety in Design Guidelines  Design guidelines were introduced in response to the increased demand for roads due to the rapid growth in motorization and road usage. Several manuals currently exist to facilitate the design of new infrastructures and they are followed worldwide. In the US, the American Association of State Highway and Transportation Officials (AASHTO, 2004) publishes manuals and books; The Federal Highway Administration (FHWA) publishes the Manual on Uniform Traffic Control Devices (MUTCD, 2009). In Canada, the Transportation Association of Canada (TAC, 1999) publishes design guidelines to promote safe, efficient and environmentally sustainable transportation services. The manuals are continuously updated to reflect the recent advances in safety research, geometric design guidelines and standards. The Transportation Research Board (TRB) has established many standing committees that are primarily responsible for evaluating and improving these design manuals.  The design concepts are transparent enabling designers to be easily and quickly trained. Moreover, having a unified code supports consistency and ensures uniformity in road building across jurisdiction (Zheng, 1997). Although these design guidelines have been very useful, there are two general concerns:  (1) roads built in accordance to design standards are assumed to be 5  safe (Hauer, 1999) which is not necessarily valid; and (2) safety is not explicitly included as a design parameter.  The basis of the former concern is the assumption that the safety of a highway is intrinsically based in its design (McGee et al. 1995). Ideally, highway safety can be maximized by applying the highest geometric design standards. Limited resources and constraints due to physical, right-of-way and environmental features often restrict the highway designer’s ability to develop geometric designs that exceed minimum design standards. Such limits and constraints thus force designers to make critical design decisions that may deviate from these standards. In these cases, the present guidelines provide little opportunity for designers to deviate from the standards although it may be justifiable (Crowel, 1989).  The implication of deviating from the design standards on the overall safety is not known. Safety has not been universally identified by a certain parameter or variable. Different researchers measure safety by different means. 
Consequently, the design guidelines embody safety as a by-product of other measures. This is where the second concern materializes. Even if designers were to abide by the design standards, this may not guarantee an improvement in safety of the designed roadway facility. The overall outcome of the design process is highly influenced by including many factors. Not all of these factors are included directly at the design stage which leads to uncertainties in design variables and models.  6  1.3 Using Reliability Analysis to Account for Design Uncertainty  Several sources of uncertainty with varying degrees exist in different design phases. Melchers (1999) identifies various potential sources (e.g. decisional, prediction, physical, human) of uncertainty in civil engineering. Decision uncertainty occurs when engineers use their judgment and experience to overcome a design problem without adhering to the guidelines. Prediction uncertainties are those based on the adequacy of state-of-the-knowledge tools used by engineers.  Physical uncertainty arises from the inherent variation of the design parameters which could be reduced by providing additional information. Human uncertainty accounts for drivers’ errors committed while they are on the road.  One main source of uncertainty in the design guidelines (e.g. AASHTO, TAC) is that they were developed based on a combination of empirical research, professional experience and judgment. In order to account of this uncertainty as well as uncertainty in the design variables and models, the common approach is to use conservative percentile values for uncertain design inputs. The selection of the percentile values is not based on definitive safety measures; and the safety margin of the design output is generally unknown (Ismail and Sayed, 2009).  These percentile values culminate to provide deterministic standards for design requirements. These deterministic standards characterize the attributes of the road user population by single values. The basic assumption is that all users will drive in the same way, which may not be the case. Knowledge about design inputs (i.e., design speed), model parameters (i.e., perception and brake reaction time), and model form (i.e., calculation of design elements like sight distance) is 7  imperfect. Failure to account for the uncertainty associated with these parameters and inputs is likely to lead to non-optimal design.   To account for safety, uncertainties in design variables need to be accounted for. Reliability analysis has been recently advocated as a robust approach to account for uncertainty in the geometric design process and to evaluate the risk associated with a particular design feature. The main type of uncertainty addressed by reliability analysis is physical uncertainty.  The use of reliability analysis in geometric design allows designers to investigate the effect of each individual geometric element on the overall design. Its importance lies in (Haukass, 2007): (i) ability to rank the input parameters according to their relative importance to the overall model, which allows targeting the important components to improve the performance of the model, and (ii) using the probability of non-compliance (Pnc), an outcome of reliability analysis, as a nominal value for comparison purposes and in code calibraion applications. The relative reliability of alternate design solutions can be compared to facilitate the decision making process between different options. 
Many studies have advocated the use of reliability analysis in geometric road design. However, one main factor that has been inhibiting a wider application of reliability analysis in highway design is the lack of an established relationship between reliability measures and an objective measure of safety such as collision frequency. As such, the nature of the relationship between reliability measures and safety needs to be addressed.  8  1.4 Problem Statement and Research Objectives This thesis provides a method that incorporates both the reactive approach to safety, by developing SPFs involving reliability-based risk measures, and the proactive approach, by using the ensuing risk measures at the design stage to improve safety. Although the thesis focuses on the design of horizontal curves and their implications on sight distance, the proposed methodology could be applied to any other design feature.  For the design of horizontal curves with restricted sight distance, non-compliance occurs whenever the available sight distance (ASD; supply) falls short of the stopping sight distance (SSD; demand). Thus, reducing the probability of non-compliance (Pnc; an outcome of reliability analysis) is crucial to road users’ safety. There are many variables that determine SSD. Although these variables are mostly subject to inherent random variations, current design practices treat them as deterministic.  The first objective of the thesis is to quantify the risk associated with the uncertainty in the design inputs by using appropriate probability distributions to represent them in the reliability analysis. Another objective of the thesis is to establish and investigate the relationship between collision frequency and the probability of non-compliance (Pnc). Establishing this relationship leads to (i) the admission of reliability-based design into traditional benefit-cost analysis, and (ii) wider applications of reliability analysis in road design. The final objective is to optimize horizontal curves cross-section design using the new Collision Modification Factors (CMFs) incorporating Pnc. The proposed optimization provides designers 9  with a proactive approach to the design of cross-section elements in order to (i) minimize the risk associated with restricted sight distance, (ii) balance the risk across the two carriageways of the highway, and (iii) reduce the expected collision frequency. 1.5 Thesis Structure There are six chapters which summarize the content and work of this thesis; together they provide a full view on how reliability analysis can be used to improve safety, relate it to collisions, and calibrate design guidelines. Chapter two summarizes the literature review of reliability theory and the development of safety performance functions (SPFs).  Chapter three discusses the sight distance model and explains how an outcome of reliability analysis such as Pnc is incorporated into SPFs to quantify the safety risk and investigate its effect on collisions. A comprehensive database comprising geometric design features, collisions and traffic volume data for the Trans-Canada Highway (referred to as Highway 1) is used. The data were collected for 100 segments of horizontal curves some of which had a limited sight distance due to the presence of median barriers or side concrete barriers on the road. The First Order Reliability Method (FORM) was used to compute the probability of non-compliance (Pnc) for the 100 road segments. 
Several Negative Binomial (NB) SPFs were developed to investigate the effects of Pnc on predicted collisions by severity level (Injuries and Fatalities, I+F; Property Damage Only, PDO).  10  Since the magnitude of omitted variables’ bias is likely to affect the Pnc impact on collisions, spatial analysis is explored in Chapter four to assess the consequences of such bias. The new data included segments that were further clustered according to similar traffic volumes. The Conditional Auto-Regressive (CAR) and Extended Multiple Membership (EMM) model were developed in a Full Bayes (FB) context via the Markov Chain Monte Carlo (MCMC) simulation techniques using uninformative priors. Two sets of models without and with spatial effects were developed and compared.  An optimization method for cross-section dimensions, where Pnc as well as Collision Modification Factors (CMFs) are targeted as potential means of minimizing risk and collisions on highways is described in Chapter five. A Sequential Quadratic Programming (SQP) algorithm was used to carry out the optimization, for various case studies of horizontal curves that are parts of two highway developments in British Columbia. This Pnc-based proactive approach is proposed to show that incorporating reliability-based risk measures in SPFs may well improve safety. The thesis comes to a conclusion with Chapter six, which summarizes the thesis outcomes, results, discussions and provides suggestions for future research.    11  2 Literature Review This chapter presents an overview of subject areas related to risk-based geometric design. The objective is to provide a comprehensive review of two main topics: reliability analysis and safety performance functions. This is prelude to how those two areas will be combined to provide a general framework of risk-based design.  2.1 Reliability Analysis Design guidelines provide the basic approach for engineers to design roads. However, in some situations engineers will need to make decisions which may require them to deviate from the standards. The risk associated with deviating from the standards is unaccounted for in current guidelines. Managing this risk is a matter of choice on how to allocate available resources to best accommodate a tradeoff between cost and safety (Faber 2006).   Risk management includes analyzing, assessing and making decisions regarding the risk associated with a specific activity or a given hazard. This process includes considering all uncertainties in the current problem as well as examining all possible consequences. Computational and analytical models have been developed to enhance the ability to accurately predict outcomes of possible solutions. These models are typically developed in a deterministic framework and ignore the inherent uncertainties that are present in the model parameters and in the analysis procedures (Haukaas, 2007). Therefore, developing probabilistic methods are becoming increasingly important as they are more informative.  12  Probabilistic analysis is the art of formulating a mathematical model which answers the following question: “what is the probability that a structure will behave in a certain way given that one or more if its properties or geometric dimensions are of a random nature?” (Ditlevsen and Madsen, 2007). This method acknowledges that not all information regarding a geometric feature is known and instead of ignoring the inherent variation; it provides designers with a powerful tool which can enhance their decision-making process.   
An application of probabilistic analysis is reliability analysis. The principles used in reliability analysis follow the limit states design approach frequently used by structural engineers. In this approach, rather than representing the variables in the design equations as single values, which is the norm in current design guidelines, they are treated as random variables having probability distributions.  In limit state design, when the demand exceeds the supply, the system is considered to have failed or not complied with the design parameters. Reliability theory can be used to develop factors of safety that incorporate the uncertainty of the supply and demand variables in the analysis.  The resulting factor of safety is termed the probability of non-compliance (   ), which is the probability that the demand will exceed the supply or that a specific design would not meet standard requirements (Richl and Sayed, 2006). The next subsections explain the reliability theory and the concept of probability of non-compliance which will be dealt with in depth throughout the remaining chapters. 13  2.1.1 Development of Reliability Analysis The true sense of reliability conveys “the concept of dependability, successful operation or performance, and the absence of failures” (Blischke and Murthy, 2000). Reliability of a system is the probability that the system will perform its intended function for a specified period of time without any failure when operating under normal conditions.  Reliability analysis can either be qualitative (verifying the failure modes and causes that contribute to the “failure” of a system) or quantitative (using real failure data and mathematical models to produce quantitative estimates of the system’s reliability). The reliability theory incorporates the interdisciplinary use of: probability, statistics, stochastic modeling, engineering insights into design, and scientific understanding of the failure mechanisms (Blischke and Murthy, 2000). The beginning of the twentieth century marked the first applications of statistical techniques to study the reliability of railroad equipment (Nelson, 1982). The shift in focus from deterministic to probabilistic was first introduced by Mayer (1926) in Germany. In the late 1930s, value theory was used to model the fatigue and lifecycle of materials which opened up a new area of research related to probabilistic modeling. This new area was then further developed in following years worldwide. In 1950s and 1960s reliability engineering first blossomed in the US to respond to a need for more reliability equipment for military and space programs (Nelson, 1982; O’Conner, 2002).  14  The use of probabilistic tools in structural design was developed in the 1980’s (Ang and Tang, 1975; Ellingwood et. al 1980). Over time, reliability analysis was developed to the point that it could be included in structural design codes. Although reliability analysis has been extensively utilized in other fields (e.g. structural and geotechnical engineering), it is not as widely used in transportation engineering as it is in other disciplines. The following subsections explain the theory behind reliability analysis and the applications of reliability analysis in transportation engineering.  2.1.2 Reliability Theory Reliability analysis assesses the system’s ability to accommodate the demand of a specific design element against its capacity (Sarhan and Hassan, 2008). The basic reliability problem is a component problem with two random variables, supply and demand. 
The performance function in the plane represented by these two variables leads to failure or non-compliance (the latter term will be used, as it is common in the field of transportation engineering) when the demand exceeds the supply. A generalized model representing the performance function is shown in Equation (2.1)

g = g(X1, X2, …, Xn) = S - D        (2.1)

where
g = performance function (otherwise referred to as the limit state function),
S and D denote supply and demand, respectively, with non-compliance occurring when g < 0, and
Xi = a combination of supply and demand variables explaining the reliability problem.

The outcomes of the reliability analysis are the reliability index β (shown in Equation 2.2) and the probability of non-compliance, Pnc (shown in Equation 2.3)

β = μg / σg        (2.2)

where μg and σg are the mean and standard deviation of the performance function, respectively, and

Pnc = P(g < 0) = ∫g<0 fx(x1, x2, …, xn) dx1 dx2 … dxn        (2.3)

where fx is the joint probability density function (PDF) for x1, x2, …, xn, and the integration is carried out over the failure or “non-compliance” domain (g < 0). The failure domain is shown in Figure 2.1. Thus, the reliability function, which is the complement of the probability of non-compliance, can be defined by:

R(t) = 1 - FT(t)        (2.4)

where FT(t) is the cumulative distribution function (CDF).

Figure 2.1 Domain of definition and failure limit state of a reliability analysis model (safe state: g > 0, S > D; limit state: g = 0, S = D; failure state: g < 0, S < D). Source: Ditlevsen and Madsen (2007)

2.1.3 Component Reliability Problem

To simplify the notation, Equation (2.1) is re-formulated as

g = S - D        (2.5)

The simplest measure of safety is the central factor of safety, which is the ratio of the average supply to the average demand, given by the following equation

Central factor of safety = μS / μD        (2.6)

A more common measure of safety is the conventional factor of safety, where the average demand is increased by some multiple of the standard deviation for demand whereas the average supply is decreased by some multiple of the standard deviation for supply. By using this measure, designers are implying that there is a level of uncertainty in the values for supply and demand. To be conservative, the supply is decreased and the demand is increased. In Equation (2.7), k is some multiple of the standard deviation

Conventional factor of safety = (μS - kσS) / (μD + kσD)        (2.7)

Reliability theory can be used to develop factors of safety that incorporate this uncertainty of the supply and demand variables. The resulting factor of safety (the reliability index, β) increases in value as the supply exceeds the demand. The performance function (2.5) can be used to calculate the probability of non-compliance and the reliability index β. Ang and Tang (1975) described a process that can be used to derive the expected value and variance of a design parameter. This process can also be used to derive the measure of safety, MS, given in Equation (2.8)

MS = E(S) - E(D)        (2.8)

where E(S) and E(D) are the expected values for S and D. Since the variance of the safety margin is σS² + σD² when supply and demand are independent, it can be shown that β is defined as noted in Equation (2.9)

β = [E(S) - E(D)] / √(σS² + σD²)        (2.9)

Graphical representation of β is shown in Figure 2.2.

Figure 2.2 The reliability index. Source: Richl (2003)

The probability of non-compliance is an estimate of the chance that an engineering system (a highway in this case) fails to perform its stated purpose under anticipated operational conditions (see Figure 2.3).
The probability of non-compliance can be calculated as follows

Pnc = Φ(-β)        (2.10)

where Φ denotes the standard normal cumulative distribution function. Figure 2.4 displays the inverse relationship between Pnc and β.

Figure 2.3 The probability of non-compliance. Source: Ismail and Sayed (2009)

Figure 2.4 The relationship between Pnc and β

2.1.4 Reliability Methods

The reliability methods proposed in the literature are categorized into two families according to whether the random variables are treated with tools of probability theory or those of statistics (Hurtado, 2004). Figure 2.5 shows a breakdown of those methods. In simple two-variable systems, exact methods have been developed to solve for the probability of non-compliance and β. However, exact methods to solve reliability equations are not used when there are more than two variables in the performance function, when the performance function is non-linear, or when the variables are not normally distributed (Ellingwood et al., 1980). These problems are solved using simulations or approximate methods.

Figure 2.5 Methods to solve the limit state function (methods of analysis: synthetic (Monte-Carlo), comprising direct and substitution sampling; and analytic (Taylor-based), comprising FOSM, FORM and SORM). Source: Hurtado (2004)

2.1.4.1 Synthetic Method (Monte-Carlo Simulation)

In order to estimate the probability of non-compliance, Equation (2.3) can be adapted to:

Pnc = ∫ I[g(x) ≤ 0] fx(x) dx        (2.11)

The difference here lies in the integration domain; at first the domain was the sample space of vector X for which g(x) ≤ 0. For Monte-Carlo simulation, the integration is carried over the entire sample space of X, where I[g(x) ≤ 0] is an indicator function equal to 1 if g(x) ≤ 0 and 0 otherwise. Thus, for N sample realizations of vector X (xi, i = 1, 2, …, N), Pnc is computed by Equation (2.12)

Pnc ≈ (1/N) Σ I[g(xi) ≤ 0],  i = 1, 2, …, N        (2.12)

A large number of realizations of the basic random variables X are generated; the simulations which result in an outcome with a negative limit-state function (g(xi) ≤ 0) are counted (Nf). After N simulations, the probability of non-compliance is estimated through Pnc ≈ Nf/N. As N → ∞, the estimate of the probability of non-compliance becomes exact. However, the main drawback of simulation is that it is computationally expensive and time consuming. If a Monte-Carlo simulation is carried out to estimate a probability in the order of 10⁻⁶, 10⁸ simulations are expected to be necessary to achieve an estimate with a coefficient of variation in the order of 10% (Faber, 2006).

In the event that the sampling domain is located in a region far away from that of the indicator function, the success rate in the simulations is low. To overcome this problem, variance reduction techniques such as “importance sampling” were proposed to reduce the variance of the probability estimate. Importance sampling utilizes information about the domain of the probability integral. It attempts to center the simulations about the point in the sample space that contributes the most to the probability of non-compliance. In this method, if the limit state function is not “too non-linear” (Faber, 2006) about the design point, the success rate of the simulations will be increased to 50%.
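For a simple component problem with independent, normally distributed supply and demand, Equations (2.9), (2.10) and (2.12) can be checked against one another with a few lines of code. The Python sketch below is purely illustrative: the means and standard deviations are assumed values, not quantities taken from the thesis data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Assumed, illustrative moments for supply S and demand D (e.g. sight distances in metres),
# treated as independent normal random variables.
mu_S, sigma_S = 130.0, 15.0
mu_D, sigma_D = 110.0, 20.0

# Closed-form results, Eqs. (2.9) and (2.10): beta and Pnc = Phi(-beta).
beta = (mu_S - mu_D) / np.sqrt(sigma_S**2 + sigma_D**2)
pnc_exact = norm.cdf(-beta)

# Crude Monte-Carlo estimate, Eq. (2.12): count realizations with g = S - D <= 0.
N = 1_000_000
S = rng.normal(mu_S, sigma_S, N)
D = rng.normal(mu_D, sigma_D, N)
g = S - D                      # limit-state function
pnc_mc = np.mean(g <= 0)       # indicator average = Nf / N

print(f"beta = {beta:.3f}, Pnc (closed form) = {pnc_exact:.4f}, Pnc (Monte-Carlo) = {pnc_mc:.4f}")
```

When Pnc is small, almost every draw lands in the safe region and the estimate becomes noisy, which is exactly the situation in which importance sampling, centred on the design point, becomes attractive.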
However, if no prior information is known regarding the design point and its relative distance to the limit state function (i.e., β is too large), sampling is not the most suitable method to be used. Monte-Carlo simulation methods are ideally used when the limit state function is associated with difficulties, such as when the limit state function is not differentiable or when there is more than one design point at which non-compliance occurs (Faber, 2006).

2.1.4.2 Analytic Methods (Taylor-based)

Analytic methods such as the First Order Second Moment (FOSM), First Order Reliability Method (FORM) and Second Order Reliability Method (SORM) require that the input vector of random variables be defined by: (1) its joint density function, (2) an approximation method, or (3) being transformed into a set of independent variables in the standard normal space (Hurtado, 2004). This transformation eliminates the correlation among samples and as such improves the performance of the overall method when applying statistical learning tools. The various methods by which the transformation is carried out depend on the information available regarding the random variables. They include:

Normal Variables
If the random variables are independent and normally distributed, the transformation is simply a standardization from vector X to U in the standard normal space. Equation (2.13) shows the standardization process:

ui = (xi - μXi) / σXi        (2.13)

where μXi and σXi are the mean and standard deviation, respectively, of the variable Xi.

Normal Translation
If the random variables are non-normal but are uncorrelated (i.e., independent), the normal translation is applied as shown in Equation (2.14)

ui = Φ⁻¹(FXi(xi))        (2.14)

where FXi(xi) is the probability distribution of the variable Xi.

Rosenblatt Transformation
If the joint probability density function of all variables is known, then the Rosenblatt Transformation is applied. This method makes use of the conditional probability given by

FXi(xi | x1, …, xi-1) = ∫ from -∞ to xi of fXi(t | x1, …, xi-1) dt        (2.15)

where FXi(xi | x1, …, xi-1) denotes the conditional CDF of Xi and fXi(t | x1, …, xi-1) denotes the conditional PDF of Xi. The Rosenblatt transformation is defined by:

ui = Φ⁻¹(FXi(xi | x1, …, xi-1))        (2.16)

Nataf Transformation
If the joint density function is unknown but the marginal distributions and correlation structure are known, the Nataf Transformation is the suitable choice in stochastic mechanics. The vector of input random variables is transformed to a vector Z that has zero mean, unit standard deviation and the given correlation matrix.

zi = Φ⁻¹(FXi(xi))        (2.17)

The set of independent normal variables, U, is then obtained through a Cholesky or spectral decomposition.

The advantages and disadvantages of the analytic methods FOSM, FORM and SORM are outlined next. FOSM is the most elementary method, introduced by Cornell (1967), and is based on first-order approximations of the mean and standard deviation of the performance function. The transformation of the random variables for this method is carried out based on the Normal Variable method, where the limit-state function is standardized. Equation (2.13) can be modified to:

β = μg / σg        (2.18)

where μg, the mean of the limit-state function, is given by μS - μD and σg, the standard deviation of the limit-state function, is given by √(σS² + σD² - 2ρSD σS σD), and ρSD is the correlation coefficient between the supply and demand variables. However, this method can only be used if the variables and the performance function are linear in nature.
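The normal translation of Equation (2.14) is the building block the analytic methods rely on whenever an input is non-normal. The short sketch below illustrates it for a single assumed lognormal input; the choice of distribution and its parameters (a perception-brake reaction time is used only as an example) are hypothetical.

```python
import numpy as np
from scipy.stats import lognorm, norm

# Illustrative non-normal design input, e.g. a perception-brake reaction time in seconds.
# The lognormal shape and scale are assumed values chosen only for demonstration.
prt = lognorm(s=0.3, scale=1.5)

x = np.array([1.0, 1.5, 2.5])      # realizations in the original (physical) space

# Normal translation, Eq. (2.14): u = Phi^{-1}(F_X(x)).
u = norm.ppf(prt.cdf(x))

# The mapping is probability-preserving, so applying it in reverse recovers x.
x_back = prt.ppf(norm.cdf(u))

print("u      =", np.round(u, 3))
print("x_back =", np.round(x_back, 3))
```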
Simply representing the performance function in a different but equivalent form (for example, as a ratio of supply to demand rather than as their difference) would yield different results than the formulation in Equation (2.7). This is known as the invariance problem, and more accurate results can therefore be obtained using other methods.

The more widely used method is FORM, which is typically selected over the other methods due to its advantages. It provides more accurate results than FOSM as it overcomes the need to use only variables that are normally distributed. FORM also presents more detailed results such as parameter importance. The basic FORM framework involves several steps: (1) identifying the input distributions for each of the random variables, (2) formulating the reliability problem in terms of the limit-state function, (3) transforming the random variables into uncorrelated standard normal random variables, (4) finding the design point (which is the point on the limit-state surface closest to the origin in the standard normal space) through an iterative process, and (5) obtaining the estimates of Pnc and β. This method is shown graphically in Figure 2.6.

At times, there is information available in the literature regarding the probability distributions for each of the random variables. However, that is not always the case, and sometimes researchers would have to devise experiments to collect information on the random variables. After assigning probability distributions to each of the random variables, the limit-state function must be formulated. This formulation depends on the geometric element under consideration and is generally done in terms of the supply and demand.

Figure 2.6 Design point found using FORM. Source: Haukaas (2007)

The FORM procedure to determine Pnc and β involves transforming the random variables X into uncorrelated standard normal variables Y. This is important for two reasons: (1) the new standard normal space is dimensionless and so distances can be measured, and (2) the probability distribution in the Y-space is the multivariate normal probability distribution. The variables are transformed into the standard normal space by the probability preserving transformation (Haukaas, 2007). The method used for the transformation depends on the available information and can be any of the previously described methods. The standard normal space is given by:

Y = Φ⁻¹(F(X))        (2.19)

where Y is the vector of the transformed design variables and F(X) is the CDF of the design inputs X. FORM analysis takes precedence over FOSM in that the first-order approximation is not about the mean, but is rather about the design point where

g(X) = G(Y) = 0        (2.20)

This design point is the solution to the constrained optimization problem

Y* = argmin { ‖Y‖ : G(Y) = 0 }        (2.21)

where Y* is the design point, G is the performance function in the standard normal space and “argmin” is the argument of the minimum of a function. The theoretical background of FORM can be found in a number of different texts on reliability (Ellingwood et al., 1980; Melchers, 1999). FORM problems can be solved using a number of different commercially available or academic software programs (Melchers, 1999) or using MATLAB subroutines (Haukaas, 2007).

Another widely used reliability method is SORM. It follows the same principle as FORM; however, the failure surface is expanded to the second order. The resulting limit-state surface is represented as a hyper-paraboloid as opposed to a hyper-plane (Haukaas, 2007).
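Returning to the design-point search of Equations (2.19) through (2.21) for a moment, the sketch below applies the standard Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration to a small sight-distance-style limit state with two independent normal inputs. It is a minimal illustration of the general FORM recipe, not the Rt/FERUM implementation used in the thesis, and every numerical value in it is an assumption made only for demonstration.

```python
import numpy as np
from scipy.stats import norm

# Assumed, illustrative inputs: available sight distance (supply) is fixed, while the
# operating speed v and perception-brake reaction time t are independent normal variables.
ASD = 150.0                      # available sight distance, m (assumed)
mu = np.array([22.0, 1.5])       # means of [v (m/s), t (s)]; 22 m/s is roughly 80 km/h
sigma = np.array([3.0, 0.4])     # standard deviations (assumed)
a = 3.4                          # deceleration rate, m/s^2 (treated as deterministic here)

def g(x):
    """Limit-state function g = ASD - SSD, with SSD = v*t + v^2 / (2a)."""
    v, t = x
    return ASD - (v * t + v**2 / (2.0 * a))

def grad_g(x):
    """Gradient of g with respect to the physical variables [v, t]."""
    v, t = x
    return np.array([-(t + v / a), -v])

# HL-RF iteration in the standard normal space (independent normals, so the
# transformation is simply the standardization of Eq. (2.13)).
u = np.zeros(2)                  # start at the mean point (u = 0)
for _ in range(50):
    x = mu + sigma * u           # map back to physical space
    G = g(x)
    grad_u = sigma * grad_g(x)   # chain rule: dG/du_i = sigma_i * dg/dx_i
    u_new = ((grad_u @ u - G) / (grad_u @ grad_u)) * grad_u
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)         # reliability index = distance from origin to design point
pnc = norm.cdf(-beta)            # FORM approximation of the probability of non-compliance
print("design point (v, t) =", np.round(mu + sigma * u, 2))
print(f"beta = {beta:.3f}, Pnc = {pnc:.4f}")
```

FORM linearizes the limit-state surface at the design point found this way; SORM refines the estimate by additionally using the curvature of the surface at that point, as described next.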
This is accomplished by second-order Taylor expansion of the limit-state function about the design point. There are various types of SORM available: curvature-fitted SORM, point-fitted SORM and gradient-based SORM. The curvature-fitted SORM is carried out by describing an analytical expression for the limit state function and differentiating it twice to obtain second-derivatives. Although this method is straightforward, it is computationally expensive. The point-fitted SORM is similar but selects points that are further away from the design point. This method is highly influenced by the paraboloid matrix which must be selected carefully. The gradient-based SORM provides the curvature of the limit-state surface based on the last two trial points in the search for the design point. The second order derivatives of the limit-state function are unnecessary but the algorithm must be carried out several times in order to search for the design point. If the design surface is not too “non-linear” and there is no need warranting the implementation of SORM; FORM is the most suitable approximation approach. The differences between FORM and SORM are shown in Figure 2.7. 29  Figure 2.7 Difference between FORM and SORM  Source: Haukaas (2007) In the current study, FORM was carried out using the Rt software and FERUM in MATLAB. The software requires information regarding the probability distributions of the random variables and the formulation of the performance function. The outcomes are Pnc, β, the design point and the importance vector, which ranks the input parameters according to their relative importance to the overall model. 2.1.5 Reliability Analysis in Road Design Probabilistic methods were first introduced into the area of highway design by Moyer and Berry (1940). They developed a method to determine the safe speed at which vehicles should be 30  traveling on highway curves. The authors used a ball-bard indicator to establish an acceptable “safe speed” on horizontal curves. They identified the percentile values for the operating speed at various design speeds which was the starting point upon which other studies have based their results on. The operating speed was considered to be a random variable and they recommended using the 85th percentile as the operating speed, for a design speed ≤ 30 mph and the 90th percentile for 35 mph.  Navin (1990, 1991) outlined the necessary conditions to use limit state designs or reliability based techniques to understand the random elements of highway safety problems. He provided important arguments in favor of adopting reliability theory in highway geometric design guidelines and discussed several applications of the design of typical highway elements.  Navin (1990) carried out a study to investigate whether safety measures for stopping sight distance, horizontal curves, decision sight distance, passing sight distance and vertical curves could be developed. Margins of safety were calculated under the assumption that the variables were normally distributed and independent. FOSM analysis was used to compute the reliability index for the following geometric features: two horizontal curve radii, upper and lower limits of the Institute of Transportation Engineers passing sight distance and stopping sight distance for the upper and lower speeds at a design speed of 80km/h. 
The following generic design equation was proposed to address the uncertainty in the design:                         (2.22) where 31     = performance factor (design safety parameter),     = highway supply parameter,    = highway safety importance,    = exposure factor,    = traffic mix,    = driver mix,     = environmental factor,     = terrain factor,    = design standard or construction standard, and       = driver/vehicle demand. The performance factor,  , is chosen so as to ensure that the highway supply parameter,   , is large enough to maintain an acceptable safety margin against driver or vehicle demand,     . Although this method was a first step to incorporate reliability-based methodology in the design process, the method was not adopted in current geometric design standards (Ismail, 2006).    Sight distance requirements were studied by several authors to determine whether a probabilistic tool would outperform the deterministic tools provided by guidelines. Faghri and Demetsky (1988) and Easa (1994) demonstrated the potential of using reliability to evaluate limitations in 32  sight distance at road-railway grade crossings. The objective was to allow adequate sight distance at railroad grade crossings. Faghri and Demetski (1988) compared the performance of their probabilistic model to five other nationally recognized models, they found that their model far exceeded the more “conventional” models used. They viewed it as a valuable predictive tool which can further be improved to cover other applications. Easa (1994) carried out the analysis using FOSM by selecting a specific probability of non-compliance and designing the sight distance on that basis. The results were corroborated using Monte-Carlo simulation and the probabilistic method was found to be accurate.  Easa (1993) presented a probabilistic method to replace the deterministic method of computing intergreen (yellow plus all-red) interval at signalized intersections. The goal was to eliminate the dilemma zone, the zone where a driver is faced with a yellow signal but is unable to stop or clear the intersection safely. Mathematically, this zone is the equivalent of equating the stopping sight distance and intersection clearing distance. The probabilistic method was carried out using FOSM to compute the mean and variance for each random variable (approach speed, perception reaction time, deceleration rate and vehicle length). The author derived a closed-form solution for the intergreen interval. Two values for the probability of non-compliance were chosen and design charts were constructed obtain the intergreen times. The main conclusion of this research is that the probabilistic method is a valuable tool in designing a specific feature (in this case the intergreen interval) at any desired reliability level (reliability index or probability of non-compliance).  33  In an effort to overcome the “extreme values” associated with intersection sight distance available in AASHTO, Tidwell and Humphreys (1970) and Easa (2000) adopted a reliability method based on the mean and variance of probability distributions for each random variable and accounting for correlations amongst those variables. They evaluated the design criteria for traffic signal timing and intersection sight distance using this proposed method as opposed to using single deterministic values supplied by the design guidelines. The probability distributions were obtained from a combination of the literature or assumptions for design values. 
The probability of non-compliance was computed at varying available sight distances using FOSM analysis. After calibrating the current AASHTO model, Easa (2000) found that the corresponding reliability levels are high, suggesting that the proposed method provides the designer with more flexibility. An alternative model with a reliability level "deemed acceptable to the designer" could have been chosen but would not have satisfied the AASHTO requirements.

Reliability techniques have also been applied to analyze operational conditions and sight distance restrictions on horizontal curves (Echaveguren et al., 2005; Richl and Sayed, 2006; Ismail and Sayed, 2010). Echaveguren et al. (2005) proposed a methodology to determine the margin of safety (reliability index) of an existing curve by reliability analysis. The new elements of their analysis included representing driver behavior by the operating speed, incorporating pavement surface conditions by means of friction, and using a probabilistic method to estimate the reliability index as a measure of the margin of safety. The results revealed that the curve radius, skid resistance and macro-texture have a highly significant impact on the probability of non-compliance, whereas the superelevation does not.

Richl and Sayed (2006) investigated the effect of narrow medians on horizontal curves with restricted sight distance. They studied a series of horizontal curves with varying horizontal sight distance restrictions and computed the probability of non-compliance for each case by FORM. The results indicated that a narrow median combined with a tight horizontal curve represents an issue for drivers, as they might not be able to stop within the sight distance available to them.

Sight distance restriction is a common safety concern, as evidenced by the research carried out in that area. The main shortcoming of current design guidelines is the deterministic nature of the design requirements and the unknown safety implications of deviating from these standards. Ismail and Sayed (2010) proposed a methodology to evaluate the risk of deviating from design requirements. They measured the risk of horizontal curves with restricted sight distance and devised design aids to assist in measuring the risk of limited sight distance for modified design alternatives. They carried out the analysis on two case studies, the results of which showed that the proposed road designs had a high risk of limited sight distance. Moreover, the risk levels associated with the design requirements were highly inconsistent. This further strengthens the need to calibrate current guidelines as a step toward improving the overall safety of geometric features.

Reliability analysis has also been used in conjunction with Monte Carlo simulation to evaluate sight distance requirements and compute the probability of sight distance limitations (El-Khoury and Hobeika, 2007; Sarhan and Hassan, 2008). El-Khoury and Hobeika (2007) studied the effects of incorporating uncertainty into passing sight distance (PSD) requirements. They devised a Monte Carlo simulation to obtain the PSD distribution and verified their results using a closed-form analytical method. They calibrated the PSD requirements for three different models: AASHTO, enhanced Glennon, and MUTCD. Glennon's model was found to best depict the various components of a passing maneuver, while AASHTO's model overestimated the PSD requirements.
Sarhan and Hassan (2008) used Monte Carlo simulation to compute the probability of non-compliance associated with insufficient sight distance. Due to the lack of data, the authors used a computer program to generate design parameters and calculate sight distance profiles in two- and three-dimensional projections. They investigated various highway alignments: horizontal curves overlapping with flat grades, crest curves and sag curves. When calibrating the standard design values, the authors found that the resulting probability of non-compliance was very conservative.

In subsequent work, Sarhan and Hassan (2009) investigated the effect of vertical curvature on the available sight distance of horizontal curves and proposed a reliability-based method to calculate the probability of "hazard" (i.e., non-compliance). They used reliability analysis to compute the minimum offset based on the probability of non-compliance. Their methodology considers three-dimensional alignments to account for vertical curvature and to study its influence on the sight distance requirement. They devised design aids to demonstrate the applicability of their approach.

Ismail and Sayed (2009) proposed a general framework for calibrating geometric design codes to yield outputs with consistent Pnc. In theory, a design output/requirement for a geometric feature should correspond to a probability of non-compliance; however, the current values of Pnc are inconsistent. They proposed a method to determine a target value for design safety by presenting an application of the calibration framework to the standard design model of crest vertical curves. Evaluating the design quality of a representative group of existing sites would yield a target/acceptable risk level, calculated as the average of a specific percentage of the representative group that has an acceptable and cost-effective safety level.

De Solminihac et al. (2007) noted that deterministic geometric characteristics are selected assuming uniform driver behavior and pavement surface conditions. They devised a methodology that estimates a reliability index using the Hasofer-Lind method to design horizontal curves for low-volume roads while accounting for the variability in the design components. Their method accounts for the variability in skid resistance, pavement texture, driver behavior and geometric design elements.

2.2 Safety Performance Functions (SPFs)
Analysts and researchers relate the safety of a location (i.e., an intersection or segment) to the frequency and severity of collisions occurring at that location. Mathematical forms were therefore devised to investigate the effect of traffic and geometric characteristics on the frequency of collisions; these were referred to as Collision Prediction Models (CPMs). Since the term CPM might suggest that the model is used only for prediction purposes, which is not always the case, these models have more recently been referred to as Safety Performance Functions (SPFs). These functions are important for two reasons. For existing facilities, they provide an estimate of the collision frequency and of the effect of any treatments carried out on the facility. For planned facilities, they serve as a tool to estimate the predicted collision frequency (Shen, 2007). SPFs provide estimates of expected collision frequency as a function of traffic volume and roadway geometry.
Hadayeghi (2009) presented the following generic form for SPFs:

$E(Y) = \exp(\mathbf{X}\boldsymbol{\beta})$    (2.23)

where
$E(Y)$ = expected number of collisions per a specific unit of time,
$\boldsymbol{\beta}$ = vector of coefficients of the individual covariates, and
$\mathbf{X}$ = matrix of individual covariates.

Equation (2.23) is used to predict the collision frequency per unit of time as a function of independent variables or covariates, which may include the Average Annual Daily Traffic (AADT), segment length and lane configuration. The main objective of the analysis is to estimate the vector of coefficients $\boldsymbol{\beta}$. Several methods were developed to obtain these estimates, depending on the choice of regression technique. Normal linear regression models were used first, but were heavily criticized since collisions are discrete, non-negative and rare events that cannot be modeled using linear regression methods (Jovanis and Chang, 1986; Hauer et al., 1988; Miaou and Lum, 1993). Accordingly, generalized linear regression modeling is currently used by researchers and analysts as the state-of-the-practice technique to develop SPFs. The next sections provide an overview of the different types of regression models associated with various probability distributions for collision frequency, including the Poisson, Poisson-Gamma and Poisson-Lognormal.

2.2.1 Regression Models
2.2.1.1 Poisson Regression Model
It is generally accepted that the use of a Poisson process to model collisions is appropriate as it recognizes that collisions are random, discrete, non-negative and sporadic events (Hauer, 1988; Lord et al., 2005). As such, it was the first distribution, apart from the normal, used to model collisions. If $Y_i$ denotes the number of collisions at site i (i = 1,…,n), it is assumed that collisions at the n sites are independent and that

$Y_i \,|\, \lambda_i \sim Poisson(\lambda_i)$    (2.24)

Thus, the probability of site i having $y_i$ collisions is given by

$P(Y_i = y_i) = \dfrac{e^{-\lambda_i}\lambda_i^{y_i}}{y_i!}$    (2.25)

where $\lambda_i$ is a Poisson parameter related to site-specific attributes (i.e., exposure, traffic and geometric characteristics) expressed as (Miaou and Lord, 2003)

$\lambda_i = \exp(\mathbf{x}_i\boldsymbol{\beta})$    (2.26)

where $\mathbf{x}_i$ is a vector of the variables having an influence on collisions and $\boldsymbol{\beta}$ is a vector of regression coefficients estimated from the data. The unique attribute of the Poisson model is that the mean and variance are equal:

$E(Y_i) = Var(Y_i) = \lambda_i$    (2.27)

The main advantage of using a Poisson model lies in the ease of calculating its error structure. However, Equation (2.27) also represents a limitation. Studies have shown that most collision data tend to be over-dispersed (i.e., the variance is greater than the mean), making the Poisson less likely to adequately represent the actual collision characteristics (Kulmala and Roine, 1988; Kulmala, 1995; Cameron and Trivedi, 1998; Winkelmann, 2003). Over-dispersion is attributed to several sources (Miaou and Lum, 1993): (1) CPMs do not generally include all variables explaining collision occurrence, (2) uncertainties exist in vehicle exposure data and traffic variables, and (3) roadway environment conditions such as lighting, weather and traffic conditions are non-homogeneous. To account for the over-dispersion in collision data, current practice utilizes the Poisson-Gamma or Poisson-Lognormal distributions.

2.2.1.2 Poisson-Gamma Model
Researchers have introduced the Poisson-Gamma model, which leads to the Negative Binomial (NB) regression model.
The NB model is an extension of the Poisson model that accommodates over-dispersion:

$\lambda_i = \mu_i \exp(\varepsilon_i)$    (2.28)

where $\mu_i = \exp(\mathbf{x}_i\boldsymbol{\beta})$ and $\exp(\varepsilon_i)$ is a multiplicative random effect. The negative binomial model is obtained under the assumption that

$\exp(\varepsilon_i) \sim Gamma(\kappa, \kappa)$    (2.29)

where $\kappa$ is the inverse of the dispersion parameter. The probability density function of the negative binomial model is given by

$P(Y_i = y_i) = \dfrac{\Gamma(y_i + \kappa)}{\Gamma(\kappa)\, y_i!}\left(\dfrac{\kappa}{\kappa + \mu_i}\right)^{\kappa}\left(\dfrac{\mu_i}{\kappa + \mu_i}\right)^{y_i}$    (2.30)

The mean and variance of the NB are given by

$E(Y_i) = \mu_i, \quad Var(Y_i) = \mu_i + \dfrac{\mu_i^2}{\kappa}$    (2.31)

2.2.1.3 Poisson-Lognormal Model
The Poisson-Lognormal (PLN) model is another alternative to the Poisson model, and several researchers (Miaou et al., 2003; Lord and Miranda-Moreno, 2008; Aguero-Valverde and Jovanis, 2008; El-Basyouny and Sayed, 2009a, b, c, 2010a, b, c) have advocated this model for its ability to address over-dispersion. Under PLN, the multiplicative random effect is assumed to follow the lognormal distribution

$\exp(\varepsilon_i) \sim Lognormal(0, \sigma^2) \quad \text{or equivalently} \quad \varepsilon_i \sim Normal(0, \sigma^2)$    (2.32)

where $\sigma^2$ represents the extra-Poisson variance. If the dataset contains outliers, the PLN model is a suitable choice for modeling collision occurrence as its tails are asymptotically heavier than those of the Gamma distribution (Kim et al., 2002; Lord and Miranda-Moreno, 2008). The mean and variance of the PLN are given by

$E(Y_i) = \mu_i e^{\sigma^2/2}, \quad Var(Y_i) = E(Y_i) + E(Y_i)^2\left(e^{\sigma^2} - 1\right)$    (2.33)

Unlike the NB, the PLN model requires more computation and does not admit a closed-form posterior distribution. It has therefore not been adopted as frequently as the other models, even though it offers more flexibility than the NB model.

2.2.1.4 Enhanced Regression Models
Various other techniques have been developed to improve the predictive power of SPFs. They are outlined below, but it is beyond the scope of this research to provide a detailed statistical explanation of each model.

Zero-Inflated Regression Models
Collision data can include a high proportion of zero counts, which are more prevalent in rural areas. This is problematic when the observed zero counts outnumber the zero counts tolerated by Poisson models. Datasets that include a high number of locations with zero collisions are characterized by a low sample mean (Miranda-Moreno, 2006). Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB) probability models have been developed to handle this phenomenon. Under the ZIP model, counts are generated from two sources: (1) a zero state, accounting for the proportion of zeros that is not part of the Poisson distribution, and (2) a usual random process following a Poisson distribution. ZINB is claimed to be more flexible than ZIP as it handles over-dispersion as well as large proportions of zeros. The issue of excessive zero counts is discussed by Lord et al. (2005), where several alternatives were suggested for handling such a problem.

Variable Variance Models
Recent work on traffic safety modeling challenged the assumption that the dispersion parameter should be fixed (Heydecker and Wu, 2001; Miaou and Lord, 2003; Miranda-Moreno et al., 2005; El-Basyouny and Sayed, 2006). The approach is an extension of the traditional NB and PLN models in which the dispersion parameter (or rather its inverse) is allowed to vary according to traffic, geometric and/or environmental covariates, thereby increasing the flexibility of the model and improving the accuracy of the resulting estimators.
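The over-dispersion that distinguishes Equations (2.31) and (2.33) from Equation (2.27) is easy to verify numerically. The short Python sketch below simulates Poisson, Poisson-Gamma and Poisson-Lognormal counts with a common mean and compares the empirical moments with the theoretical ones; the parameter values (mean 2.0, κ = 1.5, σ² = 0.5) are illustrative assumptions only, not values estimated in this thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000                         # number of simulated sites (illustrative)
mu, kappa, sigma2 = 2.0, 1.5, 0.5   # assumed mean, inverse dispersion, extra-Poisson variance

# Poisson: E(Y) = Var(Y) = mu, Equation (2.27)
y_pois = rng.poisson(mu, n)

# Poisson-Gamma (NB): lambda_i = mu * Gamma(kappa, kappa), Equations (2.28)-(2.31)
lam_nb = mu * rng.gamma(shape=kappa, scale=1.0 / kappa, size=n)
y_nb = rng.poisson(lam_nb)

# Poisson-Lognormal: lambda_i = mu * exp(eps_i), eps_i ~ N(0, sigma2), Equations (2.32)-(2.33)
lam_pln = mu * np.exp(rng.normal(0.0, np.sqrt(sigma2), n))
y_pln = rng.poisson(lam_pln)

pln_mean = mu * np.exp(sigma2 / 2)
print("Poisson mean/var:", y_pois.mean(), y_pois.var())               # both ~ 2.0
print("NB      mean/var:", y_nb.mean(), y_nb.var(),
      "theory:", mu, mu + mu**2 / kappa)                               # variance exceeds mean
print("PLN     mean/var:", y_pln.mean(), y_pln.var(),
      "theory:", pln_mean, pln_mean + pln_mean**2 * (np.exp(sigma2) - 1))
```

Both mixed models reproduce the variance inflation that motivates their use over the pure Poisson form.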
For information on other models, including random-parameter, random-effect and multivariate models, a more comprehensive review is available in El-Basyouny and Sayed (2011).

2.2.2 Development of SPFs
Most SPFs were developed using NB regression (Kulmala, 1995; Maher and Summersgill, 1996; Hauer, 1997; Sawalha and Sayed, 2001). In addition, procedures for NB model building and outlier analysis were developed by Sawalha and Sayed (2006). For road segments, the mean $\mu$ is related to various site-specific traffic, geometric and environmental variables by means of the following link function

$\mu = a_0 \, L \, V^{a_1} \exp\left(\sum_{j=1}^{m} b_j x_j\right)$    (2.34)

where the segment length L is an offset variable, V is the annual average daily traffic (AADT), $x_j$ is any of m variables additional to L and V, and $a_0$, $a_1$, $b_j$ are model parameters. Two measures are usually used to assess the goodness of fit of NB models: the Scaled Deviance (SD) and the Pearson $\chi^2$ statistic. McCullagh and Nelder (1989) have shown that the scaled deviance for the NB is given by

$SD = 2\sum_{i=1}^{n}\left[y_i \ln\left(\dfrac{y_i}{\hat{\mu}_i}\right) - (y_i + \kappa)\ln\left(\dfrac{y_i + \kappa}{\hat{\mu}_i + \kappa}\right)\right]$    (2.35)

The Pearson $\chi^2$ is defined as

$\chi^2 = \sum_{i=1}^{n}\dfrac{(y_i - \hat{\mu}_i)^2}{\widehat{Var}(y_i)}$    (2.36)

The scaled deviance and the Pearson $\chi^2$ are asymptotically $\chi^2$ distributed with n–p degrees of freedom, where p is the number of model parameters (Aitkin et al., 1989).

2.2.3 Parameter Estimation Methods
There are two common approaches used to calibrate the parameters of SPFs, namely the Empirical Bayes (EB) and Full Bayes (FB) approaches. The main difference between them is the way in which the prior parameters are determined. In the EB approach, the parameters are estimated using the maximum likelihood estimation technique. In the FB approach, the parameters are assumed to have hyper-priors, giving rise to hierarchical models. If prior information is available, it should be used to formulate informative hyper-priors. If not, vague (i.e., uninformative) hyper-priors are used; such priors are essentially flat (uniformly distributed), meaning that every possible value of the parameter is treated as equally likely.

For NB models, the parameters $\mu$ and $\kappa$ can be estimated using the maximum likelihood method to obtain the EB predicted collisions. The estimates of the parameters of Generalized Linear Models (GLMs) can be obtained via such commercially available statistical software as GLIM (Francis et al., 1993), GENSTAT (Lane et al., 1988) and GENMOD (SAS Institute Inc., 2002-2003). In contrast, under the FB approach, hyper-priors have to be specified for $\mu$ and $\kappa$ and combined with the likelihood function to obtain the joint posterior distribution. The FB predicted collisions are the posterior means of $\mu$. The posterior means of the parameters can be obtained using readily (and freely) available software such as WinBUGS (Lunn et al., 2002).

2.2.4 SPFs in Road Design
The first models developed to investigate the effect of geometric elements on collisions were conventional linear regression models. Several studies (Jovanis and Chang, 1986; Saccomanno and Buyco, 1988; Miaou and Lum, 1993) demonstrated that these models were inappropriate and that the inferences drawn from them were erroneous due to the lack of a distributional property able to adequately describe the random and discrete nature of vehicle collision events on the road (Miaou, 1994; Lee et al., 2005).
Consequently, conventional linear regression models were replaced by Poisson and NB models for modeling collision frequency (Maycock and Hall, 1984; Joshua and Garber, 1990; Miaou et al., 1991; Miaou et al., 1992; Shankar et al., 1995; Milton and Mannering, 1998). More detailed studies developed zero-inflated probability processes such as ZIP and ZINB regression models (Miaou, 1994; Shankar et al., 1997) to account for the possibility of zero-inflated counting processes.

Miaou et al. (1992) investigated the presence of a relationship between truck accidents and highway geometric design by developing a Poisson regression model. Traffic volume, horizontal curvature and vertical grade were found to be significantly correlated with collisions, while the shoulder width was less correlated in comparison.

Hadi et al. (1995) developed negative binomial regression models to calibrate the safety effects of cross-section design elements on total, fatal and injury collisions. The results showed that, depending on the highway type, increasing the lane width, median width, inner shoulder width and/or outer shoulder width is effective in reducing collisions. They also investigated the effect of median type on collisions and found that flush and unpaved medians were the safest.

Wang et al. (1998) investigated the effects of variables such as traffic volume, functional highway classification, intersection type and cross-section elements (e.g., outer shoulder width, median width/type) on the frequency of collisions on rural, multi-lane, non-freeway roads. A Poisson regression model was developed and the results suggested that predicted collisions increased with increasing exposure measures and with the number of driveways and intersections, and decreased with increasing outer shoulder widths and median widths.

Park et al. (2005) attempted to quantify the safety effects of geometric design elements for highway facilities. They studied the safety effects of ramp density and horizontal curves on freeways. Using negative binomial regression models to estimate the effect of the individual variables on crashes, they found the Average Daily Traffic, on-ramp density, degree of curvature, median width, inside shoulder width, number of lanes and highway classification to be statistically significant. Using the modeling results, they developed Collision Modification Factors (CMFs) for on-ramp density and horizontal curves for safety prediction on freeways.

Ozbay et al. (2009) compared the safety of various roadway design elements on urban collectors with access. After conducting a before-and-after analysis, the authors found that improvements in vertical and horizontal alignment resulted in the highest reduction in collisions. A study conducted by FHWA (2009) examined the effects of various cross-section-related design elements on collision frequency and developed a collision prediction model for rural, multilane, non-freeway highways.

Rengarasu et al. (2009) proposed a new method to address the effect of geometric and cross-section features on the frequency of collisions. Traffic collisions occur as a result of a combination of roadway conditions, driver behavior and the vehicle, and the degree to which each factor affects the occurrence of a collision is unknown. Instead of including the variables independently in the model, the authors investigated the effect of combining the variables using decision trees.
The combination of the variables was created using the chi-square automatic interaction detection algorithm. Combining the variables showed a significant effect on the frequency of collisions. Moreover, the effect of road geometry and cross-section variables on collisions differed under combinations of the other variables.

It is noteworthy that most safety performance functions in the literature relate to 2D alignments; Easa and You (2009) presented a safety performance function for 3D alignments of two-lane rural highways. They developed five statistical models depending on the combination of horizontal and vertical alignments. For each of those combinations, the authors explored various statistical techniques (Poisson, NB, ZIP, ZINB) and validated their models. The authors found the degree of curvature, roadway width (lane and shoulder widths), access density, AADT, and grade value to be the most significant predictors of collisions on horizontal curves.

2.3 Summary
In 1972, Lovelace stated that "the times of straightforward structural design, when the structural engineer could afford to be fully ignorant of probabilistic approaches to analysis are definitely over". This is a testament to how important and valuable probabilistic methods are in providing designers and analysts with a new means of improving the safety of roadway segments. This chapter provided an overview of reliability theory and the methods by which reliability analysis can be carried out, as well as a review of the studies incorporating reliability analysis in geometric road design. Safety Performance Functions were introduced and an overview of the various regression modeling techniques was presented.

3 Developing SPFs Incorporating Risk Measures
The objective of this Chapter is twofold: to propose a method to quantify risk using state-of-the-art reliability analysis, and to represent this information in a safety performance function to assess the effect of variability in geometric design features on the safety of roadway segments.

3.1 Background
In the past, SPFs and CMFs were developed and used to evaluate the safety of a design scenario; however, in some situations it is difficult to develop these models for specific features (e.g., narrow medians). Another shortcoming of SPFs and CMFs is the lack of data available for analysts to isolate the impact of a single design element on collision frequency. In those cases, reliability analysis is used to evaluate the risk associated with a design element (e.g., horizontal curvature). Reliability analysis is not intended to replace SPFs; rather, it complements SPFs by representing risk using a parameter such as Pnc (Richl and Sayed, 2006). This Chapter introduces a methodology by which a measure of reliability is represented in a safety performance function.

3.2 Limit-State Function
Horizontal curves of a roadway facility pose a challenge to designers because, unlike straight road sections, obstructions can deprive drivers of a clear line of sight. The obstruction could be natural terrain within the curve (e.g., trees, cliffs) or infrastructure on the side of the road (e.g., barriers, buildings). If placed too close to the road, such obstructions block the driver's view of the upcoming roadway and place drivers at a disadvantage over a certain portion of the road.
Some countermeasures to alleviate the risk of limited sight distance include reducing the design speed, preventing overtaking and, in some instances, changing the alignment of the road to discourage aggressive driver behavior.

Design requirements stipulate that the length of highway ahead that is visible to a driver should be adequate to recognize an object in the driver's path and stop before hitting it. Accordingly, the stopping sight distance is the main focus of the design of horizontal curves. For the present application, the limit state function is defined in terms of

g = ASD – SSD    (3.1)

where ASD is the Available Sight Distance, SSD is the Stopping Sight Distance, and non-compliance occurs when ASD is less than SSD (g < 0).

3.2.1 Available Sight Distance
ASD is the portion of the road ahead that is currently visible to the driver. For horizontal curves, the ASD (i.e., the supply variable) is calculated as follows:

$ASD = 2R\cos^{-1}\left(\dfrac{R - (w_{lane}/2 + w_{clearance})}{R}\right)$    (3.2)

where the inverse cosine is expressed in radians,
R = horizontal curve radius (m),
$w_{lane}$ = lane width (m), and
$w_{clearance}$ = width of the lateral clearance (m).

In this research, all of these variables are considered deterministic, with one value of ASD for each of the horizontal curves in the dataset. The lateral clearance is computed following Ismail and Sayed (2010). The restrictive element for the present study was the presence of a median barrier or a side barrier (or both). The lateral clearance depends on the type of curve (right-hand or left-hand side) and the direction of travel. When the roadside barrier is the restrictive element, the lateral clearance is the shoulder width minus half the width of the concrete barrier. When the restrictive element is the median barrier, the lateral clearance is half the median width minus half the width of the concrete median barrier. Figure 3.1 shows a typical cross-section of a right-hand-side horizontal curve. For the eastbound direction, the restrictive element is the concrete roadside barrier and the lateral clearance is the outer shoulder width. For the westbound direction, it is the median barrier and the lateral clearance is the inner shoulder width.

Figure 3.1 Lateral clearance for a cross-section

3.2.2 Stopping Sight Distance
The SSD is the total distance a vehicle travels from the moment the driver sees an obstruction on the road ahead until it comes to a complete and safe stop. It consists of the brake reaction distance and the braking distance: the former is the distance traveled from the moment the driver sees the obstruction to the moment just before the brakes are applied, while the braking distance is the distance the vehicle travels until it comes to a complete stop. The SSD (i.e., the demand variable) is computed as follows:

$SSD = 0.278\,V \cdot PRT + \dfrac{V^2}{254\left(\dfrac{a}{9.81} + \dfrac{G}{100}\right)}$    (3.3)

where
V = the operating speed (km/h),
PRT = the perception reaction time (s),
a = the deceleration rate (m/s²), and
G = the longitudinal grade (%), taken as negative for downgrades.

3.3 Data Description
The present dataset comprises geometric design features, collision data and traffic volume data for the Trans-Canada Highway (referred to as Highway 1). These data were obtained from the BC Ministry of Transportation. The geometric design data were extracted from drawings prepared by the Ministry, which present data in strip maps with corresponding aerial photographs. The collision data (frequency and locations) cover January 1991 to December 1995.
These data were collected for a total of 100 segments of horizontal curves, some of which had limited sight distance due to the presence of median barriers or roadside concrete barriers. The dataset contained various variables, namely, collisions aggregated over 5 years by severity, AADT, radius and length of the horizontal curve, lane and shoulder widths, superelevation and grade. Table 3.1 provides some basic statistics for the relevant data used in the reliability analysis as well as in developing the NB models.

Table 3.1 Statistical summary of data set (100 horizontal curves)
Description | MIN | MAX | MEAN | STDEV
I+F Collisions/5yrs | 0 | 26 | 1.81 | 3.80
PDO Collisions/5yrs | 0 | 29 | 2.45 | 5.25
Total Collisions/5yrs | 0 | 55 | 4.26 | 8.65
AADT (veh/day) | 36104 | 202320 | 60525 | 34016
Radius (m) | 130 | 900 | 556 | 376
Length (m) | 100 | 920 | 288.12 | 171.95
Lane width (m) | 3.6 | 4.5 | 3.67 | 0.10
Shoulder width (m) | 0.5 | 2.5 | 1.79 | 0.51
Superelevation (m/m) | 0 | 0.09 | 0.04 | 0.03
Grade (%) | -6.3 | 7.0 | -0.08 | 2.33

3.4 Data Distributions
The ASD given in Equation (3.2) is deterministic. On the other hand, the SSD given in Equation (3.3) involves both deterministic (grade) and random (operating speed V, perception reaction time PRT, and deceleration rate a) variables. In order to conduct the reliability analysis, the distributions shown in Table 3.2 were assumed.

Table 3.2 The probability distributions for the random input parameters
Parameter | Mean | Standard Deviation | Distribution | Reference
V | see below | see below | Normal | Richl and Sayed (2006)
PRT | 1.5 s | 0.40 s | Lognormal | Lerner (1995)
a | 4.2 m/s² | 0.60 m/s² | Normal | Fambro et al. (1997)

For horizontal curves, the probability distribution of the speed is assumed to be normal, which is the norm in the literature. Table 3.3 shows the models used to compute the operating speed. Due to the lack of site-specific speed data, the mean of the speed was taken as the average of the speeds computed from the prediction models, and the standard deviation was computed from the variation among those models. For each horizontal curve, there was thus a corresponding mean and standard deviation of the operating speed computed from the eleven models. The variables used in the speed prediction models are: R, the radius of the curve (m); LC, the length of the horizontal curve (m); I, the deflection angle of the horizontal curve (degrees); and e, the superelevation rate (m/m).

Table 3.3 Prediction models for operating speed (Model equation; R²; Reference)
V85 = 94.398 – 3188.656/R; 0.79; Lamm et al. (1988)
V85 = 95.594 – 1.597DC, where DC = 1746.38/R; 0.79; Lamm et al. (1999)
V85 = exp(4.561 – 0.0058D), where D = 5729.58/R; 0.63; Morrall and Talarico (1994)
V85 = 102.45 – 0.0037LC – (8995 + 5.73LC)/R; N/A; TAC (1999)
V85 = 103.66 – 1.95DC; 0.80; Ottesen and Krammes (2000)
V85 = 102.44 – 1.57DC + 0.012LC – 0.01DC x LC; 0.81; Ottesen and Krammes (2000)
V85 = 99.61 – 2951.37/R + 0.014LC – 0.131I + 71.82e; 0.84; Voight (1996)
V85 = 129.88 – 623.10/R^0.5; 0.78; Kanellaidis et al. (1990)
V85 = 95.41 – 1.48DC – 0.012DC²; 0.99; Islam and Seneviratne (1994)
V85 = 103.03 – 2.41DC – 0.029DC²; 0.98; Islam and Seneviratne (1994)
V85 = 96.11 – 1.07DC; 0.90; Islam and Seneviratne (1994)

3.5 Quantifying the Risk of Design Non-Compliance
Since the design surface corresponding to the limit state function (3.1), along with (3.2)-(3.3), is not too "non-linear", FORM is the most suitable approximation approach, as indicated in Section 2.1.4. Thus, for risk quantification, the FORM analysis was performed using the Rt software (Rt, 2010).
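To make the risk computation concrete, the following Python sketch estimates Pnc for a single curve by crude Monte Carlo sampling of the limit state g = ASD − SSD, using the distributions in Table 3.2 and Equations (3.2)-(3.3). It is only a brute-force cross-check of the kind of output FORM/Rt produces, not the FORM algorithm itself; the curve geometry and operating-speed statistics are assumed for illustration and are not a site from the dataset, and the lognormal PRT is parameterized by moment matching, which is one reasonable choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed curve geometry (illustrative, not a site from the dataset)
R, w_lane, w_clear, grade = 400.0, 3.7, 1.8, -2.0   # m, m, m, %
V_mean, V_std = 88.0, 5.0                           # assumed operating speed stats (km/h)

# Deterministic supply: Equation (3.2), inverse cosine in radians
m_off = w_lane / 2.0 + w_clear
ASD = 2.0 * R * np.arccos((R - m_off) / R)

# Random demand inputs per Table 3.2
n = 500_000
V = rng.normal(V_mean, V_std, n)                         # km/h, Normal
prt_sigma = np.sqrt(np.log(1 + (0.40 / 1.5) ** 2))       # lognormal params matching mean 1.5 s, sd 0.40 s
prt_mu = np.log(1.5) - 0.5 * prt_sigma ** 2
PRT = rng.lognormal(prt_mu, prt_sigma, n)                # s, Lognormal
a = rng.normal(4.2, 0.60, n)                             # m/s^2, Normal

# Demand: Equation (3.3), then the non-compliance probability P(g < 0)
SSD = 0.278 * V * PRT + V ** 2 / (254.0 * (a / 9.81 + grade / 100.0))
Pnc = np.mean(ASD - SSD < 0.0)
print(f"ASD = {ASD:.1f} m, Pnc approx. {Pnc:.3f}")
```

Repeating such a calculation over all 100 curves would yield a risk profile of the kind summarized below, although FORM obtains Pnc without simulation.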
The distribution of the 100 road segments by Pnc is shown in Figure 3.2. Both the median (0.46) and the mean (0.47) are close but rather high, and the 85th percentile is also high (0.88). Thus, about 50% of the curves have Pnc values larger than 0.46, whereas 15% have values larger than 0.88.

Figure 3.2 The cumulative distribution of horizontal curves by Pnc

The radius of the horizontal curve and the operating speed are two of the main inputs in the calculation of Pnc. The nature of the relationship between Pnc on one hand and the corresponding radius of the horizontal curve and operating speed (computed from the models in Richl and Sayed, 2006) on the other hand was further investigated. It is apparent from Figure 3.3 that there is an inverse relationship between Pnc and the radius: as the radius of the horizontal curve increases, Pnc decreases. In contrast, Figure 3.4 displays a positive relationship between Pnc and the operating speed: at higher speeds Pnc increases, as drivers find it more difficult to negotiate the curve. Pnc increases sharply as the speed increases up to 85 km/h and more steadily at higher speeds.

Figure 3.3 A scatter plot of Pnc vs. radius (fitted trend, R² = 0.4152)

Figure 3.4 A scatter plot of Pnc vs. operating speed (fitted trend, R² = 0.6934)

For each curve there is a specific value of ASD and a corresponding value of Pnc. A Monte Carlo simulation study was carried out to investigate and validate the relationship between ASD and SSD without adding the reliability component. Random samples were generated from the probability density functions defined in Table 3.2 and twenty thousand runs were performed; in each run, the model samples a different value for each random variable and calculates the SSD. Figure 3.5 shows a sample of the results of the Monte Carlo simulation for one of the horizontal curves; the remaining curves showed similar trends. For the majority of horizontal curves, the ASD falls at the left end (far below the mean) of the SSD distribution, signaling a serious design problem. The results show that 80% of the horizontal curves have SSD values exceeding the ASD, which is in agreement with the Pnc distribution shown in Figure 3.2.

Figure 3.5 Monte Carlo simulation results

3.6 Safety Performance Functions Incorporating Pnc
Current practice can be inadequate, as shown in Figure 3.5, where the drivers' demand (SSD) exceeds the supply (ASD) provided by current design guidelines. This justifies the need to model the relationship between a reliability component (capturing the uncertainty in the random design inputs) and collision frequency. Such models (denoted "probabilistic" models here) are compared with traditional "deterministic" models in terms of performance and goodness of fit. To this end, two groups of NB models, with and without the reliability component (Pnc), were developed using the same dataset. For both groups, several NB models were evaluated. Forward and backward stepwise selection was used to identify the variables that significantly affected the frequency of collisions. All of the variables extracted from the dataset for this case study, along with Pnc, were considered in the NB SPFs (2.34) for both severity levels and total collisions. The maximum likelihood method was used to estimate the parameters of the NB models.
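Purely as an illustration of this maximum likelihood step (the thesis itself used PROC GENMOD in SAS, as noted below), the following Python sketch fits an NB SPF of the form of Equation (2.34) with ln(AADT) and Pnc as covariates and segment length as an exposure term, using the statsmodels package. The arrays are synthetic stand-ins, not the actual 100-curve dataset, and the "true" coefficients used to generate them are assumptions chosen only to be of a plausible magnitude.

```python
import numpy as np
import statsmodels.api as sm

# Toy stand-in data (the real dataset of 100 curves is not reproduced here)
rng = np.random.default_rng(1)
n = 100
length = rng.uniform(100, 920, n)           # segment length (m)
aadt = rng.uniform(36_000, 200_000, n)      # AADT (veh/day)
pnc = rng.uniform(0, 1, n)                  # probability of non-compliance
true_mu = np.exp(-15.0 + 0.9 * np.log(aadt) + 1.5 * pnc) * length
y = rng.negative_binomial(n=3.0, p=3.0 / (3.0 + true_mu))   # synthetic collision counts

# Design matrix: intercept, ln(AADT), Pnc; length enters as an exposure (offset with coefficient 1)
X = sm.add_constant(np.column_stack([np.log(aadt), pnc]))
model = sm.NegativeBinomial(y, X, exposure=length)          # NB2 maximum likelihood fit
result = model.fit(disp=0)
print(result.params)   # regression coefficients followed by the estimated NB dispersion term
```

The signs and rough magnitudes of the recovered coefficients can then be compared against Table 3.4.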
Different goodness-of-fit statistics, as described earlier, were used to assess the models' adequacy. PROC GENMOD of SAS was utilized for the estimation of the model parameters and goodness-of-fit measures (SAS Institute, 2002-2003). For the NB models including the reliability component, the t-statistics of all variables were insignificant at the 5% level of significance except for Pnc. The reason is that almost all of the other variables were already used in estimating Pnc, so adding them to the models would bring no additional value. The selected model was then screened for outliers using the methodology described in Sawalha and Sayed (2006). Accordingly, one road segment was removed from the dataset for the Total (tot) models, three for I+F (injury and fatality) and two for PDO (Property Damage Only).

The NB collision prediction models incorporating the probability of non-compliance Pnc are as follows:

$E(Y)_{tot} = e^{-14.931} \, L \, V^{0.895} \, e^{1.461\,P_{nc}}$    (3.4)

$E(Y)_{I+F} = e^{-15.624} \, L \, V^{0.900} \, e^{1.012\,P_{nc}}$    (3.5)

$E(Y)_{PDO} = e^{-16.516} \, L \, V^{0.974} \, e^{1.793\,P_{nc}}$    (3.6)

The estimates, standard errors and t-ratios of the coefficients of the retained variables are shown in Table 3.4. The regression coefficients of the logarithm of traffic volume are all slightly less than 1, significant and positive (as expected), indicating a positive relationship between predicted collisions and AADT. Similarly, there is a significant positive relationship between predicted collisions and Pnc: a change of 0.1 in Pnc corresponds to a relative change in predicted collisions of 0.15 for tot, 0.10 for I+F and 0.18 for PDO. Also, the significance of the dispersion parameter (1/κ) implies the existence of over-dispersion in the dataset, thereby justifying the use of the NB models.

Table 3.4 Estimates, standard errors (SE) and t-ratios for NB models incorporating Pnc
Parameter | Total (Est., SE, t) | I+F (Est., SE, t) | PDO (Est., SE, t)
ln(a0) | -14.931, 4.56, -3.28 | -15.624, 4.36, -3.58 | -16.516, 5.51, -3.00
ln(AADT) | 0.895, 0.41, 2.19 | 0.900, 0.39, 2.29 | 0.974, 0.49, 1.98
Pnc | 1.461, 0.62, 2.37 | 1.012, 0.57, 1.77 | 1.793, 0.76, 2.38
1/κ | 2.920, 0.60, 4.90 | 2.360, 0.64, 3.62 | 3.911, 0.94, 4.17

Figure 3.6 shows the relationship between predicted collisions and Pnc for a road segment with average exposure (length = 288.12 m and AADT = 60525 veh/day). The three curves show an increase in predicted collisions with Pnc, with a sharper increase for PDO collisions than for I+F collisions. This may be explained by the fact that collisions related to restricted sight distance are likely to be less severe. In fact, both I+F and total predicted collisions almost double as Pnc increases from 0.1 to 0.5.

Figure 3.6 The relationship between predicted collisions and Pnc

The values of the Scaled Deviance and Pearson χ² goodness-of-fit measures for the NB models incorporating Pnc are given in Table 3.5. The associated p-values are rather high, indicating that these models provide good fits for both severity levels as well as for total predicted collisions.

Table 3.5 Goodness-of-fit tests for NB models incorporating Pnc
Severity | DF | SD | p-value | χ² | p-value
Total | 95 | 91.660 | 0.578 | 83.269 | 0.800
I+F | 93 | 81.117 | 0.806 | 88.001 | 0.627
PDO | 94 | 74.612 | 0.930 | 81.867 | 0.810

Alternative NB collision prediction models incorporating other geometric variables (without Pnc) were also considered, where the SPFs are given by (2.34).
The models are as follows:

$E(Y)_{tot} = e^{-16.108} \, L \, V^{1.458} \, R^{-0.602} \, e^{-23.970\,e}$    (3.7)

$E(Y)_{I+F} = e^{-18.863} \, L \, V^{1.428} \, R^{-0.312} \, e^{-12.327\,e}$    (3.8)

$E(Y)_{PDO} = e^{-17.251} \, L \, V^{1.535} \, R^{-0.631} \, e^{-31.451\,e}$    (3.9)

The estimates, standard errors and t-ratios for the NB models without Pnc are shown in Table 3.6. Again, the regression coefficients of the logarithm of traffic volume are significant and positive, indicating a positive relationship between predicted collisions and AADT. There is a significant negative relationship between predicted collisions and radius, except for I+F. Similarly, there is a significant negative relationship between predicted collisions and superelevation. Thus, as the radius and/or superelevation increase, the curve becomes less demanding, making it easier (safer) for the driver to negotiate. Also, the significance of the dispersion parameter (1/κ) confirms the existence of over-dispersion in the dataset.

Table 3.6 Estimates, standard errors (SE) and t-ratios for NB models without Pnc
Parameter | Total (Est., SE, t) | I+F (Est., SE, t) | PDO (Est., SE, t)
ln(a0) | -16.108, 4.30, -3.74 | -18.863, 4.01, -4.71 | -17.251, 4.47, -3.85
ln(AADT) | 1.458, 0.41, 3.55 | 1.428, 0.36, 3.95 | 1.535, 0.43, 3.56
ln(radius) | -0.602, 0.24, -2.47 | -0.312, 0.24, -1.32 | -0.631, 0.28, -2.22
Superelevation | -23.970, 6.03, -3.98 | -12.327, 5.77, -2.13 | -31.451, 7.09, -4.44
1/κ | 1.945, 0.45, 4.31 | 1.363, 0.47, 2.88 | 2.101, 0.62, 3.36

To compare the two sets of models (with and without Pnc), the likelihood ratio test (LRT) was carried out, with the results presented in Table 3.7. The LRT is based on the difference in the scaled deviance SD, which is asymptotically distributed as a chi-square with one degree of freedom (DF). From the p-values in Table 3.7, it is apparent that the models incorporating Pnc outperform the models without Pnc at the 10% level of significance for total predicted collisions and at the 5% level of significance for both I+F and PDO predicted collisions.

Table 3.7 Testing NB models incorporating Pnc vs. models without Pnc
Severity | DF | LRT | p-value
Total | 1 | 2.936 | 0.087
I+F | 1 | 3.858 | 0.049
PDO | 1 | 4.134 | 0.042

3.7 Summary
This chapter investigated the effect of incorporating Pnc, the outcome of the reliability analysis, into SPFs. The results showed that the inclusion of a risk parameter (Pnc) representing the randomness in the input parameters generated a better "probabilistic" NB model that outperformed the traditional "deterministic" NB model. Although this analysis was applied to horizontal curves with restricted sight distance, it can be applied to any other geometric element that warrants further investigation. Conventional SPFs disregard the spatial correlation between nearby segments as a possible contributor to the occurrence of collisions. Spatial analysis is introduced in the following chapter to deal with the evident spatial nature of road collisions.

4 Spatial Analysis and Pnc
The parameter estimates in the NB models incorporating Pnc are likely biased due to omitted variables that are correlated with Pnc. Accordingly, the effect of Pnc might be overstated (or understated, depending on the type of correlation with the omitted variables), because it also captures the effect of such omitted variables. It has been argued that spatial dependence can be a surrogate for unknown and relevant covariates, thereby improving model estimation (Congdon, 2006). Hence, spatial analysis is proposed in this Chapter in an attempt to overcome the omitted variables bias.
To investigate the effects of incorporating spatial effects in SPFs, an Extended Multiple Membership (EMM) spatial model is developed. Such a spatial model pools information from neighboring sites to improve model estimation. The models under study are developed in a Full Bayes (FB) context via Markov Chain Monte Carlo (MCMC) simulation techniques using a dataset composed of 257 horizontal segments along the Trans-Canada Highway in British Columbia. The new dataset was obtained by including additional horizontal curves that precede and/or follow each horizontal curve in the previous dataset in order to establish the spatial relationships.

4.1 Background
There are three sources of variation that affect the distribution of collisions and should therefore be considered in the development of SPFs: (i) Poisson variation; (ii) heterogeneity (extra-variation);
The proposed unified modeling framework enables thorough investigations into associations between injury rates and regional characteristics, residual variation and spatial autocorrelation. A FB hierarchical approach with CAR effects for the spatial correlation terms was adopted by Aguero-Valverde and Jovanis (2008) for the analysis of road collision frequency at the segment level. It was found that the models with spatial correlation showed significantly better fit to the data than the Poisson Lognormal (PLN) model. Moreover, spatial correlation seems to have a potential for reducing the bias associated with model misspecification. The combined impacts of temporal and spatial correlations on collisions were considered by Aguero-Valverde and Jovanis, 2006; Wang and Abdel-Aty, 2006). The former authors compared FB hierarchical models (including the combined effects) to NB models. The results showed that spatial correlation, time trend, and space–time interactions were significant in the FB injury collision models. The generalized estimating equations technique with the negative binomial link function was used by Wang and Abdel-Aty (2006) for temporal and spatial analyses of rear-end collisions at signalized intersections. High spatial correlations were found between the 69  intersections for rear-end collisions and certain intersection-related variables were significantly influencing rear-end collision occurrences. Further, the introduction of spatial correlation has resulted in noticeable changes in the estimates of several regression coefficients, indicating possible model misspecification under NB. Multiple Membership (MM) models (Goldstien, 1995; Goldstien et al., 1998; Langford et al., 1999) provide an alternative approach to account for spatial correlations. El-Basyouny and Sayed (2009c) considered a variation of the MM model (Extended MM or EMM) to study the effect of clustering road segments within the same corridor on spatial correlation analysis. Full Bayes estimation was used by means of the Markov Chain Monte Carlo methodology to estimate the parameters using 281 urban road segments in Vancouver, British Columbia. The fitted CAR and MM models demonstrated significant estimates for both heterogeneity and spatial correlation parameters. The best fit model was EMM followed by CAR. Furthermore, a significant portion of the total variability was explained by the spatial correlation. A significant correlation was also found between the heterogeneity and spatial effects. The results also showed that corridor variation was a major component of total variability and that the spatial effects have been considerably alleviated by clustering segments within the same corridor. 4.2 Spatial Poisson Models Let Yi  denote the number of collisions at site i (i =1,…,n). It is assumed that collisions at the n sites are independent and that 70                     (4.1) Over-dispersion due to unobserved or unmeasured heterogeneity is addressed by assuming that                     (4.2) where    is determined by a set of covariates representing site-specific attributes and a corresponding set of unknown regression parameters, whereas the term    represents a random effect (Poisson extra-variation). 
The PLN regression model is obtained by the assumption

$\exp(\varepsilon_i) \sim Lognormal(0, \sigma_\varepsilon^2)$, i.e., $\varepsilon_i \sim Normal(0, \sigma_\varepsilon^2)$    (4.3)

Spatial Poisson models can be defined by incorporating a spatial effect in Equation (4.2) as follows:

$\ln \lambda_i = \mathbf{x}_i\boldsymbol{\beta} + \varepsilon_i + s_i$    (4.4)

The spatial component $s_i$ reflects the notion that sites closer to each other are likely to share common features affecting their collision occurrence. As noted by Miaou and Lord (2003), random variations across sites may be structured spatially due to the complexity of the traffic interactions around locations. Guided by the results in the literature (Nicholson, 1999; Aguero-Valverde and Jovanis, 2008), only first-order spatial autocorrelation models were considered.

4.2.1 Conditional Auto-Regressive (CAR) Models
Let $n_i$, $N(i)$ and $\mathbf{s}_{-i}$ represent the number of neighbors of site i, the set of neighbors of site i, and the set of all spatial effects except $s_i$, respectively. The CAR model is given by

$s_i \,|\, \mathbf{s}_{-i} \sim Normal\left(\dfrac{\sum_{j \in N(i)} w_{ij} s_j}{n_i},\; \dfrac{\sigma_s^2}{n_i}\right)$    (4.5)

where $\sigma_s^2$ represents the spatial variance. Equation (4.5) is based on an adjacency-based proximity measure: $w_{ij}$ = 1 if sites i and j are neighboring sites and $w_{ij}$ = 0 otherwise. The conditional mean is the mean of the adjacent spatial effects, while the conditional variance is inversely proportional to the number of neighbors.

4.2.2 Multiple Membership (MM) Models
An effective way to account for spatial correlation is to use MM models (Goldstein, 1995; Goldstein et al., 1998; Langford et al., 1999), where each site is considered a member of a higher-level unit that contains its nearest neighbors. For a first-order spatial autocorrelation model, let $\varepsilon_i$ and $u_i$ represent the random effect of site i and its effect on its neighbors, respectively. The spatial effect of site i is given by

$s_i = \dfrac{\sum_{j \in N(i)} u_j}{n_i}$    (4.6)

To model a correlation between $\varepsilon_i$ and $u_i$ it is assumed that

$(\varepsilon_i, u_i)' \sim Normal(\mathbf{0}, \Sigma), \quad \Sigma = \begin{pmatrix}\sigma_\varepsilon^2 & \rho\,\sigma_\varepsilon\sigma_u \\ \rho\,\sigma_\varepsilon\sigma_u & \sigma_u^2\end{pmatrix}$    (4.7)

The MM models allow a direct interpretation of the model parameters, as $\sigma_\varepsilon^2$ and $\sigma_u^2$ represent marginal variances. Furthermore, the parameter $\rho$ measures the strength of the association between the unstructured and structured (spatial) effects.

4.2.3 Extended Multiple Membership (EMM) Models
Typically, the n sites under consideration belong to K mutually exclusive clusters. In such cases, an additional variance component can be included in the MM model to allow for the possibility that different clusters have different collision risks, because traffic, geometric and environmental conditions vary across clusters (El-Basyouny and Sayed, 2009c). Suppose that the ith site belongs to cluster $c(i) \in \{1, 2, …, K\}$. The extended MM model is given by

$\ln \lambda_i = \mathbf{x}_i\boldsymbol{\beta} + \varepsilon_i + s_i + \delta_{c(i)}$    (4.8)

where $\delta_{c(i)} \sim Normal(0, \sigma_\delta^2)$ and $\sigma_\delta^2$ denotes the additional variance component representing the variation among the different clusters.

4.3 Data Description
The previous dataset was augmented by adding data for the preceding and subsequent segments. These could be other horizontal segments, with or without sight distance restriction, and/or tangent segments. Thus, the present data were collected for 257 segments that are clustered into 18 stretches along which the traffic volume is constant. The 18 stretches vary in size (i.e., number of segments per stretch) from a minimum of 3 to a maximum of 54, with a mean of 14.28 segments per stretch. The geometric features were used to compute the probability of non-compliance (Pnc) via FORM.
Since the premise of the limit-state function is the presence of a barrier that restricts the sight distance, tangent segments and horizontal segments without sight distance restriction were assigned Pnc values of 0. The dataset contained the following variables: length, AADT, probability of non-compliance and the number of collisions by severity (Total, I+F, PDO). Table 4.1 provides some basic statistics of the relevant data used to develop the PLN and EMM models.

Table 4.1 Statistical summary of the entire dataset for spatial analysis of segments
Variable | MIN | MAX | MEAN | STDEV
Length in meters (L) | 100 | 920 | 244.4 | 155.8
AADT (V) | 36104 | 87168 | 51112 | 10239
Pnc | 0 | 0.98 | 0.16 | 0.29
Total collisions | 0 | 50 | 2.40 | 5.56
I+F collisions | 0 | 17 | 1.00 | 2.27
PDO collisions | 0 | 33 | 1.40 | 3.64

The additional data on the number and location of adjacent segments were extracted from drawings prepared by the Ministry of Transportation, which present data in strip maps with corresponding aerial photographs.

4.4 Full Bayes Methodology
4.4.1 Prior Distributions
Prior distributions are a main requirement for obtaining full Bayes estimates, as they reflect prior knowledge about the parameters of interest. The elicitation of priors in generalized linear models and/or collision data analysis is discussed in the literature (Bedrick et al., 1996; Schluter et al., 1997; Ibrahim and Chen, 2000; Washington and Oh, 2006; Miranda-Moreno et al., 2007). Nevertheless, the most common prior distributions are diffused normal distributions (with zero mean and a large variance) for the regression parameters and Gamma(0.001, 0.001) or Gamma(1, 0.001) for the inverse dispersion parameters (Miaou et al., 2003; Congdon, 2006; Aguero-Valverde and Jovanis, 2008; El-Basyouny and Sayed, 2009c). A Wishart(P, m) prior is usually assumed for the precision matrix $\Sigma^{-1}$, where P and m represent the prior guess at the order of magnitude of $\Sigma^{-1}$ and the degrees of freedom, respectively. To represent vague prior knowledge, m is usually chosen to be as small as possible, i.e., m = rank($\Sigma$) = 2 (Spiegelhalter et al., 2005).

It is important to note that the selection of priors is not entirely an analytical decision. Analysts should seek informative (or at least semi-informative) priors reflecting their previous knowledge about the subject matter. If such information is absent, the above vague (diffused) priors are usually used to reflect the lack of information. In the present analysis, as no prior information is available, diffused normal distributions with zero mean and variance 100² were used for the regression parameters, a Gamma(0.001, 0.001) prior was used for the precision (inverse variance) parameters, and a Wishart distribution with a 2 × 2 identity matrix (Congdon, 2006) and two degrees of freedom was used for $\Sigma^{-1}$.

4.4.2 Posterior Distributions
The software WinBUGS 2.2.0 (a Windows interface to OpenBUGS) was used to sample the posterior distributions using MCMC techniques. MCMC methods repeatedly sample from the joint posterior distribution; the repeated iterations generate sequences of random points whose distribution converges to the target posterior distribution. A sub-sample is used to monitor this convergence and to ensure that the posterior distribution has been "found". The BGR (Brooks-Gelman-Rubin) statistics (Brooks and Gelman, 1998), the ratios of the Monte Carlo errors to the standard deviations of the estimates, and the trace plots of all model parameters are monitored for convergence.
The (burn-in) sub-sample is then excluded from further analysis and the remaining iterations are used to estimate the parameters, evaluate the model's performance and make inferences. The Deviance Information Criterion (DIC) is used for model comparison (Spiegelhalter et al., 2005).

4.5 Results and Discussion
As mentioned in Section 4.3, the horizontal segments are clustered into 18 stretches for which the traffic volume is constant. Therefore, the product of length and AADT was used to represent exposure (L x V). Inclusion of this term ensures that traffic exposure is accounted for when estimating the safety benefits of specific policy alternatives. Two models, without and with spatial effects, were developed, namely PLN and EMM. Their posterior summaries appear in Tables 4.2 and 4.3, respectively. These estimates were obtained via two chains of 20,000 iterations each, 10,000 of which were excluded as a burn-in sample, using WinBUGS. The exposure and Pnc covariates were centered (by subtracting the respective means) to speed up convergence and lessen autocorrelations for the regression parameters. The BGR statistics were below 1.2, confirming that convergence had occurred; the ratios of the Monte Carlo errors to the standard deviations of the estimates were below 0.05 (the ad hoc benchmark); and the trace plots of all model parameters indicated convergence.

Table 4.2 summarizes the parameter estimates and their associated standard errors for the total, I+F and PDO collisions under the PLN model. As expected, the regression parameters for exposure were positive and significant, suggesting that an increase in traffic volume or segment length would lead to an increase in the number of collisions. Further, the regression coefficients associated with the exposure terms have values larger than one, suggesting that the moderating effects of exposure are non-linear with increasing rates. The regression coefficients of Pnc were also positive for all severity levels, suggesting that an increase in Pnc would lead to an increase in the number of collisions. Further, the effects of Pnc were significant for all severity levels, confirming the conclusions of Chapter 3.

Table 4.2 Estimates and standard errors (SE) for PLN
Parameter | Total (Est., SE) | I+F (Est., SE) | PDO (Est., SE)
Intercept | -20.96**, 2.43 | -20.69**, 2.93 | -23.77**, 3.30
ln(L x V) | 1.24**, 0.15 | 1.18**, 0.17 | 1.36**, 0.20
Pnc | 1.25**, 0.49 | 0.90*, 0.53 | 1.61**, 0.52
σε² | 3.10**, 0.59 | 2.84**, 0.65 | 3.51**, 0.72
DIC | 646.3 | 472.6 | 496.4
ns: not significant at the 0.10 level; *: significant at the 0.10 level; **: significant at the 0.05 level.

It should also be noted that the estimates of σε² in Table 4.2 were all high and significant, indicating the existence of considerable Poisson extra-variation in I+F, PDO and total collisions and justifying the use of the PLN hierarchy. Since each segment is a multiple member of multi-level clusters (a cluster of neighbors within a stretch of segments), the EMM model was used for the spatial analysis, in preference to CAR, in order to account for the variation among neighbors as well as that among stretches. Table 4.3 summarizes the parameter estimates and their associated standard errors for the total, I+F and PDO collisions under the EMM model. The estimates of the regression coefficients associated with the exposure terms changed only slightly.
Hence, the exposure results under EMM confirm those obtained under PLN, in that an increase in traffic volume or segment length would lead to an increase in the number of collisions and that the moderating effects of exposure are non-linear with increasing rates.

Table 4.3 Estimates and standard errors (SE) for EMM

                             Total              I+F                PDO
EMM                          Est.       SE      Est.       SE      Est.       SE
Intercept                    -19.58**   2.41    -18.37**   2.92    -20.57**   2.71
ln(L x V)                    1.22**     0.15    1.15**     0.18    1.28**     0.17
Pnc                          0.85**     0.34    0.80**     0.40    1.04**     0.42
σu² (extra-variation)        1.08**     0.26    1.24**     0.38    1.08**     0.34
σs² (spatial)                0.31**     0.17    0.59**     0.35    0.48**     0.32
ρ (correlation)              0.29 ns    0.25    0.62**     0.19    -0.13 ns   0.31
Stretch (cluster) variance   18.69**    13.71   31.21**    24.87   21.54**    14.52
DIC                          603.5              437.9              477.8
ns Not significant at the 0.10 level, * Significant at the 0.10 level, ** Significant at the 0.05 level.

Although the regression coefficients of Pnc are still positive and significant for all severity levels, they were reduced under EMM. Thus, the overstated effects of Pnc under PLN (due to omitted variables bias) were sized down under EMM. The percentages of reduction were 11%, 35% and 32% for I+F, PDO and total collisions, respectively.

Similarly, although the estimates of σu² were still significant, indicating the existence of Poisson extra-variation for all severity levels, they were considerably reduced due to the EMM spatial specifications in the SPFs. The percentages of reduction under EMM over PLN were 56%, 69% and 65% for I+F, PDO and total collisions, respectively.

On the other hand, the significance of the spatial and stretch (cluster) variance components justifies the use of the EMM spatial analysis, as the results show that the contribution of the stretch (cluster) variation to the total variation in traffic collisions (about 92%) outweighed that of the spatial variation (about 2%) for all severity levels.

It is interesting to note that the estimate of the correlation coefficient (ρ) between the Poisson extra-variation and the spatial variation was significant only for I+F collisions, but not for the PDO and total collisions. That significant correlation coefficient indicates that the neighbors of road segments with high I+F extra-variation tend to have high I+F spatial variation as well, and vice-versa.

According to the DIC guidelines (Spiegelhalter et al., 2005), a comparison of the DIC values in Tables 4.2 and 4.3 reveals that the EMM models have outperformed the corresponding PLN models, as there were reductions in DIC of 34.7, 18.6 and 42.8 for I+F, PDO and total collisions, respectively, which far exceed the usual benchmark of 10 (Spiegelhalter et al., 2005).

4.6 Summary
In this Chapter, the effects of incorporating spatial effects in SPFs were investigated. The PLN and EMM models were estimated in a Full Bayes context via MCMC simulation techniques using a dataset composed of 257 horizontal segments along the Trans-Canada Highway in British Columbia. The results of this Chapter provided strong evidence of the significance of integrating spatial effects in SPFs. In the present study, the spatial analysis overcame the model misspecification resulting from incorporating only exposure and Pnc in the SPFs, as relevant covariates might have been omitted.

The past chapters have investigated the effects of incorporating random variations in geometric design elements in SPFs and established the relationship between the ensuing risk-based measure and collisions. The next step is to provide an application of how this information could be used to improve safety through a proactive approach.
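To make the practical meaning of these coefficients concrete, the short Python sketch below converts a fitted Pnc coefficient into the implied multiplicative change in predicted collision frequency under the log-linear SPF form. It is an illustration only: the 0.85 value is the EMM estimate for total collisions in Table 4.3, while the two Pnc levels being compared are hypothetical.

```python
import math

def collision_ratio(beta_pnc, pnc_before, pnc_after):
    """Ratio of predicted collision frequencies when only Pnc changes in a
    log-linear SPF: mu_after / mu_before = exp(beta_pnc * (pnc_after - pnc_before))."""
    return math.exp(beta_pnc * (pnc_after - pnc_before))

# EMM coefficient for total collisions (Table 4.3) and a hypothetical segment
# whose Pnc is reduced from 0.70 to 0.20, e.g. by improving the sight distance.
print(round(collision_ratio(0.85, 0.70, 0.20), 2))   # about 0.65, i.e. roughly 35% fewer predicted collisions
```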
5 Cross-Section Risk Optimization
There are only a limited number of earlier studies that focused on the optimization of highway design (Jha and Schonfeld, 2000; Park and Saccomanno, 2005; Ismail and Sayed, 2010). This Chapter presents a methodology, based on that of Ismail and Sayed (2010), for selecting a suitable combination of cross-section elements that minimizes the collision rate of an entire section with restricted sight distance. This methodology will be applied to produce re-dimensioned cross-sections with reduced collisions and consistent risk levels.

5.1 Background
In a typical cross-section there are six cross-section elements of interest to the designer; the optimization is carried out on these elements under the assumption that the median width has already been selected to be of the minimum allowable width and that all possibilities for larger horizontal curve radii have been investigated. The elements being studied are: the outer shoulder width, the lane width of both traffic lanes, and the inner shoulder width for the inside carriageway; and the inner shoulder width, the lane width of both traffic lanes, and the outer shoulder width for the outside carriageway.

Figure 5.1 shows a typical Right-Hand-Side (RHS) horizontal curve and the elements that constitute the width of the roadway. The restrictive elements for the inside and outside carriageways are the roadside and median barriers, respectively. The lateral clearance for the inside carriageway is the outer shoulder width, while the lateral clearance for the outside carriageway is the inner shoulder width.

Figure 5.1 Typical cross-section in a RHS horizontal curve
Source: Ismail and Sayed (2010)

The problem of re-dimensioning cross-section elements to optimize risk was considered by Ismail and Sayed (2010), who developed the following multiple objective function:

C(I | Io, Pnci, Pnco, Vi, Vo) = α1 [max(Pnci, Pnco) / min(Pnci, Pnco)]
                              + α2 [CMF(I) / CMF(Io)]
                              + α3 [(Vi Pnci + Vo Pnco) / (Vi + Vo)],   (5.1)

where
C(.) = objective or cost function that is inversely proportional to the design desirability,
I = input vector composed of six elements that represents a dimensioning scenario,
I1,2,3 = the first three elements of I: the outer shoulder width, lane width of both traffic lanes and inner shoulder width for the inside carriageway,
I4,5,6 = the last three elements of I: the inner shoulder width, lane width of both traffic lanes and outer shoulder width for the outside carriageway,
CMF(.) = the weighted average of the compound collision modification factors calculated for both carriageways (Harwood et al., 2003).
The compound collision modification factors are calculated as follows:

CMF(I1,2,3) = exp[-0.021(3.28 I1 - 10) - 0.047(3.28 I2 - 12) - 0.021(3.28 I3 - 4)],   (5.2)

CMF(I4,5,6) = exp[-0.021(3.28 I6 - 10) - 0.047(3.28 I5 - 12) - 0.021(3.28 I4 - 4)],   (5.3)

CMF(I) = [Vi CMF(I1,2,3) + Vo CMF(I4,5,6)] / (Vi + Vo),   (5.4)

where
Io = input vector composed of six elements that represents the cross-section before optimization,
Pnci = probability of non-compliance for the inside carriageway,
Pnco = probability of non-compliance for the outside carriageway,
Vi = expected traffic volume on the inside carriageway,
Vo = expected traffic volume on the outside carriageway,
α1 = weight factor assigned to the first cost function component (risk balance between both carriageways),
α2 = weight factor assigned to the second cost function component (increase/decrease of the collision risk),
α3 = weight factor assigned to the third cost function component (weighted average risk for both carriageways).

In order to respect the permitted right-of-way, optimizing the dimensions of the cross-section elements was subject to three constraints. The first constraint is related to the total width of the roadway segment, which must remain the same before and after optimization so as to make use of the total width allocated to the highway. The second and third constraints are related to the upper and lower bounds of the various cross-section elements, to avoid unrealistic cross-section dimensions:

Σ(j=1,...,6) Ij = Σ(j=1,...,6) Io,j,   (5.5)

Ij ≤ Ij,max,  j = 1, ..., 6,   (5.6)

Ij ≥ Ij,min,  j = 1, ..., 6.   (5.7)

Ismail and Sayed (2010) considered nine case studies which are part of two highway developments in British Columbia, Canada. The cross-sections belong to horizontal curves with restricted sight distance due to the presence of roadside concrete barriers, median barriers, roadside structures and bridge parapets. No additional right-of-way is available to compensate for these restrictive elements, and as such the optimization is required to re-dimension the elements. A summary of the nine cross-sections' elements is given in Table 5.1.

Ismail and Sayed (2010) used a Sequential Quadratic Programming (SQP) algorithm for the optimization under different choices of the weight factors (α1, α2, α3), assuming equal traffic volumes for both carriageways. They were successful in reducing the average risk, with balanced risk for both carriageways, without a consequent increase in expected collisions.

5.2 Cross-Section Risk Optimization using New CMFs for Restricted Sight Distance on Horizontal Curves
The optimization of cross-section design to minimize the risk associated with restricted sight distance is considered along the lines of Ismail and Sayed (2010), using the same nine cross-sections whose elements are described in Figure 5.1. An important difference between the methodology used in this thesis and that of Ismail and Sayed is the introduction of new CMFs incorporating the reliability component (Pnc).

The regression coefficients of Pnc used in the new CMFs are 1.461, 1.012 and 1.793 for total, I+F and PDO collisions, respectively (see Table 3.4). The optimization process is illustrated in the sequel using the coefficient 1.461 corresponding to total collisions. Similar optimization schemes could be developed for I+F and PDO collisions.
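As a concrete illustration of Equations (5.2) to (5.4), the Python sketch below evaluates the two compound CMFs and their volume-weighted average for the base cross-section of case 1.1 in Table 5.1. It is illustrative only: the thesis performs these calculations inside its MATLAB optimization routine, and the equal traffic volumes mirror the assumption noted above.

```python
import math

def cmf_inside(i1, i2, i3):
    """Compound CMF for the inside carriageway, Eq. (5.2); widths in metres."""
    return math.exp(-0.021 * (3.28 * i1 - 10)
                    - 0.047 * (3.28 * i2 - 12)
                    - 0.021 * (3.28 * i3 - 4))

def cmf_outside(i4, i5, i6):
    """Compound CMF for the outside carriageway, Eq. (5.3); widths in metres."""
    return math.exp(-0.021 * (3.28 * i6 - 10)
                    - 0.047 * (3.28 * i5 - 12)
                    - 0.021 * (3.28 * i4 - 4))

def cmf_weighted(I, v_in=1.0, v_out=1.0):
    """Volume-weighted average of the compound CMFs, Eq. (5.4)."""
    return (v_in * cmf_inside(*I[:3]) + v_out * cmf_outside(*I[3:])) / (v_in + v_out)

# Base cross-section of case 1.1 in Table 5.1 (outer shoulder, lane, inner shoulder
# for the inside carriageway; then inner shoulder, lane, outer shoulder), equal volumes assumed.
I0 = [2.5, 3.7, 1.15, 1.15, 3.7, 2.5]
print(round(cmf_weighted(I0), 3))
```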
5.2.1 Objective Function
Recall Equation (5.1) and define new CMFs as follows:

CMFnew(I1,2,3) = CMF(I1,2,3) exp(1.461 Pnci),   (5.8)

CMFnew(I4,5,6) = CMF(I4,5,6) exp(1.461 Pnco),   (5.9)

CMFnew(I) = [Vi CMFnew(I1,2,3) + Vo CMFnew(I4,5,6)] / (Vi + Vo).   (5.10)

The current objective function is then given by Equations (5.1), (5.8), (5.9) and (5.10). The constraints remain the same as specified in Equations (5.5), (5.6) and (5.7).

Table 5.1 Summary of cross-section dimensions for the different case studies (m)

                Inside carriageway            Median   Outside carriageway           Longitudinal   Right-hand curve = 1,
Case   Radius   Outer     Lane     Inner      width    Inner     Lane     Outer      slope          left-hand curve = 0
                shoulder  width    shoulder            shoulder  width    shoulder
1.1    440      2.5       3.7      1.15       0.6      1.15      3.7      2.5        -0.7%          1
1.2    440      2.5       3.7      1.15       0.6      1.15      3.7      2.5         1.5%          0
1.3    440      2.5       3.7      1.15       0.6      1.15      3.7      2.5        -1.3%          1
1.4    440      2.5       3.7      1.15       0.6      1.15      3.7      2.5        -0.4%          0
1.5    440      2.5       3.7      1.15       0.6      1.15      3.7      2.5         0.1%          1
2.1    450      2.5       3.6      1.7        0.6      1.7       3.6      2.5        -3.9%          1
2.2    320      2.5       3.6      1.7        0.6      1.7       3.6      2.5         0.6%          1
2.3    350      2.5       3.6      1.7        0.6      1.7       3.6      2.5        -3.3%          1
2.4    350      2.5       3.6      1.7        0.6      1.7       3.6      2.5        -2.3%          1

5.2.2 Optimization Algorithm
In order to find the solution to the non-linear objective function (5.1), the optimization algorithm had to be an iterative method which allows for the non-linear constraints (5.5)-(5.7). The Sequential Quadratic Programming (SQP) method for nonlinearly constrained optimization generates steps by solving quadratic sub-problems. The SQP method solves the non-linear problem directly rather than converting it to a sequence of unconstrained minimization problems (Gockenbach, 2003). The underlying principle is that at each iteration a local model of the optimization problem is constructed and solved, yielding one step toward the solution of the original problem. This sequential approach can be used in both line-search and trust-region frameworks, and it is appropriate for small and large problems. Although each iteration of the SQP algorithm requires the solution of a quadratic program, it converges very rapidly and finds approximate solutions with good precision (Bonnans et al., 2009).

The SQP algorithm was used to conduct the optimization process by means of the "fmincon" function in Matlab. The algorithm requires the user to supply an initial starting vector representing the six cross-section elements. As with any optimization problem, there is a risk of converging to a local minimum as opposed to a global minimum. The function "GlobalSearch" was therefore used to help the algorithm reach a global rather than a local minimum; it is applicable in cases where the solution space could contain multiple maxima or minima. To further ensure that the solution provided by Matlab was indeed the global minimum, various random starting points were chosen to verify that the algorithm was not searching within the same area. The reliability component of the objective function was computed using the FERUM software available for the Matlab platform. The optimization algorithm was devised in Matlab and, since the reliability algorithm was part of the iterative process, FERUM was used in the same platform.
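The thesis implements this loop with fmincon, GlobalSearch and FERUM in MATLAB. Purely to illustrate the structure of the problem, the Python sketch below sets up an analogous constrained minimization with SciPy's SLSQP solver (itself a sequential quadratic programming method). The bounds, the pnc_of_clearance stand-in for the FORM/FERUM reliability calculation, and the equal traffic volumes are assumptions made for the sketch; only the CMF expressions and the form of the objective follow Equations (5.1)-(5.10), here with the scenario 3 weights (1, 1, 1).

```python
import math
import numpy as np
from scipy.optimize import minimize

# Base cross-section (case 1.1 in Table 5.1) and assumed bounds on each element (m).
I0 = np.array([2.5, 3.7, 1.15, 1.15, 3.7, 2.5])
BOUNDS = [(0.6, 3.1), (3.3, 3.7), (0.6, 3.1), (0.6, 3.1), (3.3, 3.7), (0.6, 3.1)]

def compound_cmfs(I):
    """Compound CMFs of Eqs. (5.2) and (5.3)."""
    inside = math.exp(-0.021*(3.28*I[0]-10) - 0.047*(3.28*I[1]-12) - 0.021*(3.28*I[2]-4))
    outside = math.exp(-0.021*(3.28*I[5]-10) - 0.047*(3.28*I[4]-12) - 0.021*(3.28*I[3]-4))
    return inside, outside

def pnc_of_clearance(c):
    """Hypothetical stand-in for the FERUM reliability analysis: a smooth curve
    that makes Pnc decrease as the lateral clearance c (m) grows."""
    return 1.0 / (1.0 + math.exp(3.0 * (c - 1.5)))

def objective(I, alphas=(1.0, 1.0, 1.0), vi=1.0, vo=1.0):
    """Eq. (5.1) evaluated with the Pnc-augmented CMFs of Eqs. (5.8)-(5.10)."""
    a1, a2, a3 = alphas
    pnc_i, pnc_o = pnc_of_clearance(I[0]), pnc_of_clearance(I[3])
    pnc_i0, pnc_o0 = pnc_of_clearance(I0[0]), pnc_of_clearance(I0[3])
    cmf_i, cmf_o = compound_cmfs(I)
    cmf_i0, cmf_o0 = compound_cmfs(I0)
    cmf_new = (vi*cmf_i*math.exp(1.461*pnc_i) + vo*cmf_o*math.exp(1.461*pnc_o)) / (vi + vo)
    cmf_new0 = (vi*cmf_i0*math.exp(1.461*pnc_i0) + vo*cmf_o0*math.exp(1.461*pnc_o0)) / (vi + vo)
    pnc_ratio = max(pnc_i, pnc_o) / min(pnc_i, pnc_o)   # risk-balance component
    pnc_avg = (vi*pnc_i + vo*pnc_o) / (vi + vo)         # average-risk component
    return a1*pnc_ratio + a2*cmf_new/cmf_new0 + a3*pnc_avg

# Constraint (5.5): the total roadway width is preserved; (5.6)-(5.7) enter as bounds.
constraints = [{"type": "eq", "fun": lambda I: I.sum() - I0.sum()}]

result = minimize(objective, I0, method="SLSQP", bounds=BOUNDS, constraints=constraints)
print(np.round(result.x, 2), round(float(result.fun), 3))
```

Because the total width is fixed by the equality constraint, widening one shoulder necessarily narrows another element, which is exactly the trade-off examined in the discussion of the results below.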
5.2.3 Cross-Section Optimization Results
The optimization was conducted for the nine case studies presented in Table 5.1. There were three different scenarios (listed in Table 5.2) carried out to investigate the effect of various components on the overall objective function.

Table 5.2 Summary of the scenarios

Scenario                                                             α1   α2   α3
1. Minimizing CMF ratio                                              0    1    0
2. Minimizing CMF ratio & balancing risk across both carriageways    1    1    0
3. Balancing risks, while minimizing average risk & collisions       1    1    1

The first scenario (0,1,0) allows the investigation of the isolated effect of optimization by minimizing the CMF ratio. However, this method might produce results with unbalanced risk for each carriageway. To circumvent this, the second scenario (1,1,0) is studied, which allows for minimizing the CMF ratio subject to having balanced risk for each carriageway. The third scenario (1,1,1) is studied to help designers optimize the dimensions of the cross-section elements by balancing the risks across both carriageways while minimizing the average risk as well as collisions. Table 5.3 shows the dimensions of the optimized cross-sections for the nine case studies under consideration.

Table 5.3 Summary of optimization results for the three scenarios 1, 2 and 3, respectively (widths in metres)

Scenario 1 (0, 1, 0)
       Inside carriageway      Outside carriageway     Objective   Pnc current     Pnc optimal     Objective function components
Case   Outer   Lane    Inner   Inner   Lane    Outer   function    Inner   Outer   Inner   Outer   Pnc ratio   CMF ratio   Pnc avg
       shldr   width   shldr   shldr   width   shldr
1.1    3.03    3.66    0.74    2.98    3.66    0.64    0.60        0.30    0.71    0.19    0.16    1.19        0.60        0.18
1.2    3.03    3.63    0.73    2.96    3.63    0.72    0.59        0.23    0.78    0.14    0.23    1.71        0.59        0.19
1.3    2.97    3.65    0.84    2.97    3.65    0.61    0.61        0.32    0.69    0.22    0.15    1.50        0.61        0.19
1.4    3.02    3.66    0.71    2.98    3.66    0.66    0.59        0.26    0.74    0.16    0.19    1.18        0.59        0.18
1.5    3.02    3.66    0.67    2.98    3.66    0.71    0.59        0.27    0.73    0.17    0.18    1.08        0.59        0.18
2.1    3.01    3.62    1.74    2.96    3.61    0.66    0.73        0.45    0.39    0.31    0.09    3.45        0.73        0.20
2.2    2.99    3.66    0.85    3.00    3.66    1.45    0.68        0.48    0.75    0.34    0.38    1.12        0.68        0.36
2.3    2.93    3.64    1.90    2.81    3.64    0.68    0.72        0.56    0.56    0.23    0.44    1.88        0.72        0.33
2.4    2.91    3.63    0.85    2.92    3.61    1.69    0.71        0.53    0.61    0.25    0.42    1.65        0.71        0.34

Scenario 2 (1, 1, 0)
       Inside carriageway      Outside carriageway     Objective   Pnc current     Pnc optimal     Objective function components
Case   Outer   Lane    Inner   Inner   Lane    Outer   function    Inner   Outer   Inner   Outer   Pnc ratio   CMF ratio   Pnc avg
       shldr   width   shldr   shldr   width   shldr
1.1    2.76    3.66    0.92    2.55    3.66    1.15    1.66        0.30    0.71    0.24    0.24    1.00        0.66        0.24
1.2    2.51    3.66    1.02    2.98    3.66    0.87    1.62        0.23    0.78    0.23    0.23    1.00        0.62        0.23
1.3    2.97    3.66    0.81    2.58    3.66    1.02    1.66        0.32    0.69    0.22    0.22    1.00        0.64        0.22
1.4    2.58    3.65    1.10    2.70    3.65    1.02    1.65        0.29    0.72    0.25    0.25    1.00        0.65        0.25
1.5    2.78    3.62    0.91    2.81    3.62    0.95    1.63        0.27    0.73    0.22    0.22    1.00        0.63        0.22
2.1    2.75    3.66    1.44    1.73    3.64    2.38    1.93        0.45    0.39    0.37    0.37    1.00        0.93        0.37
2.2    2.79    3.66    1.24    2.96    3.66    1.30    1.71        0.48    0.75    0.39    0.39    1.00        0.71        0.39
2.3    2.91    3.66    1.28    2.02    3.66    2.07    1.84        0.56    0.56    0.44    0.44    1.00        0.84        0.44
2.4    2.87    3.63    1.62    2.24    3.64    1.61    1.81        0.53    0.61    0.43    0.43    1.00        0.81        0.43

Scenario 3 (1, 1, 1)
       Inside carriageway      Outside carriageway     Objective   Pnc current     Pnc optimal     Objective function components
Case   Outer   Lane    Inner   Inner   Lane    Outer   function    Inner   Outer   Inner   Outer   Pnc ratio   CMF ratio   Pnc avg
       shldr   width   shldr   shldr   width   shldr
1.1    2.92    3.66    0.76    2.70    3.66    1.00    1.84        0.30    0.71    0.21    0.21    1.00        0.63        0.21
1.2    2.52    3.66    1.03    2.99    3.66    0.83    1.84        0.23    0.78    0.23    0.23    1.00        0.62        0.23
1.3    2.98    3.66    0.80    2.59    3.66    1.01    1.87        0.32    0.69    0.22    0.22    1.00        0.64        0.22
1.4    2.87    3.66    0.78    2.90    3.66    0.84    1.81        0.29    0.72    0.20    0.20    1.00        0.61        0.20
1.5    2.90    3.47    0.98    2.90    3.52    0.93    1.84        0.27    0.73    0.21    0.21    1.00        0.63        0.21
2.1    2.75    3.66    1.44    1.73    3.64    2.38    2.30        0.45    0.39    0.37    0.37    1.00        0.93        0.37
2.2    2.81    3.66    1.25    2.99    3.66    1.24    2.08        0.48    0.75    0.38    0.38    1.00        0.70        0.38
2.3    2.93    3.66    1.23    2.03    3.66    2.10    2.27        0.56    0.56    0.44    0.44    1.00        0.83        0.44
2.4    2.87    3.63    1.61    2.24    3.64    1.61    2.24        0.53    0.61    0.43    0.43    1.00        0.80        0.43
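To see how the tabulated components combine into the objective function value, take case 1.1 under scenario 3, where (α1, α2, α3) = (1, 1, 1): the optimal Pnc values are 0.21 for both carriageways, so the Pnc ratio is 0.21/0.21 = 1.00, the CMF ratio is 0.63 and the average Pnc is 0.21, giving C = 1(1.00) + 1(0.63) + 1(0.21) = 1.84, which is the objective function value reported for that case.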
5.2.4 Discussion
The dimensions of the cross-section elements before and after optimization differ significantly, as can be seen by comparing the dimensions displayed in Table 5.1 with those in Table 5.3. Before the optimization, the reliability outcomes Pnci and Pnco, which represent the risk measures for the inside and outside carriageways under the current dimensions, were not only high but also highly unbalanced. After optimization, the following conclusions are evident:

1. The first scenario, which involves only the middle component of the objective function (5.1), exhibits the highest reduction in collisions among all combinations of the weighting factors. However, although the results show a reduction in the probability of non-compliance after optimization, the probabilities for the two carriageways are still unbalanced. This means that the collision risk is likely to be higher for one carriageway than for the other.

2. Reducing collisions and balancing risk across the two carriageways appear to be competing objectives. This is evident in the second scenario, which involves only the first two components of the objective function (5.1), where the reduction of the Pnc ratio to its minimum value of one leads to an increase in the CMF ratio. However, it was still possible to achieve a better overall decrease in risk compared to before the optimization.

3. The third scenario, which involves all three components of the objective function (5.1), achieves risk balance (a Pnc ratio of one) as well as risk and collision reductions.

4. For all scenarios, it was assumed that the traffic volume is expected to be equal for both carriageways. If that is not the case, practitioners can allow different traffic volumes to be incorporated into the optimization process. This more customized approach would assist designers in considering indirect effects that highly influence the outcome of the design process.

5. Upon investigating the results of the optimization process, it is noticeable that the inner shoulder width of the inside carriageway and the outer shoulder width of the outside carriageway are consistently assigned lower values. This can be explained by the fact that these widths are not included in the calculation of the probability of non-compliance. For instance, to reduce Pnc for the inside carriageway, the objective function will attempt to assign a larger value to the outer shoulder. Since the total width of the roadway segment must remain the same, the objective function will balance this by assigning a lower value to the inner shoulder. A similar argument holds for the outside carriageway (the sketch following this list illustrates how the lateral clearance alone drives Pnc).

6. Comparing the current results with those of Ismail and Sayed (2010), it is apparent that risk balance (across carriageways) and overall risk reduction have been accomplished. Yet, while Ismail and Sayed attained these two objectives without increasing the CMF ratio, the current approach has made it possible to optimize the dimensions of the cross-section elements so that an additional reduction in collisions is also realized. This reduction is due mainly to the established relationship between Pnc, as a reliability-based risk measure, and collisions, which is represented in the CMF ratio through Equations (5.8) and (5.9).
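Point 5 turns on the fact that only the lateral clearance enters the sight-distance limit state. The thesis evaluates Pnc with FORM through the FERUM package; purely to illustrate that mechanism, the Monte Carlo sketch below estimates Pnc = P(ASD < SSD) for a circular curve, treating the lateral clearance as the middle ordinate of the sight line. The input distributions, their parameters and the clearance values are assumptions chosen for illustration, not values taken from the thesis.

```python
import numpy as np

def pnc_monte_carlo(radius, clearance, grade=0.0, n=200_000, seed=0):
    """Crude Monte Carlo estimate of Pnc = P(ASD < SSD) on a horizontal curve.

    radius and clearance are in metres; grade is a decimal (negative = downgrade).
    All input distributions below are assumed for illustration only.
    """
    rng = np.random.default_rng(seed)
    speed = rng.normal(80.0, 10.0, n) / 3.6               # operating speed, assumed N(80, 10) km/h, in m/s
    prt = rng.lognormal(np.log(1.5), 0.4, n)              # perception-reaction time (s), assumed lognormal
    decel = np.clip(rng.normal(3.4, 0.6, n), 1.0, None)   # deceleration rate (m/s^2), assumed normal

    # Demand: stopping sight distance with a grade-adjusted braking term.
    ssd = speed * prt + speed**2 / (2.0 * (decel + 9.81 * grade))

    # Supply: available sight distance from the middle-ordinate relation of a
    # circular curve, with the lateral clearance acting as the middle ordinate.
    asd = 2.0 * radius * np.arccos(np.clip(1.0 - clearance / radius, -1.0, 1.0))

    return float(np.mean(asd < ssd))

# Widening the clearance (e.g., the outer shoulder of the inside carriageway)
# lowers the estimated Pnc, while the remaining widths never enter the calculation.
for clearance in (2.0, 3.0, 4.0):
    print(clearance, round(pnc_monte_carlo(radius=440.0, clearance=clearance, grade=-0.007), 3))
```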
5.3 Summary
This Chapter presented an application of how the SPFs developed in earlier chapters could be used when designing a cross-section with the aim of reducing collisions. Before optimization, the Pnc values were high and unbalanced across carriageways, suggesting that one carriageway was less safe than the other. After the optimization was carried out, the risk not only became consistent across the two carriageways but was also considerably reduced. In addition, further collision reductions were achieved by accounting for the random variations due to drivers' behavior.

The methodology of this chapter, which is built upon that of Ismail and Sayed (2010), adds value to this area of research as it allows practitioners to identify and modify the dimensions of the elements of cross-sections with restricted sight distance in order to reduce not only risk, but also collisions. As the use of deterministic values for design inputs does not account for uncertainty, reliability analysis is needed to handle random inputs and assess the ensuing risk. Once the outcomes of reliability analysis (e.g., Pnc) are represented in the CMF ratio, it becomes possible to minimize collisions as a means of improving safety, not merely to reduce the risk (Pnc).

6 Summary, Conclusions, Contributions and Future Research

6.1 Summary and Conclusions
Several studies have noted the stochastic nature of geometric design variables and parameters and recommended the adoption of a probabilistic design approach such as reliability analysis. Reliability analysis can be used to evaluate the risk associated with particular design features and can be most useful in complicated design situations where it may be difficult to find collision prediction models and collision modification factors that adequately describe the design scenario.

However, one main factor that has been inhibiting a wider use of reliability- or risk-based geometric design is the lack of an established link between reliability measures and objective safety measures such as collisions. This thesis developed safety performance functions that incorporate the probability of non-compliance (Pnc) as a measure of design risk. Using a dataset on horizontal curves that comprises geometric design features, collision and traffic volume data for the Trans-Canada Highway in British Columbia, three safety performance functions were developed relating the probability of non-compliance (due to restricted sight distance) to total, severe (I+F), and property damage only (PDO) collisions.

The three SPFs provided good fits to the data and showed that predicted collisions have statistically significant positive relationships with Pnc, i.e., lower-risk segments are associated with lower predicted collisions, as expected. An alternative model based on AADT, radius and superelevation was considered in order to compare NB models with and without reliability-based risk measures. The conclusion was that the model with Pnc outperformed the alternative model without Pnc. For the total predicted collisions, the likelihood ratio test was significant at the 10% level, while it was significant at the 5% level for I+F and PDO collisions. Therefore, using a reliability-based risk measure such as Pnc to develop SPFs not only provided a good fit to the dataset at hand, but also improved the fit over the traditional NB models for both severity levels as well as for the total predicted collisions.
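The comparison described above is a standard likelihood ratio test between nested NB models, with and without the Pnc term. As a reminder of the mechanics only (the log-likelihood values below are hypothetical, not those of the thesis), a minimal sketch:

```python
from scipy.stats import chi2

def likelihood_ratio_pvalue(loglik_without_pnc, loglik_with_pnc, df=1):
    """P-value of the likelihood ratio test for adding one parameter (Pnc)."""
    statistic = 2.0 * (loglik_with_pnc - loglik_without_pnc)
    return chi2.sf(statistic, df)

# Hypothetical maximized log-likelihoods of two nested NB SPFs:
print(round(likelihood_ratio_pvalue(-312.4, -309.1), 3))   # about 0.01, so the Pnc term improves the fit
```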
The significance of the present results stems from (i) quantifying the inherent random variation associated with such important input variables as operating speed, perception-reaction time and driver deceleration rate in terms of the Pnc risk measure, and (ii) establishing the relationship between reliability-based risk measures and predicted collisions. This latter relationship can be used to aid the decision maker in determining the safety implications of deviating from geometric design standards and in quantifying the safety level built into design values that are deemed acceptable (or unacceptable).

As the parameter estimates in the NB models incorporating Pnc are likely to be overstated due to omitted variables that are correlated with Pnc, spatial analysis was used in order to overcome the omitted variables bias. Using EMM spatial specifications for the SPFs has indeed overcome the models' misspecification, and the results of the spatial analysis provided strong evidence supporting the integration of spatial effects in SPFs.

The results obtained from the reliability analysis of restricted sight distance on horizontal curves and from fitting the SPFs incorporating reliability-based risk measures are useful for researchers in their continuous endeavor to improve the performance of models used for developing geometric design guidelines. To emphasize this point, a case study was applied to nine cross-sections on the Trans-Canada Highway. The purpose of the analysis was to optimize the roadway cross-section by minimizing collisions and overall (average) risk, while maintaining balanced risks for both carriageways. A multiple objective function comprising three different components was developed: one component aimed at reducing the number of collisions after the optimization, another at minimizing the overall (average) risk for both directions of travel, and the third at balancing the risk between the two roadway carriageways. Six different geometric elements were optimized using the Sequential Quadratic Programming (SQP) algorithm. The results showed that an additional reduction in collisions can be realized by incorporating the reliability component (Pnc) in the optimization process.

6.2 Research Contributions
This research has contributed to the state-of-the-art literature on the geometric design of roadways in several ways. Firstly, empirical data were used to establish the link between theoretical design non-compliance and road collisions. It was shown that the incorporation of design non-compliance in Safety Performance Functions (SPFs) can improve the predictive ability of the models and explain considerable variation in the data. A second major contribution of this research is accounting for spatial effects along with the probability of non-compliance in the development of SPFs. The proposed models fitted the data significantly better than their traditional counterparts, and the nature of the relationships between the different variables and collision occurrence became more evident. The last contribution of this thesis is presenting an approach to designing the cross-section elements of a roadway facility based on three different safety criteria.

In summary, the current study can be considered a step forward towards safety-oriented geometric design of roadways, which is gaining more ground and attention, especially with the large increase in fatalities on highway networks.
A general framework of how risk-based design could improve safety was presented. Although the main geometric feature studied was horizontal curves with restricted sight distance, the proposed framework could be adapted to other geometric features.

6.3 Future Research
Although the current research has covered many aspects of the described methodologies, some areas still need further refinement and investigation. For example, the probability distributions of the design inputs need to be identified through field observations. The present information is either not very reliable (e.g., the speed distribution is assumed to be normal and its mean and standard deviation are obtained from models) or has not been updated (e.g., PRT and deceleration rate). The very basis of reliability analysis is founded on these probability distributions; the more accurate these distributions are, the more credible the results will be. With regard to the speed distribution, a sensitivity analysis can be carried out using different values of the standard deviation to assess the sensitivity of the reliability analysis outcome (Pnc) to the various assumptions about the speed distribution. Moreover, data collection could be facilitated by means of automatic techniques such as video tracking. Information about the respective variables would be more accurate, as it would be derived from real-world case studies rather than controlled experiments. The variables in the present study are all considered to be statistically independent; while there is no information that contradicts this assumption, it would be interesting to investigate it and to evaluate how correlation among the input variables would affect the outcomes of reliability analysis.

Another potential study direction would be to investigate alignments with overlapping horizontal and vertical curves and the effect of this overlap on the safety of the roadway. There have been several studies that explored the effect of overlapping alignment; however, coupling this with reliability analysis should provide more insight. Comparisons between overlapping alignments and horizontal or vertical alignments alone would allow designers to evaluate the safest way to design overlapping alignments.

Current design guidelines can be evaluated to identify the risk associated with each design feature. However, future research could be dedicated to the selection of a suitable target risk level (Pnc), moving backwards to find the corresponding design features associated with that risk level. Now that the relationship between Pnc and collisions has been identified, code calibration can include useful elements of cost-benefit analysis.

Bibliography
AASHTO. 2004. A Policy on Geometric Design of Highways and Streets, Washington, DC.
Aitkin, M., D. Anderson, B. Francis, and J. Hinde. 1989. Statistical Modeling in GLIM, Oxford University Press, New York.
Aguero-Valverde, J. and P.P., Jovanis. 2006. Spatial analysis of fatal and injury crashes in Pennsylvania. Accident Analysis and Prevention, Vol. 38, pp. 618–625.
Aguero-Valverde, J. and P.P., Jovanis. 2008. Traffic Analysis of Road Crash Frequency Using Spatial Models. The 87th Annual Meeting of the Transportation Research Board, Washington, D.C.
Ang, A.H-S., and W.H. Tang. 1975. Probability Concepts in Engineering Planning and Design, Vol 1, Basic Principles, John Wiley, New York.
Bedrick, E.J., R. Christensen, and W., Johnson. 1996. A New Perspective on Priors for Generalized Linear Models.
Journal of the American Statistical Association, Vol. 91, pp. 1450-1460. Black, W.R. and I., Thomas. 1998. Accidents on Belgium’s motorways: a network autocorrelation analysis. Journal of Transport Geography, Vol. 6, No. 1, pp. 23-31. Blischke, W. R. and D. N. P. Murthy. 2000. Reliability: Modeling, Prediction and Optimization. New York. John Wiley & Sons, Inc.  102  Bonnans, J.F., J. C. Gilbert, C. Lemarechal, and C. A. Sagastizabal. 2009. Numerical Optimization: Theoretical and Practical Aspects (Second Edition). Springer, series. Brooks, S.P. and A., Gelman. 1998. Alternative methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, Vol.7, pp. 434-455. Cameron, A.C. and Trivedi, P.K., 1998. Regression Analysis of Count Data. Econometric Society Monographs 30, New York: Cambridge University Press. Congdon, P.  2006. Bayesian Statistical Modeling. Wiley: New York, 2nd edition. Crowel, B. N. 1989. Highway Design Standards – Their Formulation, Interpretation, and Application. In proceedings of Seminar NPTRC Education and Research Services, London. de Leur, P., 2001. Improved approaches to manage road safety infrastructure. Ph.D. Dissertation. Department of Civil Engineering, University of British Columbia, Vancouver, B.C., Canada. de Solminihac, H.E., T. Echaveguren, and S. Vargas. 2007. Friction Reliability Criteria Applied to Horizontal Curve Design of Low-Volume Roads. Transportation Research Record: Journal of the Transportation Research Board, No. 1989, Transportation Research Board of the National Academies, Washington, D.C., pp. 138-147. Ditlevsen, O. and H. O. Madsen. 2007. Structural Reliability Methods. Coastal, Maritime and Structural Engineering, Department of Mechanical Engineering. Denmark. pp. 13-34. 103  Easa, S.M. (1993). Reliability-Based Design of Intergreen Interval at Traffic Signals. Journal of Transportation Engineering, ASCE, Vol. 19 (2), pp. 255-271. Easa, S.M. (1994 a). Reliability-Based Design of Sight Distance at Railroad Grade Crossings, Transportation Research Part A, Vol. 28 (1), Pergamon Press plc, pp. 1-15. Easa, S. M. 2000. Reliability Approach to Intersection Sight Distance Design. Transportation Rsearch Record: Journal of Transportation Research Board, No.1701, Transportation Research Board of the National Academies, Washington, D.C., pp. 42-52. Easa, S. M., and Q.C. You. 2009. Collision Prediction Models for Three-Dimensional Two-Lane Highways: Horizontal Curves. Transportation Research Record: Journal of the Transportation Research Board, No. 2092, Transportation Research Board of the National Academies, Washington, D.C., pp. 48-56. Echaveguren, T., M. Bustos, and H. de Solminihac. 2005. Assessment of Horizontal Curves of an Existing Road using Reliability Concepts. Canadian Journal of Civil Engineering, Vol. 32, No. 6, pp. 1030-1038. El-Basyouny, K. and Sayed, T. 2006. Comparison of Two Negative Binomial Regression Techniques in Developing Accident Prediction Models. Transportation Research Record. 1950, pp. 9-16. El-Basyouny, K. and Sayed, T. (2009a). Collision Prediction Models using Multivariate Poisson-lognormal Regression. Accident Analysis and Prevention. 41, pp. 820-828. 104  El-Basyouny, K. and Sayed, T. (2009b). Accident Prediction Models with Random Corridor Parameters. Accident Analysis and Prevention. 41, pp. 1118-1123. El-Basyouny, K. and Sayed, T. (2009c). Urban Arterial Accident Prediction Models with Spatial Effects. 
Transportation Research Record: Journal of the Transportation Research Board. 2102, pp. 27-33. El-Basyouny, K. and Sayed, T. (2010a). Application of Generalized Link Functions in Developing Accident Prediction Models. Safety Science. 48, pp. 410-416. El-Basyouny, K. and Sayed, T. (2010b). Safety Performance Functions with Measurement Errors in Traffic Volume. Safety Science. 48, pp. 1339-1344. El-Basyouny, K. and Sayed, T. (2010c). A Method to Account for Outliers in the Development of Accident Prediction Models. Accident Analysis and Prevention. 42, pp. 1266–1272. El-Basyouny, K. and Sayed, T. (2011). A Multivariate Intervention Model with Random Parameters among Matched Pairs for Before-After Safety Evaluation. Accident Analysis and Prevention. 43, 87–94. El-Khoury, J. and A. Hobeika. 2007. Incorporating Uncertainty into the Estimation of the Passing Sight Distance Requirements. Computer-aided Civil and Infrastructure Engineering, Vol. 22, pp. 347-357. 105  Ellingwood, B.R., T.V. Galambos, J.G. MacGregor, and C.A. Cornell. 1980. Development of probability-based load criterion for American National Standard. A58. Special Publication 577, National Bureau of Standards, Washington. Faber, M. 2006. Risk and Safety in Civil, Surveying and Environmental Engineering. Swiss Federal Institute of Technology. Switzerland.  Faghri, A., and M.J. Demetsky. 1988. Reliability and Risk Assessment in the Prediction of Hazards at Rail-highway Grade Crossing. Transportation Research Record 1160: Journal of the Transportation Research Board, Transportation Research Board of the National Academies, Washington, D.C. pp. 45-51. Fambro, D. Fitzpatrick K., and Koppa, R. 1997. Determination of Stopping Sight Distance. NCHRP 19 Report 400, National Research Council, Washington, D.C..  FHWA. 2009. Safety Effects of Cross-Section Design on Rural Multilane Highways. Summary Report, FHWA, U.S. Department of Transportation. Francis, B., Green, M. and Payne, C., 1993. The GLIM System: Release 4 Manual. Clarendon Press, Oxford. Gockenbach, M. S. 2003. Introduction to Sequential Quadtratic Programming. Pp 1 to 7.  Goldstein, H. 1995. Multilevel Statistical Models. London: Arnold. 106  Goldstein, H., J. Rasbash, I. Plewis, D. Draper, W. Browne, M. Yang, G. Woodhouse, and H. Healy. 1998. A User’s Guide to MLwiN. London: Institute of Education. Hadayeghi, A. 2009. Use of Advanced Techniques to Estimate Zonal Level Safety Planning Models and Examine their Temporal Transferability. PhD. Thesis. University of Toronto. Hadi, M. A., J. Aruldhas, L. Chow and J. A. Wattleworth. 1995. Estimating Safety Effects of Cross-Section Design for Various Highway Types Using Negative Binomial Regression. Transportation Research Record: Journal of the Transportation Research Board, No. 1500, Transportation Research Board of the National Academies, Washington, D.C., pp. 169-177.  Harwood, D., E.R. Kohlman Rabbani, K.R. Richard, H.W. McGee, G.L. Gittings. 2003. NCHRP Report 486: System-wide Impacts of Safety & Traffic Operations Design Decisions for 3R Projects. TRB, Washington D.C. Hauer, E., J.C.N. Ng, and J. Lovell. 1988. Estimation of Safety at Signalized Intersections. Transportation Research Record, 1185, pp. 48-61. Hauer, E. 1997. Observational Before-After Studies in Road Safety: Estimating the Effect of Highway and Traffic Engineering Measures on Road Safety. Elsevier Science Ltd., Amsterdam, Netherlands. 107  Hauer, E. 1999. Safety in Geometric Design Standards I: Three Anecdotes. 
Proceedings of the 2nd International Symposium of Highway Geometric Design. R. Krammes and W. Brillon, eds. Forshungsgeselschaft fur Strassen und Verkehrsvesen e.V., Koln, 11–23. Haukaas, T. Engineering Decision Making with Numerical Simulation Models. Vancouver, B.C., Canada, 2007. Heydecker, B.G. and Wu, J., 2001. Identification of sites for accident remedial work by Bayesian statistical methods: an example of uncertain inference. Advances in Engineering Software. 32, 859-869. Hurtado, J. E. 2004. Structural Reliability: Statistical Learning Perspectives. Berlin. Springer. Pp. 4-13. Ibrahim, J.G., and M.-H., Chen. 2000. Power Prior Distributions for Regression Models. Statistical Science, Vol. 15, No. 1, pp. 46–60. Islam, M.N., and Seneviratne, P.N. 1994. Evaluation of Design Consistency on Two-lane Highways. ITE Journal, Vol. 64. No. 2. pp. 28–31. Ismail, K., and T. Sayed. 2009. A Risk-based Framework for Accommodating Uncertainty in Highway Geometric Design. Canadian Journal of Civil Engineering, Vol. 36, No. 5, pp. 743-753. Ismail, K., and T. Sayed. 2010. Risk-Based Highway Design: Case Studies from British Columbia, Canada. Transportation Research Record, Journal of the Transportation 108  Research Board, No. 10-0185, Transportation Research Board of the National Academies, Washington, D.C. Jha, M., and P. Schonfeld. 2000. Integrating Genetic Algorithms and GIS to Optimize Highway Alignments. Transportation Research Record, Journal of the Transportation Research Board, No. 1719, pp. 233-240. Joshua, C. and N.J. Garber. 1990. Estimating truck accident rate and involvements using linear and Poisson regression models. Transportation Planning and Technology. Vol. 15, pp. 41–58. Jovanis, P.P., and H. Chang. 1986. Modeling the Relationship of Accidents to Miles Traveled. Transportation Research Record: Journal of the Transportation Research Board, No. 1068, Transportation Research Board of the National Academies, Washington, D.C., pp. 42-51. Kanellaidis, G., J. Golias, and S. Efstathiadis. 1990. Driver's speed behavior on rural road curves. Traffic Engineering and Control, Vol. 31 No. (7/8): pp. 414–415. Kim, H., D. Sun, and R.K. Tsutakawa. 2002. Lognormal vs. gamma: extra variations. Biometrical Journal, Vol. 44, No. 3, pp. 305–323.  Kulmala, R., and M. Roine. 1988. Accident Prediction Models for Two-Lane Roads in Finland. Technical Research Centre of Finland, Traffic Safety Theory and Research Methods, Session 4, Statistical Analysis and Models. Amsterdam. 109  Kulmala, R. 1995. Safety at Rural Three-and Four-arm Junctions. Development of Accident Prediction Models. Technical Research Centre of Finland, VTT 233, Espoo. Langford, I.H., A.H. Leyland, J. Rasbash, and H., Goldstein. 1999. Multilevel modeling of the geographical distributions of diseases. Applied Statistics, Vol. 48, No. 2, pp. 253-268. Lamm, R., Choueiri, E.M., and Hayward, J.C. 1988. Tangent as an independent design element. Transportation Research Record: Journal of the Transportation Research Board, No. 1195, Transportation Research Board of the National Academies, Washington, D.C., pp. 123-131. Lamm, R., B. Psarianos, and T. Mailaender. 1999. Highway Design and Traffic Safety Engineering Handbook. McGraw-Hill Inc., New York. Lane, P.W., Galwey, N.W. and Alvey, N.G., 1988. GENSTAT 5: an Introduction. Oxford University Press, Oxford. Lee, J., D. Park, D. Nam. 2005. Analyzing the Relationship Between Grade Crossing Elements and Accidents. Journal of the Eastern Asia Society for Transportation Studies. Vol. 6, pp. 3658-3668. 
Lerner, N. 1995. Age and Driver Perception-Reaction time for Sight Distance Design Requirements. ITE compendium of Technical Papers, Institute of Transportation Engineers, pp. 624- 628.  110  Levine, N., K.E. Kim, and L.H., Nitz. 1995a. Spatial analysis of Honolulu motor vehicle crashes (I): Spatial patterns. Accident Analysis and Prevention, Vol. 27, No. 5, pp. 663–674. Levine, N., K.E. Kim, and L.H., Nitz. 1995b. Spatial analysis of Honolulu motor vehicle crashes (II): Zonal generators. Accident Analysis and Prevention, Vol. 27, No. 5, pp. 675–685. Lord, D., S.P. Washington, and J.N. Ivan. 2005. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory. Accident Analysis and Prevention, Vol. 37, pp. 35-46. Lord, D. and Miranda-Moreno, L.F., 2008. Effects of low sample mean values and small sample sizes on the parameter estimation of hierarchical Poisson models for motor vehicle crashes: a Bayesian perspective. Safety Science. Vol. 46, pp. 751-770. Lovegrove, G.R., 2006. Community-Based, Macro-level Collision Prediction Models. Ph.D. Dissertation. Department of Civil Engineering, University of British Columbia, Vancouver, B.C., Canada. Lovelace, A.M.  (1972). Air Force/Industry Manufacturing Cost Reduction Study. FRL-ML-WP-TR-1998-4131, Materials and Manufacturing Technology Directorate, Air Force Research Laboratory. Lunn, D.J., Thomas, A., Best, N. and Spiegelhalter, D., 2000. WinBUGS - a Bayesian modeling framework: concepts, structure, and extensibility. Statistics and Computing, 10, 325-337. 111  Mayer, M. 1926. Die Sicherheit der Bauwerke und ihre Berechnung nach Grenzkraeften anstatt nach zulaessigen Spanungen. Springer, Berlin (In German). MacNab, Y.C. 2004. Bayesian spatial and ecological models for small-area accident and injury analysis. Accident Analysis and Prevention, Vol. 36, No. 6, 1028–1091. Maher, M.J., and I. Summersgill. 1996. A Comprehensive Methodology for the Fitting of Predictive Accident Models. Accident Analysis and Prevention, Vol. 28, No. 3, pp. 281-296. Maycock, G., and R.D. Hall. 1984. Accidents at 4-arm Roundabouts. TRRL Laboratory Report 1120, UK Transport and Road Research Laboratory, Crowthorne, Berkshire, England. McCullagh, P., and J.A. Nelder. 1998. Generalized linear models, Chapman and Hall, New York. McGee, H. W., W. E. Hughes, K. Daily. 1995. NCHRP Report 374: Effect of Highway Standards on Safety. TRB. Washington D.C. Melchers, R.E. 1999. Structural Reliability Analysis and Probability, Wiley, Chilchester, New York. Miaou, S.P., P.S. Hu, T. Wright, S.C. Davis, and A. K. Rathi. 1991. Development of Relationships between Truck Accidents and Highway Geometric Design: Phase I, Technical Memorandum prepared by the Oak Ridge National Laboratory for the Federal Highway Administration. 112  Miaou, S.P., P.S. Hu, T. Wright, A.K. Rathi, and S.C. Davis. 1992. Relationship Between Truck Accidents and Highway Geometric Design: A Poisson Regression Approach. Transportation Research Record: Journal of the Transportation Research Board, No. 1376, Transportation Research Board of the National Academies, Washington, D.C., pp. 10-18. Miaou, S.P., and H. Lum. 1993. Modeling Vehicle Accidents and Highway Geometric Design Relationships. Accident Analysis and Prevention. Vol. 25, pp. 689-709. Miaou, S.P. 1994. The Relationship Between Truck Accidents and Geometric Design of Road Sections: Poisson versus Negative Binomial Regressions. Accident Analysis and Prevention. Vol. 26, pp. 471-482. Miaou, S.P. 
and D., Lord. 2003. Modeling Traffic Crash-flow Relationships for Intersections: Dispersion Parameter, Functional Form, and Bayes versus Empirical Bayes. Transportation Research Record, 1840, pp. 31-40. Miaou, S., Song, J.J. and B.K., Mallick. 2003. Roadway traffic crash mapping: a space–time modeling approach. Journal of Transportation and Statistics, Vol. 6, No.1, pp.33–57. Miaou, S., and J., Song. 2005. Bayesian ranking of sites for engineering safety improvements: Decision parameter, treatability concept, statistical criterion, and spatial dependence.  Accident Analysis and Prevention, Vol. 37, No. 4, pp. 699–720. 113  Milton, J. and F Mannering. 1998. The Relationship Among Highway Geometries, Traffic Related Elements and Moto Vehicle Accident Frequencies. Transportation. Vol. 25, pp. 395-413. Miranda-Moreno, L.F., Fu, L., Saccomanno, F.F. and Labbe, A., 2005. Alternative Risk Models for Ranking Locations for Safety Improvement. Transportation Research Record. 1908, pp. 1-8.  Miranda-Moreno, L.F., 2006. Statistical models and methods for identifying hazardous locations for safety improvements. Ph.D. Dissertation. Department of Civil Engineering, University of Waterloo, Waterloo, Ontario, Canada. Miranda-Moreno, L.F., D., Lord, and L. Fu. 2007. Evaluation of alternative hyper-priors for Bayesian road safety analysis. Presented at the 87th Annual Meeting of the Transportation Research Board, Washington, D.C. Morrall, J., and Talarico, R.J. 1994. Side friction demanded and margins of safety on horizontal curves. Transportation Research Record: Journal of the Transportation Research Board, No. 1435, Transportation Research Board of the National Academies, Washington, D.C., pp. 145-152.  Moyer, R.A. and Berry, D.S. 1940. Marking Highway Curves with Safe Speed Indications. Highway Research Board Proceedings, Vol. 20, 399-428. 114  MUTCD. 2009. Manual on Uniform Traffic Control Devices: for Streets and Highways. FHWA. Washington, D.C. Navin, F.P.D. 1990. Safety Factors for Road Design: Can They be Estimated? Transportation Research Record 1280: Journal of the Transportation Research Board, National Research Council, Washington, D.C. pp. 181-189 Navin, F.P.D. 1991. Safe Road Design as Limit State. In Proceedings of the Conference, Strategic Highway Research Program and Traffic Safety on Two Continents, Part Two, VTI Rapport 372A, Part 2. Nelson, W. 1982. Applied Life Data Analysis. New Jersey. John Wiley & Sons, Inc. pp. 2-5 Nicholson, A. 1999. Analysis of spatial distributions of accidents, Safety Science, Vol. 31, pp. 71-91. O’Conner, P. D. T. 2002. Practical Reliability Engineering. Chichester. John Wiley & Sons, Inc. Ottesen, J.L. and R. A. Krammes. 2000. Speed-profile model for a design-consistency evaluation procedure in the United States. Transportation Research Record: Journal of the Transportation Research Board, No. 1701, Transportation Research Board of the National Academies, Washington, D.C., pp. 76-85. Ozbay, K., O. Yanmaz-Tuzel, S.V. Ukkusure, and B. Bartin. 2009. Safety Comparison of Roadway Design Elements on Urban Collectors with Access. Publication FHWA-NJ-2009-008, FHWA, U.S. Department of Transportation. 115  Park, Y-J., and F. Saccomanno. 2005. A Structured Model for Evaluating Countermeasures at Highway-railway Grade Crossings. Canadian Journal of Civil Engineering, Vol. 32, No. 4, pp. 627-635. Rengarasu, T.M., T. Hagiwara, and M. Hirasawa. 2009. Effects of Road Geometry and Cross-Section Variables on Traffic Accidents. 
Transportation Research Record: Journal of the Transportation Research Board, No. 2102, Transportation Research Board of the National Academies, Washington, D.C., pp. 34-42. Richl, L., and T. Sayed. 2006. Evaluating the Safety Risk of Narrow Medians Using Reliability Analysis. Journal of Transportation Engineering, American Society of Civil Engineers. Vol. 132, No. 5, pp. 366-375. Rt. 2010. A User’s Guide, Vancouver, Canada. Sarhan, M., and Y. Hassan. 2008. Three Dimensional, Probabilistic Highway Design: Sight Distance Application. Transportation Research Record: Journal of the Transportation Research Board , No. 2060, Transportation Research Board of the National Academies, Washington, D.C., pp. 10-18. Sarhan, M. and Y. Hassan. 2009. Reliability-Based Methodology to Calculate Lateral Clearance on Three-Dimensional Alignment. TRB 87th Annual Meeting Compendium of Papers, Transportation Research Board of National Academies, Washington, DC. 116  Saccomanno, F.F., and C. Buyco. 1988. Generalized Loglinear Models of Truck Accident Rates.  TRB 67th Annual Meeting Compendium of Papers, Transportation Research Board of National Academies, Washington, DC. SAS Institute Inc. 2002. Version 9.1, PROC GENMOD, Cary, USA. Sawalha, Z., and T. Sayed.  2001. Evaluating Safety of Urban Arterial Roadways.  Journal of Transportation Engineering: American Society of Civil Engineers, Vol. 127, No. 2, pp. 151-158.   Sawalha, Z. and T. Sayed. 2006. Traffic Accident Modeling: Some Statistical Issues. Canadian Journal of Civil Engineers. Vol. 33, pp. 1115-1124. Schluter, P.J., J.J. Deely, and A.J. Nicholson. 1997. Ranking and Selecting Motor Vehicle Accident Sites by Using a Hierarchical Bayesian Model. The Statistician, Vol. 46, No. 3, pp. 293–316. Shankar, V., F. Mannering, and W. Barfield.  1995. Effect of Roadway Geometrics and Environmental Factors on Rural Freeway Accident Frequencies. Accident Analysis and Prevention. Vol. 27, pp. 371–389. Shankar, V., J.C. Milton, and F. Mannering. 1997. Modeling Accident Frequencies as Zero-altered Probability Process: an empirical enquiry. Accident Analysis and Prevention. Vol. 29, pp. 829-837. 117  Shen, Q. 2007. Development of Safety Performance Functions for Empirical Bayes Estimation of Crash Reduction Factors. PhD. Thesis. Florida International University. Spiegelhalter, D., A. Thomas, N. Best, and D. Lunn. 2005. WinBUGS User Manual. MRC Biostatistics Unit, Cambridge. TAC. 1999.  TAC Geometric Design Guide for Canadian Roads. Transportation Association of Canada (TAC), Ottawa, Ontario. Tidwell, J. E., and J. B. Humphreys. 1970. Relation of Signalized Intersection Level of Service to Failure Rate and Average Individual Delay. In Highway Research Record 321, HRB, National Research Council, Washington, D.C., pp. 16–32. Transport Canada. 2007. Canadian Motor Vehicle Traffic Collision Statistics. Canadian Council of Motor Transport Administrators. Treat, J. R. 1980. A Study of Precrash Factors Involved in Traffic Accidents. Highway Safety Research Institute (HSRI), The HSRI Research Review 10, 6, 11, 1. USA. Vodden, K., Smith, D., Eaton, F. and Mayhew, D., 2007. Analysis and estimation of the social cost of motor vehicle collisions in Ontario: final report. Presented to Ministry of Transportation, August. Voight, A. P. 1996. An Evaluation of Alternative Horizontal Curve Design Approaches for Rural Two-Lane Highways. Research Report 04690-3. Texas Transportation Institute, Texas A & M University. 118  Wang, J., W.E. Hughes, and R. Steward. 1998. 
Safety Effects of Cross-Section Design for Rural, Four-Lane, Non-Freeway Highways. Report No. FHWA-RD-98-071. Federal Highway Administration, Washington, D.C. Wang, X. and M., Abdel-Aty. 2006. Temporal and spatial analysis of rear-end crashes at signalized intersections. Accident Analysis and Prevention, Vol. 38, No. 6, pp. 1137-1150. Washington, S., and  J. Oh. 2007. Bayesian Methodology Incorporating Expert Judgment for Ranking Countermeasure Effectiveness Under Uncertainty: Example applied to At Grade Railroad Crossings in Korea. Accident Analysis and Prevention, Vol. 38, pp. 234–247. Winkelmann, R., 2003. Econometric Analysis of Count Data. Springer, Germany. World Health Organization (WHO), 2004. World Report on Road Traffic Injury Prevention. Geneva. Zheng, Z. R. 1997. Application of Reliability Theory to Highway Geometric Design. PhD Thesis. University of British Columbia. 
