
UBC Theses and Dissertations



Development of a New Automatic Incident Detection System for Freeways Using a Bi-Classifier Approach. Razavi, Abdolmehdi, 1998.


Full Text

DEVELOPMENT OF A NEW AUTOMATIC INCIDENT DETECTION SYSTEM FOR FREEWAYS USING A BI-CLASSIFIER APPROACH

by

Abdolmehdi Razavi
B.Sc., Shiraz University, Iran, 1986
M.Sc., Shiraz University, Iran, 1988

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF MECHANICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1998
© Abdolmehdi Razavi, 1998

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mechanical Engineering
The University of British Columbia
2324 Main Mall
Vancouver, B.C., Canada V6T 1Z4

April 1998

ABSTRACT

As much as 60 to 70% of the traffic delay experienced by motorists in North America is attributed to traffic incidents. Much of this delay is caused by vehicle accidents, vehicle stalls, and other obstructions. A substantial reduction in delay can be achieved by early detection of the incidents that cause it and a prompt response to divert the traffic in the upstream flow. Since the late 1960s, Automatic Incident Detection (AID) systems have been developed and implemented to help traffic management authorities. However, high false alarm rates and/or poor performance of the adopted AID systems have caused some authorities to abandon them.
The research presented in this thesis discusses the development and assessment of a new AID system. Often, after the occurrence of an incident, its "news" travels upstream and downstream through the traffic by means of two waves. Because of some practical difficulties, the information carried by the wave traveling downstream is overlooked by most researchers in this area. In this thesis, it is proposed that through an effective use of the information carried by this wave, it is possible to significantly improve the performance of an AID system. The proposed UBC AID system exploits the information carried by each wave independently and overcomes many of the practical difficulties by adopting a new and unique architecture. The designed architecture not only demonstrates a better performance but also has the ability to maintain that performance over a wide range of operating conditions.

Geometric and operational data from a stretch of the Trans-Canada Highway were used to develop a simulation model. This provided a very large set of simulated data under both incident and incident-free situations. Furthermore, it provided an opportunity to examine the performance and robustness of the AID systems over a wide range of geometric and operational conditions. The comparison of the UBC AID method with two other existing and in-use systems showed that it is possible to reduce the detection time by about 40% while staying within the desired range of false alarm rates. It was also possible to increase the number of incidents detected within the first few minutes after their occurrence by as much as two to three fold.
TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgement
Chapter 1 Introduction
Chapter 2 Background
  2.1 Incidents
  2.2 Traffic Flow Variables and Sensors
  2.3 Incident Patterns
  2.4 Incident-Like Traffic Patterns
  2.5 Performance Measures
  2.6 Calibration
Chapter 3 Existing AID Methods and Practices
  3.1 Literature Survey of Existing AID Algorithms
  3.2 AID Practices
  3.3 General Observations and Comments
Chapter 4 Methodology of the Study
Chapter 5 Simulation of Traffic Flow Under Incident and Incident-Free Conditions
  5.1 Simulation Program
  5.2 Simulation Model for This Study
Chapter 6 Data Sets
  6.1 First Series of Data Sets
  6.2 Second Series of Data Sets
  6.3 Data Set Names and Structures
  6.4 Effects of the Random Number Seed
Chapter 7 Development of the UBC AID System
  7.1 Shock Waves and Expansion Waves
  7.2 Basic UBC System
  7.3 Final Form of the UBC AID System
Chapter 8 Discussion of Results
  8.1 Comparison of Performances
  8.2 Detailed Performances
Chapter 9 Conclusions and Further Research
  9.1 Conclusions
  9.2 Further Research
Bibliography

LIST OF TABLES

Table 2-1 - Description of Incident-Like Traffic Events that Cause False Alarms (summarized from Chassiakos, 1992)
Table 2-2 - Definition of the Performance Measures
Table 3-1 - Characteristics of the 10 Versions of the California Algorithms (reproduced from Tignor and Payne, 1977)
Table 3-2 - Assessment Procedure in Stage #2 of the McMaster Algorithm for Stations Where Recurrent Congestion May Occur (from Hall et al., 1993)
Table 5-1 - Geometric Parameters Used in the Simulation Model
Table 6-1 - Traffic Volumes Used in the Simulation Data Sets
Table 7-1 - The Advantages and Disadvantages of Using Shock and Expansion Waves for an AID System

LIST OF FIGURES

Figure 2.1. An Example of the Probability Distribution Function of the Control Variable under Incident and Incident-Free Conditions
Figure 2.2. Examples of the pdf of the Control Variable Showing a) a Proper Choice of the Control Variable b) Poor Choices of the Control Variable
Figure 3.1. Decision Tree for an Equivalent Form of California Algorithm #8 (reproduced from Payne and Tignor, 1978)
Figure 3.2. Conceptualization of Traffic Operations on a Catastrophe Theory Surface (reproduced from Persaud and Hall, 1989)
Figure 3.3. Flow-Occupancy Template for the McMaster Algorithm (Hall et al., 1993)
Figure 3.4. Input and Output Features of the MLF Used by Ritchie and Cheu (1993)
Figure 5.1. Selected Study Site along the Trans-Canada Highway
Figure 5.2. Link-Node Diagram and Highway Geometry for the Study Site
Figure 5.3. Variation of Occupancy along the Freeway for Incident-Free Cases
Figure 7.1. Fundamental Flow Diagram and Effects of an Incident
Figure 7.2. Effects of an Incident on Traffic Flow Variables
Figure 7.3. Effects of a) Demand and b) Incident Severity on Shock and Expansion Waves
Figure 7.4. Variations of the Lane Occupancy for a Typical Lane-Blocking Incident
Figure 7.5. Occupancy Variations at the First Upstream and Downstream Stations for a Typical Incident
Figure 7.6. General Scheme of the Proposed System
Figure 7.7. The First Prototype of the UBC AID System
Figure 7.8. Some Examples from Detectors' Readings
Figure 7.9. Operating Characteristic Curves of Various AID Methods for the Simulated Data - "XXXAXX" Series, Lane Closure Incidents
Figure 7.10. Proposed UBC AID System
Figure 8.1. Comparison of the DR and ADT as a Function of FAR (XXXAXX)
Figure 8.2. Comparison of the Number of Incidents Detected as a Function of Time Elapsed after the Onset of Incident (XXXAXXe)
Figure 8.3. Comparison of the ADT as a Function of Time Elapsed after the Onset of Incident (XXXAXXe)
Figure 8.4. Comparison of the DR as a Function of Time Elapsed after the Onset of Incident (XXXAXXe)
Figure 8.5. Detection Rates as a Function of Incident Zone, Incident Location, and Time of Day
Figure 8.6. Distribution of Highest, Lowest, and Weighted Average Volumes of the Study Site as a Function of Time of Day
Figure 8.7. Distribution of False Alarm Rates Experienced in Various Zones and Times of Day
Figure 8.8. Average Detection Time as a Function of Location and Time of Day
Figure 8.9. Number of Incidents Detected and Their ADT for Each Classifier and Location within a Zone
Figure 8.10. Percentage Contribution of the Two Classifiers to Incident Detection
Figure 8.11. Percentage of False Alarms Caused by Each Classifier

ACKNOWLEDGEMENT

My sincere gratitude is expressed to my supervisors, Professor F. Sassani and Professor F. Navin, for their invaluable advice and guidance during the many meetings that we had. Moreover, the fact that their support went beyond academic matters is deeply appreciated.
I would also like to thank the members of my supervisory committee, Professor C. de Silva, Dr. T. Sayed, and Professor K. Bury, for their useful input and remarks. My appreciation is also extended to my external examiner, Professor F. Hall of McMaster University, and my university examiners, Professor D. Cherchas and Professor G. Dumont, who carefully read the thesis and provided me with their helpful comments.

This project was mainly funded by the British Columbia Ministry of Transportation and Highways. I wish to thank Mr. Kitasaka, Mr. Miska, and Dr. Zhou of the Ministry for their suggestions and support during the course of this project. I also acknowledge the invaluable help of the Ministry Library Resource Centre in obtaining much of the literature. I would also like to thank Ms. Lee and Mr. Zhang, who ran part of the simulation and developed a program so that I could quickly plot and compare the detector signals. A special word of thanks is also expressed to Ms. Navin, who kindly reviewed and edited this thesis.

My words cannot express my gratitude to my kind and caring wife, Maria, who has always been there for me (even for my last-minute preparations!), and to our son, Ali, who in a special way taught me other ways of looking at things.

CHAPTER 1 INTRODUCTION

Traffic congestion and its effects have become part of everyday life in metropolitan areas. Congestion is divided into recurrent congestion, which exists during peak periods, and non-recurrent congestion, which frequently occurs as a result of incidents such as accidents, vehicle stalls, or maintenance activities. An estimate by the United States Federal Highway Administration (FHWA) (Lindley, 1986) has reported incident-related congestion as causing up to 60% of the total motorists' delay. This value is expected to grow to 70% by the year 2005.
This is because as freeways carry an ever-increasing volume of traffic and operate at or near capacity for long periods of time, more accidents will occur. Also, as increased maintenance and construction on aging freeway systems takes place, more lane closures will be necessary. Moreover, incident congestion is often unexpected by drivers and may lead to secondary accidents.

To reduce the negative effects caused by incidents, many transportation agencies are implementing freeway incident management systems. Incident management is a coordinated and planned approach to restore a freeway to normal operation after an incident has occurred (Dudek and Ullman, 1992). Incident management has several components, including:

•  Detection of the incident.
•  Verification and identification of the nature of the incident.
•  Dispatch of the appropriate type of response.
•  Provision of the necessary information for drivers about the incident and alternative routes.
•  Implementation of control strategies (such as ramp metering) to reduce demand upstream of the incident.

Incident detection is the first essential component of a traffic management system. Caltrans (the California transportation authority) has determined in a study that even under off-peak free-flow conditions, for each minute saved by early detection and removal of an incident, at least 4-5 minutes will be cut from the delays. During the peak hour, a few minutes saved in restoring capacity can save hours in accumulated delay time (Roper, 1990).

There are various techniques for incident detection, ranging from simple motorist call systems to electronic surveillance systems. Each technique has its advantages and disadvantages (Balke and Ullman, 1993). Calls from motorists using call boxes or cellular phones, highway patrols, and other "manual" means of detection are used every day to report incidents to traffic management centers. However, they are somewhat "spotty" in nature and require observers to be in the right place at the right time. Often the incident is detected only after time has been lost and a problem of considerable magnitude has already developed (Roper, 1990).
However, they are somewhat "spotty" in nature, and require observers to be in the right place at the right time. Often the incident is detected only after time has been lost and a problem o f considerable magnitude has already developed (Roper, 1990).  1  California Transportation Authority  2  The main advantage o f Automatic Incident Detection (ADD) techniques is that (at least potentially) they can overcome the deficiency o f the manual techniques. A I D systems, using real-time data coming from sensory stations spread along the freeways, are electronically "everywhere" at "all times".  Currently, no A I D system is being used in British Columbia. A I D has been identified as a key component in British Columbia Ministry o f Transportation and Highways' South Coast Region's Traffic Management Plan ( T M P ) . This thesis presents the results o f studies carried out as a research and development project on A I D systems for the British Columbia Ministry o f Transportation and Highways.  3  CHAPTER 2 BACKGROUND  In this chapter background information on traffic incidents and incident detection systems is presented. This includes definitions o f the terminology and a discussion o f the general concepts relevant to this study.  2.1 Incidents Incidents are defined as unusual non-recurrent events that temporarily reduce the roadway's effective capacity or increase traffic demand. Incidents, in a general sense, may be either predictable or unpredictable and include: •  Unpredictable - Accidents - Vehicle breakdowns - Roadside distractions, and - Spilled loads,  •  Predictable, - Major events (e.g., sport events), and - Construction and maintenance activities.  4  In this thesis, the term incident is only used to refer to the unpredictable occurrences.  
2.2 Traffic Flow Variables and Sensors

As will be discussed in the next chapter, apart from a few AID methods that directly "see" the incident, the majority detect incidents based on observed or estimated traffic variables. In this section, the main variables used in traffic flow analysis are introduced, and then the variables and sensors used in AID systems are explained.

Characteristics of traffic flow on a freeway may be described by variables that are very similar to their counterparts in fluid flow. The variables can be attributed either to individual vehicles or to the state of traffic as a whole. The variables that define the state of traffic are either "point measures" or "length measures". Point measures are defined for a specific point along the freeway. These variables have to be defined for a specified time, often by a simple averaging. The length measures, on the other hand, are defined for a section along the freeway.

Flow rate (q), often referred to as "flow" or, more usually, "volume", is simply defined using (N), the number of vehicles that pass a certain point, and (T), the time of counting, as:

    q = N / T    (2.1)

Volume is a point measure and is often expressed in "vehicles/hour". It may also be defined for a single lane, in which case the unit would be "vehicles/hour/lane". (In this thesis, as in most of the AID literature, the term "volume" is used.)

The average speed of the vehicles is defined in two ways, depending on the type of "mean" that is used. The "space mean speed", Us, is defined based on the average time taken by vehicles to pass a specific length. This is a length measure and mathematically reduces to the harmonic mean of the speeds of the individual vehicles, or:

    Us = N / (1/u1 + 1/u2 + ... + 1/uN)    (2.2)

in which ui = speed of vehicle i.

The "time mean speed", Ut, is a point measure in which the speeds of the vehicles passing a specific point within a time period are averaged. It is calculated as:

    Ut = (u1 + u2 + ... + uN) / N    (2.3)

in which ui = speed of vehicle i (measured at a point).
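As a concrete illustration of these three definitions, the sketch below (not from the thesis; the speed sample is hypothetical) computes flow rate, space mean speed, and time mean speed for one counting interval:

```python
# Illustrative sketch (not from the thesis): computing eqs. (2.1)-(2.3)
# for one counting interval from a hypothetical sample of vehicle speeds.

def flow_rate(n_vehicles, period_hours):
    """Eq. (2.1): q = N / T, in vehicles/hour."""
    return n_vehicles / period_hours

def space_mean_speed(speeds):
    """Eq. (2.2): harmonic mean of the individual speeds (a length measure)."""
    return len(speeds) / sum(1.0 / u for u in speeds)

def time_mean_speed(speeds):
    """Eq. (2.3): arithmetic mean of the individual speeds (a point measure)."""
    return sum(speeds) / len(speeds)

speeds = [88.0, 95.0, 102.0, 80.0, 110.0]   # km/h, hypothetical sample
q = flow_rate(len(speeds), 30.0 / 3600.0)   # 5 vehicles in a 30-second count
print(q)                                    # about 600 vehicles/hour
print(space_mean_speed(speeds))             # harmonic mean
print(time_mean_speed(speeds))              # arithmetic mean, 95.0 km/h
```

Because the harmonic mean never exceeds the arithmetic mean, the space mean speed computed this way is below the time mean speed for any sample with unequal speeds, which is why the two must not be used interchangeably.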
It is calculated as: N  in which U  2  = speed o f vehicle / (measured at a point)  In this thesis as in most of other AID literature, the term "volume" is used.  6  The units o f both mean speeds , like the speed o f individual vehicles are expressed in 3  "miles/hour" or "km/hour".  Another variable used in traffic flow theory is a length measure called "density". It is simply defined using the number o f vehicles ( N ) and (L ) length o f freeway on which those vehicles are spread as:  (2.4)  Due to difficulties in measurement o f density for on-line systems, a different variable that describes the degree o f the closeness o f the individual vehicles is defined. This point measure is called "occupancy" and is defined as the proportion o f the time that a point along the freeway is occupied by vehicles. This measure needs to be defined for a specified time period (T) and can be calculated as:  YJLL  (J)=  (2  .5)  in which, f = The time taken by vehicle / to pass over the point Occupancy is expressed as a percentage.  Time headway ( / j ) is another important point measure. Headway is the time separation o f (  one vehicle and the one following measured from a specific vehicle point (e.g., 3  In this thesis, hereafter speed refers to the time mean speed.  front  bumpers). Averaging this headway for a number o f vehicles passing during a specific time period gives the average time headway as:  A close look at the above formula shows that average headway is inverse o f traffic volume. I f just after a vehicle passes a point, one starts adding the headways, the numerator in this formula would be equal to total time required to pass the next ( N ) vehicles. However, when measuring the headway for pre-specified time periods, considering that the passage o f a vehicle would not necessarily coincide with the beginning or end o f the period, may lead to a situation where the above statement does not hold . 
In this case the product o f average 4  headway and volume (using a compatible set o f units) would be close to, but not necessarily equal to unity. The lower, the volume, the higher the deviation from unity could be. In traffic engineering handbooks, the headway corresponding to the first vehicle is defined such that the average headway remains the inverse o f volume . Depending on whether or not this point has 5  been considered in the calculations, the volume and average headway  6  may be treated as  dependent on or independent o f each other.  An exaggerated example for illustrative purpose could be presented as follows. If during a time period of 30 second two vehicles pass a reference point, the measured volume would be 2 vehicles/30 seconds (or 240 vehicles/hour). This obviously should correspond to 15 seconds/vehicle for average headway. However, if the vehicles pass 1 and 29 seconds after the start of the period, the average headway might be calculated as 28 seconds if only one headway is considered. It may also be any value, if the headway for the first vehicle and the one that has passed in the earlier time period is also included in the calculation. For the example presented in the footnote 4, the second headway is calculated by adding the first and last one-second of the period. This obviously leads to an average headway of 15 seconds/vehicle. In this thesis hereafter "headway" is used. 4  5  6  8  There are a number o f traffic models that discuss the relationship among the variables. These models are subjects o f traffic flow theories. Since none o f these models has been used in development o f the U B C A I D system, they are not discussed here. A general form o f these models will be briefly mentioned in section 7.1, when effects o f the incident on the traffic flow are discussed.  
Although some length measures such as density could be very important when working on identification o f congestion and incidents, they can not be directly measured by ordinary surveillance systems . Therefore, their usage is impractical for the purpose o f incident 7  detection and point measures are used to identify the state o f traffic at certain points along the freeway.  There are several types o f sensors and detectors available for A I D systems. They include a wide range o f sensing technologies including, video cameras , infrared (Grewal, 1992) and 8  ultrasonic sensors, (Yagoda and Buchanan, 1991) and even radar (Roe, 1991). However, the most widely used sensors in North America employ a simple technology to detect the presence o f the vehicles. These sensors are called loop detectors. They use magnetic loops whose magnetic field changes depending on the presence or absence o f vehicles passing over them. A computer that interrogates the loop with a frequency o f a few hertz will receive a binary signal whose value shows whether or not the detection zone is occupied by a vehicle.  It is possible to estimate density using video surveillance systems, but such estimation is not used in AID systems. A number of video based sensors that have been more popular outside North America will be presented in section 3.1.13. 7  8  The detector sends a series o f " F ' s or "0"s accordingly. This signal is processed by the computer to calculate various traffic parameters averaged during a time interval. The time intervals may vary from 1 second to 20, 30, 60 seconds, or more. The updating time for each calculated average variable could also be any value, but in most cases is 30 seconds. In this thesis, it is assumed hereafter that the sensors are o f the presence detection types similar to loop detectors. 
Obviously, whatever information available through this type o f sensors is also available from other types o f sensors and this therefore does not impose a limitation for the U B C A I D system.  In almost all o f the A I D algorithms, occupancy is used as one o f the control variables. Many A I D methods such as the California algorithms, work only based on the observed occupancy. A s stated before, occupancy is a measure o f density, but unlike density, it can be obtained easily using loop detectors. It can be calculated as the ratio o f number o f " 1 " signals to the total number o f signals in a pre-defined averaging interval. The fact that both the vehicle and the  detection zone have  some non-zero  lengths  introduces  a variable bias to  the  measurements.  Traffic volume or flow rate is also used in many A I D systems either as a control variable, or in order to enable the system to distinguish among light, moderate, and heavy traffic conditions. Volume is also easily obtained using loop detectors as the product o f number o f state changes in the pre-defined interval, and a constant that depends on the interval duration. Sometimes the loop spans more than one lane as in some roads in U K , (Bell and Thancanamootoo, 1988) which leads to under-counting. This is due to the possibility that two vehicles occupy the 10  detection zone at the same time. This bias depends on the flow rate and therefore should be compensated accordingly.  In A I D systems, speed is an important parameter as it changes significantly in upstream o f incidents. Nevertheless, unlike occupancy and volume, speed cannot be directly measured by one loop detector. However, in some traffic control and surveillance systems "speed traps" are used to obtain the speed. Speed traps consist o f two loop detectors closely spaced along the same lane. The speed o f vehicles can be calculated by dividing the known distance between the detection zones by its travel time. 
The travel time is the delay between the passage o f the vehicles over the two detectors. T o use this technique twice as many detectors are required, which will add to the hardware cost. However, at least potentially, the additional information obtained may enhance the performance o f the system. Another way o f estimating the average speed is to calculate it as volume divided by occupancy multiplied by some calibration factor. This calibration factor depends on the mean vehicle length and detector size and is usually assumed to be constant. However, Hall and Persaud (1989) showed that this is not always true, particularly in a transition from non-congestion to congestion which happens after the occurrence o f an incident.  Traffic energy is also another variable that was sometimes used in the older A I D algorithms such as in standard normal deviate algorithm (Dudek et al, 191'4). The idea comes from an analogy between traffic flow and flow o f a compressible fluid. It is defined as the product o f speed and volume. Sometimes speed is not measured directly, and energy can be calculated as (Cook and Cleveland, 1974):  11  (2.7)  One may also find several other parameters that are used in the ADD algorithms, mostly a temporal or spatial difference o f the above parameters.  Some o f these parameters are  discussed in more detail in the Chapter 3, which discusses existing A I D algorithms.  2.3 Incident Patterns The impact o f the incidents on freeway operation depends on many factors including •  Frequency  •  Location  •  Severity  •  Time o f day  •  The level o f usage o f the facility  Incident management systems aim to reduce the congestion and delay caused by incidents. Therefore, they are more interested in early detection o f incidents that cause greater impact. If an incident occurs with no lane blockage in a light traffic condition, the incident may have little effect. 
On the other hand, if an incident blocks some lanes during a peak demand period, the delay would be quite extensive.

The incidents that have a greater impact on traffic also have a higher likelihood of detection by AID systems. As will be discussed in Chapter 3, most incident detection techniques try to find an abnormal traffic pattern that develops as a result of an incident. Therefore, it is necessary to categorize these traffic patterns. This will help differentiate between the cases that are "easy to detect" and those that are almost "undetectable" by most AID systems.

Payne and Tignor (1978) divided the traffic patterns following an incident, depending on its nature and the traffic conditions at the onset of the incident, into the following five types:

1. The capacity at the site of the incident is less than the volume of oncoming traffic. (Capacity here refers to the highest possible volume for the road; see Figure 7.1.) This is the most distinctive incident pattern for incident detection algorithms. In this case, a queue develops upstream, while a region of light traffic develops downstream. When traffic is flowing freely before the occurrence of the incident, this pattern is clearest. A typical case occurs as a result of a severe accident that may block one or two lanes.

2. The prevailing traffic condition is freely flowing and the impact of the incident is less severe. This pattern is typically observed when an incident has been moved to the shoulder, or in a case of one-lane blockage for which the reduced capacity is still higher than the upstream demand. Other than small queues close to the incident location, the traffic pattern will not change very much. Such cases may be missed by incident detection algorithms, depending on how far the incident location is from the neighboring detector station.

3.
The prevailing traffic condition is freely flowing and the impact of the incident is not noticeable in the traffic data. A typical case is an incident during the night, or a stalled vehicle on the shoulder under low-volume conditions. Such incidents do not cause queues, and therefore there is almost no observable effect on the traffic pattern; thus, ordinary incident detection algorithms are not expected to detect such incidents.

4. In heavy traffic, the capacity at the incident site is less than the volume of traffic downstream. This case happens in an already congested segment of the freeway (such as a secondary accident in a queue that has already developed because of another incident). It causes a reduction in the demand downstream and hence clears the downstream region. However, clearance of the downstream region usually happens slowly, and there is a minor or no effect on the upstream unless the accident is severe. Therefore, some incident detection algorithms might be able to detect it, but only after the situation has developed for a while.

5. In heavy traffic, the capacity at the incident site is not less than the volume of traffic downstream. This case is very similar to the fourth case, but because the capacity after the incident is more than the downstream volume, it has much less effect on the traffic pattern. Generally, the effects of such an incident are local and are not expected to be detected by incident detection methods unless the downstream congestion diminishes for some other reason.

2.4 Incident-Like Traffic Patterns

Various traffic events may produce traffic disturbances similar to those of incidents. Such events are the major sources of false alarms by AID systems. These events and their descriptions are given in Table 2-1.
Table 2-1 - Description of Incident-Like Traffic Events that Cause False Alarms (summarized from Chassiakos, 1992)

Bottlenecks (recurrent congestion)
  Description: Formed where the freeway cross-section changes, at a lane drop or addition, at an entrance ramp with a substantial on-ramp traffic volume, or at freeway interchanges.
  Observed pattern: Long-lasting spatial density or occupancy difference between upstream and downstream stations.

Traffic pulses (in uncongested flows)
  Description: Created by platoons of cars moving downstream; may be caused by a large entrance ramp volume lasting for a finite duration.
  Observed pattern: An increase in occupancy at the upstream station, followed by a similar increase in the downstream occupancy.

Compression waves
  Description: Occur in heavy, congested traffic, usually following a small disturbance, and are associated with severe slow-down, speed-up vehicle speed cycles. Compression waves are the major sources of false alarms.
  Observed pattern: Sudden, large increase in occupancy that propagates through the traffic stream in a direction counter to the traffic flow.

Random traffic fluctuations
  Description: Frequently observed because of the random nature of traffic.
  Observed pattern: Short-duration, usually not high, peaks of occupancy.
[10] Sometimes in off-line evaluations another definition is used, in which the denominator represents only the incident-free part of the data. On the other hand, Levin and Krause (1978) have defined FAR as the percentage of false incident messages out of all incident messages occurring during a specified time.

[11] Sometimes the starting time is defined as its apparent time from the traffic data at the upstream and downstream stations. This needs a subjective decision and, moreover, the resulting figure would be less than the actual value.

As will be described later, the values of these performance measures of an algorithm are not fixed for a given traffic condition. They are often a function of the selected thresholds. Therefore, a better way to describe the performance of an algorithm is to draw an operating characteristic curve. This curve shows the variation of the detection rate (DR) with changing false alarm rate (FAR).

As was stated earlier, the performances of AID systems are usually measured by detection rate, false alarm rate, and average detection time. However, these three measures are not
In many o f the cases the abnormal traffic pattern vanishes within a couple o f minutes i f not caused by an incident. Therefore, persistence checks that are used by some algorithms can filter out such patterns. This provides the opportunity o f decreasing false alarm rate while maintaining the detection rate constant. However, persistence checks also increase the average detection time that in turn cause increased congestion due to late detection.  The choice o f the thresholds and delay time that determines D R , F A R and A D T are related to the policies o f the traffic management centers. It seems that from the operational point o f view, a certain maximum for F A R should be set beyond which, it would be impractical to respond to the alarms. This will be discussed further in Section 3.3.  17  2.6 Calibration The role of calibration can be explained by means of an example. Consider a method in which only one control variable is used and only one test against some threshold value is required. It can be assumed that a high value for the control variable represents an incident condition and a low value represents an incident-free flow. To give a physical sense to this example, the control variable can be assumed to be the spatial difference between the occupancy of two adjacent stations. This example is intentionally oversimplified to clarify some points about calibration, strength, performance, and robustness of the AID algorithms.  As discussed earlier, there are some incident-like traffic events that may produce similar changes in the traffic as most incidents do. On the other hand, based on incident type, location with respect to the detectors, road capacity and volume, some incidents may not produce significant changes in the measured traffic parameters.  Assuming a uniform traffic flow and an incident-free condition, the occupancy of all the detector stations should be the same. This should produce a zero value for the selected control variable. 
However, due to the random nature of the traffic flow parameters, the zero should be replaced by a random variable with a zero mean. On the other hand, after an incident occurs and its effects are sensed at the immediate detector stations, a significant difference between their occupancies should be experienced. Therefore, under the incident condition, a positive value for the mean of the control variable can be assumed. Figure 2.1 shows the probability distribution functions (PDF) for the control variable under both incident and incident-free conditions. As this figure shows, the two PDFs always have some overlap. In this simple example, a single test of the control variable against the threshold is needed. No matter where the threshold is placed, there will always be a probability of missing incidents or producing false alarms. The choice of threshold only defines the trade-off between these rates. For proper calibration, either a knowledge of the PDFs under both conditions is required, or a trial and error approach has to be employed.

[Figure: two overlapping PDFs of the control variable, one for the incident-free condition and one for the incident condition. A looser threshold gives a better DR but a higher FAR; a tighter threshold gives a better FAR but a lower DR. The overlap region marks the probability of missing incidents and the probability of giving a false alarm.]

Figure 2.1. An Example of the Probability Distribution Function of the Control Variable under Incident and Incident-free Conditions

In this example, the proper choice of the control variable is such that the overlapping region is as small as possible (see Figure 2.2). A small overlap not only makes the two conditions more distinguishable, and therefore produces better performance, but it also decreases the sensitivity of the performance to the calibration. To produce the best performance for this method, it should be noted that the means and variances of these probability functions are not constant.
Not only are they functions of the detector locations and spacing, but also of the traffic, time, and weather conditions. Therefore, ideally, a knowledge of all of these variations is required to optimally calibrate the algorithms and/or update the thresholds. If all or some of these variations are ignored, the two PDFs used will have a larger overlap because of the greater uncertainty involved.

Figure 2.2. Examples of PDFs of the Control Variable Showing a) Proper Choice of the Control Variable and b) Poor Choices of the Control Variable

This example can now be generalized to show that the same problems exist in all of the AID methods. To increase the performance and the degree to which the incident and incident-free conditions are "distinguishable" for the method, one may use more than one control variable (such as in California algorithm #1). The persistence checks used by some algorithms can also be regarded as additional control variables. Additional control variables increase the dimension of the above visual representation of the method. If we use two control variables along with two thresholds, we can still visualize the probability density functions as two 3D Gaussian surfaces whose overlap defines a region in 3D space. Although functions of more than two variables are difficult to visualize, one may continue to use the same concepts from Figure 2.1 and Figure 2.2. Obviously, in such cases the abscissa does not represent any specific control variable, but rather a generalization of several variables. Since the horizontal axis does not represent any specific variable, the position of the threshold does not have a physical meaning. However, the movement of the threshold to the left or right can still be taken to loosen or tighten the conditions, respectively. The size of the overlap region still represents how distinguishable the two conditions are for the method.
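The threshold trade-off discussed above can be made concrete with a small numerical sketch. The Gaussian shapes and the means and standard deviations below are hypothetical values chosen purely for illustration; real control-variable distributions would have to be estimated from field data.

```python
# Illustrative sketch of the threshold trade-off between missed incidents
# and false alarms. All distribution parameters here are hypothetical.
from math import erf, sqrt

def normal_cdf(x, mean, std):
    """P(X <= x) for a normal distribution with the given mean and std."""
    return 0.5 * (1.0 + erf((x - mean) / (std * sqrt(2.0))))

# Hypothetical PDFs of the control variable (e.g. spatial occupancy
# difference): zero mean when incident-free, positive mean under an incident.
FREE_MEAN, FREE_STD = 0.0, 4.0
INC_MEAN, INC_STD = 12.0, 5.0

def error_rates(threshold):
    """Return (P_false_alarm, P_missed_incident) for a given threshold."""
    p_false_alarm = 1.0 - normal_cdf(threshold, FREE_MEAN, FREE_STD)
    p_miss = normal_cdf(threshold, INC_MEAN, INC_STD)
    return p_false_alarm, p_miss

for t in (2.0, 6.0, 10.0):
    fa, miss = error_rates(t)
    print(f"threshold={t:4.1f}  P(false alarm)={fa:.3f}  P(miss)={miss:.3f}")
```

Moving the threshold to the right lowers the false alarm probability but raises the probability of missing an incident, which is exactly the trade-off the overlap region represents.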
The indirect AID algorithms differ in the number and type of the control variables. They also differ in the pre-processing of the control variables, in how the variables are used, and in their level of sophistication. The process can be quite visible to the user or be done in a "black box" form. Sometimes the choice of thresholds may not be clear to the user, but the concept might still be applicable. As an example of the latter, the SND[12] and ARIMA[13] algorithms have been described as dynamically updating their thresholds. However, even in this case, the number of standard deviations that forms the confidence interval plays the same role as our generalized threshold does in the example.

[12] The SND algorithm is described in Section 3.1.3.
[13] The ARIMA algorithm is described in Section 3.1.7.

The size of the overlap region, which is defined primarily by the choice of control variables and the logic, defines the ideal expected performance, or strength, of the AID method. The actual performance of the system, as was stated earlier, depends on the choice of thresholds or parameters, whether they are general or location specific, and whether they are updated according to the changing conditions. A robust AID system should be able to perform well in a wide range of traffic conditions. On the other hand, the knowledge required for an ideal calibration generally does not exist. Usually only one set of thresholds is used for all locations, and except in a few cases, no updating mechanism is used. Therefore, the performance of the system at any time and any location depends very much on how close the system is to its optimal condition.

CHAPTER 3 EXISTING AID METHODS AND PRACTICES

In this chapter, first a review of the AID literature is presented to explore the logic used by the existing AID methods. Then, in Section 3.2, the AID practices of a number of North American traffic authorities will be discussed.
A discussion of the general findings will also be presented in Section 3.3.

3.1 Literature Survey of Existing AID Algorithms

During the past three decades, there have been extensive research efforts to develop or enhance AID methods. In a literature search, about forty research groups were identified (Razavi, 1995). The AID methods can be categorized in several ways. In this thesis, they are divided into the two categories of "indirect methods" and "direct methods".[14]

The majority of the AID methods are of the first category, which "indirectly" detect incidents based on their impact on the traffic flow. They do so by recognizing unexpected changes in the traffic data measured by the sensors. These measurements can be made using different sensor technologies, but the most popular choice in North America is inductive loop presence detectors. They provide a cheaper but slower incident detection system. Slower, because no matter what algorithm is used, there will always be a lost time until the effects of the incidents arrive at the sensor locations.[15] Another minor drawback is that most of these AID methods cannot detect incidents in light traffic, because there is almost no change in the traffic parameters other than near the incident. For the same reason, detection of incidents in light traffic is of a low priority for traffic authorities.

[14] No specific terminology for these methods was found in the literature. These names were selected for the purpose of this thesis.

"Direct methods" refers to a few methods that use image processing techniques to detect stopped vehicles by interpreting the scene image. These methods actually "see" the incident rather than detecting it through its effects. Potentially they can be much faster than the first category in detecting incidents. These methods also perform well in detecting incidents in light traffic conditions.
However, they may also need closer spacing between detecting stations (cameras), which makes providing sufficient coverage more costly. Environmental conditions may also affect the performance of these methods. This category will be briefly reviewed in Section 3.1.11.

There have also been some research activities in the development of AID algorithms that do not belong to the "mainstream" methods. They include:
• Methods designed for arterial streets;
• Methods targeting detection of incidents in light traffic; and
• Methods that use less conventional technology.

[15] The UBC method presented here is of this category. However, as will be discussed in Chapter 7, it has the potential to substantially reduce this time lag. It does so by targeting effects that can be sensed much earlier than those targeted by other "indirect methods".

For the sake of completeness, these methods are also mentioned in Section 3.1.14. However, the most important group of "indirect methods" are those that are often designed for a medium level of traffic volume on freeways and use data from presence detectors. Most of these are presented in Sections 3.1.1 to 3.1.12. Several methods that are of greater importance to this thesis will be discussed in some detail. Their reported performance measures, however, will not be discussed. This is because, as will be seen later, these measures are not necessarily compatible, and comparisons made based on different sources could be misleading.

3.1.1 California Algorithms

The California algorithms (Tignor and Payne, 1977) are some of the earliest incident detection methods and are still widely used. These methods were developed by the California Department of Transportation to be used in the Los Angeles freeway surveillance systems. In the original California algorithm, the occupancies at any two adjacent detector stations are compared for significant temporal and spatial differences.
Three such differences have to be tested against pre-defined thresholds as follows:

OCCDF > T1;  OCCRDF > T2;  and  DOCCTD > T3

where,
i, i+1 = upstream, downstream stations (respectively)
OCC(i,t) = 1-minute occupancy at station i, for time interval t
OCCDF = OCC(i,t) - OCC(i+1,t)
OCCRDF = OCCDF / OCC(i,t)
DOCCTD = [OCC(i+1,t-2) - OCC(i+1,t)] / OCC(i+1,t-2)
T1, T2, T3 = thresholds set by the user

At any time the system is in one of the following two states:
• Incident-free state, or
• Incident state.

An incident will be triggered if all three conditions are satisfied within the same time interval. The three thresholds are station-specific and have to be determined in the calibration stage.

This algorithm was very easy to implement, but it had a high false alarm rate. This was because there was no mechanism to differentiate between a real incident and a compression wave or a short-lived flow disturbance. Therefore, the Federal Highway Administration initiated a research study to develop improved incident detection algorithms with better false alarm performance. Consequently, nine modified versions of the California algorithm were developed. They were all defined based on decision trees with states. Table 3-1 identifies the traffic features and characteristics of these algorithms (Tignor and Payne, 1977).

To eliminate the false alarms due to short-lived disturbances, a persistence check was developed by Payne et al. (1976). More states were used to enable the system to check the persistence of the disturbance for a pre-specified number of time intervals before triggering an alarm. This check decreased the false alarm rate, but obviously at the price of an increased average detection time.
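The three-condition test and the persistence check described above can be sketched as follows. The threshold values, the two-interval persistence requirement, and the helper names in this sketch are illustrative assumptions, not calibrated values from any deployment.

```python
# Sketch of the original California three-condition test, plus a simple
# persistence check of the kind added in the modified versions.
# Thresholds and the persistence count are illustrative only.
def california_check(occ_up_now, occ_dn_now, occ_dn_2min_ago, t1, t2, t3):
    """Return True if all three California conditions are satisfied."""
    occdf = occ_up_now - occ_dn_now                    # spatial difference
    occrdf = occdf / occ_up_now if occ_up_now else 0.0
    docctd = ((occ_dn_2min_ago - occ_dn_now) / occ_dn_2min_ago
              if occ_dn_2min_ago else 0.0)             # downstream temporal change
    return occdf > t1 and occrdf > t2 and docctd > t3

def detect_with_persistence(samples, t1, t2, t3, persistence=2):
    """Trigger an alarm only after `persistence` consecutive positive tests.

    `samples` is a sequence of (occ_up_now, occ_dn_now, occ_dn_2min_ago)
    tuples, one per time interval. Returns the triggering interval or None.
    """
    run = 0
    for i, (up, dn, dn_old) in enumerate(samples):
        run = run + 1 if california_check(up, dn, dn_old, t1, t2, t3) else 0
        if run >= persistence:
            return i
    return None
```

Requiring two consecutive positive tests filters out one-interval disturbances, at the cost of (at least) one extra interval of detection time, as noted above.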
Table 3-1 - Characteristics of the 10 Versions of the California Algorithms (reproduced from Tignor and Payne, 1977)

1. Features: OCCDF, OCCRDF, DOCCTD. Comment: Essentially the California algorithm.
2. Features: OCCDF, OCCRDF, DOCCTD. Comment: Essentially the California algorithm with an incident continuing state.
3. Features: OCCDF, OCCRDF. Comment: Same as #2, but without the DOCCTD check.
4. Features: OCCDF, OCCRDF, DOCC. Comment: Same as #2, but use of DOCC replaces use of DOCCTD.
5. Features: OCCDF, OCCRDF, DOCCTD. Comment: Essentially the California algorithm with a check for persistence.
6. Features: OCCDF, OCCRDF. Comment: #3 with a check for persistence.
7. Features: OCCDF, OCCRDF, DOCC. Comment: #4 with a check for persistence; best simple algorithm.
8. Features: OCCDF, OCCRDF, DOCC, DOCCTD. Comment: Has the form of #4 plus checks for compression wave and persistence; especially effective in "stop-and-go" traffic.
9. Features: OCCDF, OCCRDF, DOCC, DOCCTD. Comment: #8 without a persistence check; especially effective in "stop-and-go" traffic.
10. Features: OCC, OCCRDF, DOCC, SPDTDF. Comment: Distinguishes two traffic regimes (light and moderate) for purposes of detecting incidents.

The features OCCDF, OCCRDF, and DOCCTD were defined earlier. DOCC is the downstream occupancy. SPDTDF is similarly defined in terms of the volume divided by occupancy at the upstream station.

Another source of false alarms produced by the original California algorithm in heavy traffic is the compression waves that occur in stop-and-go traffic. Compression waves produce a large, sudden increase in occupancy, which propagates with a speed of 6-15 mph (10-24 km/h) in a direction counter to the flow of traffic. Therefore, a few minutes after these waves have passed the downstream station they should reach the upstream detectors. This is the basis of the check for compression waves.
In algorithms #8 and #9, after a compression wave has been detected at the downstream station, the incident detection process is suppressed for the following five minutes.

All ten algorithms were evaluated using data obtained from the Los Angeles and Minneapolis freeway surveillance systems (Payne et al., 1976). Algorithms #7 and #8 performed better than the others. Algorithm #7 is similar to the original algorithm with a check for the persistence of the disturbance, but it uses DOCC (downstream occupancy) rather than DOCCTD. This means that, rather than a relative temporal change, the downstream occupancy is tested against the proper threshold. It was based on the observation that heavy traffic with compression waves rarely produces a downstream occupancy below 20% (in the Los Angeles data), whereas incidents generally produce downstream occupancies substantially less than 20%. This algorithm has been identified as the best simple algorithm (Tignor and Payne, 1977).

Algorithm #8 has the advantage of a check for compression waves, as described earlier. The detection of the compression waves is based on DOCCTD, the relative temporal difference in the downstream occupancy. The false alarms produced by this algorithm or #9 were essentially due to bottlenecks, and the compression wave detection was found to be effective (Tignor and Payne, 1977). The decision tree for an equivalent form of algorithm #8 is shown in Figure 3.1. Its states are: 0 = incident free; 1 = compression wave downstream this minute; 2-5 = compression wave downstream 2-5 minutes ago; 6 = tentative incident; 7 = incident detected; 8 = incident continuing.

Figure 3.1.
Decision Tree for an Equivalent Form of California Algorithm #8 (reproduced from Payne and Tignor, 1978)

3.1.2 Exponential Smoothing Algorithms

The sudden flow-state changes observed during incidents suggest the application of short-term forecasting techniques for detecting irregularities in a time series of traffic data. Whitson et al. (1969) proposed the use of a moving average of the most recent 5 minutes of volume data as a forecast variable, with confidence limits determined from the variance of the data. Cook and Cleveland (1974) used a double exponential smoothing technique to develop incident detection algorithms. With this technique, the forecast traffic variable ẑ(x,t) is a function of the past observed data, geometrically discounted back in time. They used a tracking signal, which is the algebraic sum of the previous estimation errors divided by the current estimate of the standard deviation. The tracking signal should fluctuate around zero because the predictions either match the data or compensate for errors in succeeding time periods. Detection is indicated by a significant deviation of the signal from zero.

The mean absolute deviation - used as an estimate of the standard deviation of the traffic data - is obtained by single exponential smoothing of the absolute values of the prediction errors, using a smoothing constant of 0.1:

m(x,t) = a·|e(x,t)| + (1 - a)·m(x,t-1)    (3.1)

where
e(x,t) = error of prediction = z(x,t) - ẑ(x,t)    (3.2)
a = smoothing constant.

The variable forecast ẑ(x,t) is computed by double exponential smoothing with a smoothing constant of 0.3, and the tracking signal is found as follows:

TS(x,t) = y(x,t) / m(x,t-1)    (3.3)

where
y(x,t) = y(x,t-1) + e(x,t) = cumulative error.    (3.4)
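The tracking-signal logic of Equations 3.1 to 3.4 might be coded along the following lines. Brown's double exponential smoothing is used here as one standard formulation of the one-step forecast; Cook and Cleveland's exact implementation may differ in detail, and the initial MAD value is an arbitrary illustrative choice.

```python
# Hedged sketch of the tracking signal of Eqs. 3.1-3.4. The forecast uses
# Brown's double exponential smoothing as an assumed formulation; the
# smoothing constants follow the values quoted in the text (0.1 and 0.3).
class TrackingSignal:
    def __init__(self, first_value, alpha=0.1, beta=0.3):
        self.alpha = alpha               # smoothing constant for the MAD (Eq. 3.1)
        self.beta = beta                 # smoothing constant for the forecast
        self.s1 = self.s2 = first_value  # single and double smoothed values
        self.mad = 1.0                   # mean absolute deviation (illustrative start)
        self.y = 0.0                     # cumulative error (Eq. 3.4)

    def update(self, z):
        """Feed one observation z(x,t); return the tracking signal (Eq. 3.3)."""
        forecast = 2.0 * self.s1 - self.s2       # Brown's one-step forecast
        e = z - forecast                         # prediction error (Eq. 3.2)
        ts = (self.y + e) / self.mad             # TS(x,t) uses m(x,t-1)
        self.y += e                              # Eq. 3.4
        self.mad = self.alpha * abs(e) + (1 - self.alpha) * self.mad  # Eq. 3.1
        self.s1 = self.beta * z + (1 - self.beta) * self.s1
        self.s2 = self.beta * self.s1 + (1 - self.beta) * self.s2
        return ts
```

For a steady series the signal stays near zero; a sudden, sustained change in the input, as in the flow-state change at an incident, drives it away from zero.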
In their investigation, they also included the original California algorithm and a group o f five algorithms called T T I . The latter had been developed earlier by Courage and Levin (1968) at the Texas Transportation Institute as part o f the Lodge Freeway Corridor study. They concluded that the most effective detection algorithms were the exponential algorithms using station occupancy, station volume, and station discontinuity (a variable that is based on a comparison o f the kinetic energies o f individual lanes at a station).  3.1.3 Standard Normal Deviate Algorithm A control logic for the automatic operation o f safety warning signs at three locations on the G u l f Freeway in Houston was developed by Dudek and Messer (1973). This was not an A I D algorithm, but it was responsive to stoppage waves and its ability to detect shock waves was reported as being very satisfactory. Later, in 1974, they selected a simple statistical approach to develop a station based automatic incident detection algorithm (Dudek et al., 1974)  31  It was assumed that a high rate o f change in the control variable reflects an incident situation as distinguished from a problem caused by a geometric bottleneck. They proposed use o f the standard normal deviate o f the control variable for the control function. The idea was to evaluate the trends in the control variable (e.g., occupancy, energy) and to recognize when the variable changes rapidly.  The SND is a standardized measure o f the deviation from the mean, in units o f the standard deviation and is expressed by the following relationship: x- x SND =  s  (3.5)  Where x = value o f control variable at time t, x - mean o f the control variable over previous n sampling periods, 5 = standard deviation o f control variable over previous n sampling periods.  Therefore, a large SND value would reflect a major change in operating conditions on a freeway.  Dudek et al. 
(1974) incorporated this simple method into the previously developed algorithm for the detection of a stoppage wave (Dudek and Messer, 1973). When a shock wave was detected by the latter algorithm, the SND technique would trigger the alarm if the threshold was exceeded. Two operational strategies were used and evaluated. Strategy A required one SND value to be greater than its threshold, while strategy B required two successive values to be critical. This persistence check prevented some of the false alarms at the cost of a longer detection time.

3.1.4 Bayesian Algorithm

Another approach to incident detection was proposed by Levin and Krause (1978) from the Illinois Department of Transportation. They developed a single-feature algorithm using Bayesian statistical techniques. The traffic parameter used in their system was OCCRDF, which was originally used in the California algorithms. They used mathematical techniques to find the optimal threshold which, when exceeded in four consecutive time periods, triggered an alarm.

This method needs the following three databases for its implementation:
• an incident database,
• an incident-free database, and
• historical data on the type, location and effects of the incidents.

The first database is needed to develop f(Z/U1), which represents the frequency distribution function of the feature Z during an incident situation. A similar function f(Z/U0) for incident-free situations can be developed using the second database. Levin and Krause (1978)
of minutes in the time period)  (3.6)  Clearly the probability of not having any incident is: (3.7)  P(U ) = l-P(U ) 0  1  For any feature value Z, (threshold), the probability of obtaining an incident signal can be expressed as follows: P(l) = P(U ))f(Z 0  I U )dZ + P(U, ))f(Z 0  I U,)dZ  Where bo and bi are the upper bounds for functions f(Z/U ) 0  (3.8)  andf(Z I £/,) respectively.  The probability of not having an incident signal, P(0), can also be found in a similar way. Then, by applying Bayesian considerations, expressions for the following probabilities can be calculated: P(incident 11)  Probability that an incident has occurred, given that an incident signal "1" was output.  P(no incident 10)  Probability that no incident has occurred, given that a non-incident signal "0" was output.  34  The optimal threshold ( Z , ) can be obtained by maximizing the expression: P (incident 11) + P(no incident 10)  (3.9)  Theoretically, this optimization procedure for Z can be repeated for / . ( Z / £/,) to give a set i  of optimal thresholds, where " i " represents consecutive determined time intervals after the detection o f an incident. However, by selecting a feature that is stable, and for which there are no statistically significant differences between the consecutive distribution functions, only one threshold value could be used. Levin and Krause (1978) selected OCCRDF features, considering that OCCRDF  among seven  is very stable and shows considerable difference between  its values before and during the incidents.  The application o f the Bayesian concepts can also be extended to the case o f a string o f signals. The evaluation o f the probability that an incident has occurred, given an n-signal string can provide the decision-maker with more reliable information. Obviously, this comes with the price o f an n-minutes delay, which has practical limitations.  3. 
3.1.5 HIOCC Algorithm

The High Occupancy method (HIOCC) was developed by the Transport and Road Research Laboratory as part of their Automatic Incident Detection systems to be used on the freeways in England (Collins et al., 1979). HIOCC is a local algorithm that uses occupancy data and tries to detect the presence of stationary or slow-moving vehicles as a sign of an incident downstream.
A n estimate o f the average speed in each lane can be found from the  36  measurement o f the time delay o f observing the same traffic pattern in the downstream station (travel time o f individual vehicles). This is done using the following five steps:  1. Measure V (I), a vector containing the 40 most recent values o f the upstream instantaneous flow. 2. Measure V^it), a similar vector containing the 40 most recent values o f the downstream instantaneous flow. 3. Compute DU(i) = V 4.  -V (t-i),  T dwn(t)  up  for i = 1,2,-40  Smooth DU(i) by the formula: MATCH(i) = Q • DU(i) + (1 - Q) •  MATCH(i)  old  5. Compute the speed by ^VT /  max  Where: D = the distance between two consecutive detector stations ^max ~  t  n  e v  a  l  u  e  at which MATCH(i) achieves its maximum  The estimated speed will be compared to the pre-determined lane-specific upper and lower thresholds. T o reduce false alarms due to the short-lived disturbances, the alarm is not triggered until, for a pre-specified number o f consecutive intervals, the estimated speed falls outside its allowable range.  37  3.1.7 AB1MA  Algorithm  In 1977, (Ahmed and Cook, 1977) reported the results o f their study on using the B o x Jenkins technique to develop a forecasting model for freeway traffic stream variables. In their study, they analyzed volume and occupancy data from three freeway surveillance systems in Los Angeles, Minneapolis, and Detroit. They evaluated the performance o f their proposed forecasting model against three other smoothing models: moving average, double exponential smoothing, and Trigg and Leach adaptive model (Trigg and Leach, 1967). They found that an ARTMA  1 6  (0,1,3) model could represent freeway time-series data more accurately than the  other models.  Later, Ahmed and C o o k (1982) presented a station based methodology for the automatic detection o f incidents on freeways based on the developed A R J M A (0,1,3) method. 
They stated that previous A I D systems had two main problems: high false alarm rates and a need for threshold calibration. They suggested that both problems are related because the threshold levels are not adjusted according to factors that cause the variations in traffic conditions. Therefore, they recommended that an accurate real-time estimation o f these variations could potentially improve the performance o f the A I D systems. Occupancy was selected as the key variable and a confidence interval was constructed by selecting two standard deviations away from the corresponding point estimate. The alarm was triggered i f the observed occupancy value fell outside the confidence interval. The confidence interval was defined as: !  1 6  ; + i  ( ± ) = !,(1)±2CT  (3.10)  Auto Regressive Integrated Moving Average  38  and  X (1) = X - 6V,_, (1) - e e _ (1) - 0 e _ (1) t  t  2  t  2  3  t  3  (3.11)  Where X  = Traffic occupancy observed at time (t +1),  M  X  M  (±)  = Approximate 95 percent confidence limits for  X (1)  = Point forecast made at time t,  e _ (1)  = Forecast error made at time (t - 1 ) ,  t  t  x  X, M  0 ,0 ,0  = Parameters o f a moving average operator o f order 3, and  ex.  = Estimate o f the standard error o f the white-noise variables.  l  2  3  3.1.8 APID Algorithm The A l l Purpose Incident Detection algorithm ( A P I D ) is one o f the two algorithms that were developed for use in the C O M P A S S traffic management system on Highway 401, in the Toronto metropolitan area. This algorithm is section based and is essentially a composite version o f the California algorithms. It uses both occupancy and the speed in its calculations. The A P I D algorithm is composed o f the following major routines as described by Masters et al. 
(1991):

• General incident detection routine (used in heavy traffic),
• Light traffic incident detection routine,
• Medium traffic incident detection routine,
• Compression wave test routine, and
• Persistence test routine.

The APID algorithm uses OCCDF, OCCRDF, DOCCTD, DOCC, and SPDTDF, all used originally in the California algorithms. The last one is defined using speed (SPD) as:

SPDTDF = (SPD(i, t-2) - SPD(i, t)) / SPD(i, t-2)    (3.12)

In APID, based on an initial test of the downstream occupancy, one of the three detection routines for light, medium, or heavy traffic is first selected. This gives more flexibility to the whole system, because each routine uses a different algorithm and a different set of threshold values. In heavy traffic, the general incident detection routine is used, which tests the same parameters as California algorithm #4. It also checks for persistence and the compression wave, if these tests are enabled. In medium traffic, another routine is activated that tests OCCRDF and SPDTDF against their thresholds and performs the persistence check. Masters et al. (1991) have not given the details of their routine for the light traffic condition.

3.1.9 McMaster Algorithms

The McMaster algorithm (Gall and Hall, 1989) was developed by the traffic research group of the Department of Civil Engineering at McMaster University. This method is based on the application of catastrophe theory to freeway traffic states.

It had been found earlier that speed is not always a continuous function of occupancy and volume, even when they evolve smoothly with time. Often, speed happens to "jump" down when the traffic state goes into the congestion zone. Navin (1986) suggested that this discontinuous phenomenon in traffic patterns could be explained by catastrophe theory. Later, the traffic research group at McMaster University developed a catastrophe model for traffic data patterns.
They used 30-second traffic data from the Queen Elizabeth Way in Ontario to quantitatively validate their model (Dillon and Hall, 1987). It was shown that data gathered upstream of incidents, while in transition to congested operation, fit the catastrophe model much better than the conventional models. Figure 3.2 shows a conceptual view of the so-called catastrophe plane and data points for both congested and uncongested operation. As shown in this figure, the uncongested operations are confined to a tightly defined line on the edge of the catastrophe plane. On the other hand, there is considerable scatter in the congested operations. This is quite different from the conventional view that these operations happen on an inverted "U"-shaped curve. The catastrophe model also allows the jumps to occur across the edge of the catastrophe plane while flow and occupancy change gradually.

Based on their model and observations of the 30-second data upstream of some six incidents, Persaud and Hall (1989) suggested that it was possible to develop a new AID method. The key element in the McMaster algorithm is a template that is drawn in the occupancy-flow plane. Figure 3.3-a shows this template, which is constructed using historical data. There are three main lines in the template, which define the boundaries of the four major areas of the plane. The most important line is a curved one that divides the plane into congested and uncongested regions. This line corresponds to the imaginary edge of the cusp catastrophe plane where the jumps may occur. In practice, historical data are used to define this line as the Lower bound of Uncongested Data (LUD). The other two lines, O_crit and V_crit, correspond to the so-called critical occupancy and critical volume, respectively.
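A minimal sketch of such a template lookup might look as follows. The LUD curve, O_crit, and V_crit used here are hypothetical placeholders; in the actual algorithm these boundaries are calibrated from historical station data, and the exact area definitions follow Figure 3.3-a.

```python
# Hedged sketch of a McMaster-style flow-occupancy template lookup.
# The boundary values below are hypothetical, not calibrated ones.

O_CRIT = 20.0      # critical occupancy, percent (illustrative)
V_CRIT = 1200.0    # critical volume, veh/h (illustrative)

def lud(occupancy):
    """Lower bound of Uncongested Data: a hypothetical curved boundary
    giving, for each occupancy, the lowest uncongested flow."""
    return 80.0 * occupancy - 1.5 * occupancy ** 2

def template_area(flow, occupancy):
    """Classify one 30-second (flow, occupancy) observation into areas 1-4
    (one plausible reading of the four-area template)."""
    if occupancy <= O_CRIT and flow >= lud(occupancy):
        return 1                                   # uncongested operation
    if flow < V_CRIT:
        return 2 if occupancy <= O_CRIT else 3     # congested, low flow
    return 4                                       # congested, high flow
```

In the first stage of the algorithm, a congestion candidate arises when such a lookup returns area 2, 3, or 4 for several consecutive intervals.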
The original McMaster algorithm was developed based on this template (Aultman-Hall et al., 1991), but recently two further subdivisions (in areas 1 and 2) have also been added (Hall et al., 1993). These further subdivisions are used particularly to enable the system to detect incidents within a section that may experience recurrent congestion. The calibration of this template has also been substantially changed since it was first suggested. The details of the calibration process for the old and new templates are given by Persaud et al. (1990) and Hall et al. (1993), respectively.

Figure 3.2. Conceptualization of Traffic Operations on a Catastrophe Theory Surface (reproduced from Persaud and Hall, 1989)

Figure 3.3. Flow-Occupancy Template for McMaster Algorithm (Hall et al., 1993): a - Template for a normal station (flow versus occupancy, %); b - Template for a typical station affected by recurrent congestion

The McMaster algorithm is a two-stage method. In the first stage, the traffic state is checked against the template to see whether it is in the congested zone (areas 2, 3, or 4) or the uncongested zone (area 1). If the traffic stays in the congested zone for some number of consecutive 30-second intervals (usually 3), then the next stage, the Cause of Congestion Logic (CCL), is activated. This delay before the start of the next stage is a mechanism for a persistence check.

The CCL tries to distinguish congestion caused by an incident from congestion due to other causes. To do so, it also checks the traffic state at the downstream station using the same template. There are two other possible causes of congestion. One is recurrent congestion near entrance ramps; for example, if the upstream station is in the congested zone and the downstream station is in area 4, the cause of congestion is identified as a bottleneck.
The other case happens when there is secondary congestion, which represents the extension of primary congestion to the next station in a sequence. Table 3-2 presents the outcome of the CCL stage in the latest version of the McMaster algorithm (Hall et al., 1993).

Hall et al. (1993) also proposed the use of different templates for stations immediately upstream and downstream of an entrance ramp, where recurrent congestion is often experienced (Figure 3.3-b). In this case, all the categories under area 4 of Table 3-2 should also be changed to "congestion" accordingly.

Table 3-2 - Assessment Procedure in Stage #2 of McMaster Algorithm for Stations Where Recurrent Congestion May Occur (from Hall et al., 1993)
(Columns: volume-occupancy area* of the station being checked; rows: volume-occupancy area* of the downstream station. Most of the area labels are illegible in the source scan; only the last column and last row, area 4, are legible.)

no con. | no con. | con. | con.     | con.     | con.
no con. | no con. | con. | incident | incident | incident
no con. | no con. | con. | incident | incident | incident
no con. | no con. | con. | incident | incident | incident
no con. | no con. | con. | con.     | con.     | con.
no con. | no con. | con. | con.     | con.     | con.

* see Figure 3.3-a; no con. = no congestion; con. = congestion

This method is unique both because of the model used to describe the traffic operations and because of its "two-dimensional" view of classifying the traffic state. The authors were among the first to address the need to achieve false alarm rates low enough to be operationally feasible. Hall et al. (1993) have reported good detection rates with extremely low false alarm rates.

In a separate study, Forbes (1992) proposed a modified McMaster algorithm. In this algorithm, he used the same logic for the first stage, but the second stage (CCL) was changed to a new logic.
His proposed CCL was based on the assumption that congestion due to an incident causes a rapid change of speed, as opposed to recurrent congestion, where speed changes continuously. Therefore, his proposed CCL included a number of tests on speed and its temporal difference.

3.1.10 Methods using Artificial Neural Networks

The use of artificial neural networks is one of the recent approaches toward the development of automatic incident detection methods. Artificial neural networks attempt to imitate the learning abilities of the human brain. The concept of neural networks was first introduced by Rosenblatt (1958). Rumelhart et al. (1986) presented the learning process and several applications of such networks. These networks have been successfully used for many pattern classification problems ranging from engineering to financial applications. Chang (1992) from the Texas Transportation Institute developed a prototype for an AID system that used neural networks. Two papers have also been published by Ritchie et al. (1992, 1993) from the University of California, who used a similar approach to develop an AID system.

Artificial neural networks usually consist of several layers of so-called neurons (or processing elements) that are connected to each other. Each neuron is a simple processing element that receives inputs from other neurons or from outside the network and in turn produces some output. The massive interconnections among the neurons transmit the output of the neurons of one layer to those of the other layers. Each connection is also assigned a weight factor that defines how strongly a message is sent to the other neuron. Contrary to conventional algorithms, there is no explicit memory for the instructions and data; everything is embedded in the connection weights.

The structure of neural networks, the number of layers, and the number of neurons in each layer vary based on the type of network and its application.
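The neuron-and-weights computation described above can be sketched as a simple forward pass. The sigmoid activation and the threshold-subtracting neuron model are common conventions assumed here, not the specific configurations used in the cited studies.

```python
import math

def sigmoid(x):
    """A common squashing activation; an assumption, not the cited networks'."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, thresholds):
    """Each neuron sums its weighted inputs, subtracts its threshold, and
    squashes the result; the outputs feed the next layer."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) - t)
            for ws, t in zip(weights, thresholds)]

def forward(inputs, network):
    """network is a list of (weights, thresholds) pairs, one pair per layer."""
    for weights, thresholds in network:
        inputs = layer(inputs, weights, thresholds)
    return inputs
```

Training (for example, by backpropagation) would adjust the weights and thresholds; in this sketch they are simply given.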
Multi-Layer Feed-forward (MLF) networks are one of the most common types of such networks, in which the output of each neuron is carried in only one direction (forward). Figure 3.4 shows the input and output features of the MLF network used by the researchers at the University of California. The three layers shown in this network are usually called the input, hidden, and output layers, respectively. (When counting the layers in a network, some researchers do not count the input layer, because the elements in this layer do not process the input signal and only pass it to the second layer, where the processing starts.)

The weights for each link, as well as the thresholds associated with each node, are learned through a process called "training". During the training phase, a large number of data patterns and their expected outputs are presented to the network to find the best set of weights and thresholds, which are later used in the "recall" phase. Among the various training procedures, "Backpropagation" is the one often favored for classification-type problems where the convergence time does not have to be limited or the network size is not very large. A total of 200 data sets of simulated incidents were used in the Backpropagation training of the AID system proposed by Ritchie et al. (1993). They used another 200 data sets to test the performance of the method.

3.1.11 Methods using Fuzzy Logic

Since the introduction of fuzzy set theory by Zadeh (1965), and particularly in the last decade, there has been extensive research into using the concept and its mathematical formulation in various applications. The use of fuzzy logic has been very successful where decision-making involves uncertainty by nature. It replaces "exact reasoning" with "approximate reasoning", in which variables can be members to "some degree", and decisions can be true or false to "some degree".
Figure 3.4. Input and Output Features of the MLF used by Ritchie and Cheu (1993)
Inputs: upstream occupancy at t, t-1, and t-2; upstream volume at t, t-1, and t-2; downstream occupancy at t, t-1, t-2, t-3, and t-4; downstream volume at t, t-1, t-2, t-3, and t-4.
Outputs: state 1 (incident free); state 2 (incident).

In 1994, researchers from the Texas Transportation Institute (TTI) proposed using fuzzy logic to develop new AID systems (Chang and Wang, 1994). They suggested that, when comparing control variables, "crisp thresholds" could be replaced by "fuzzy membership functions". They targeted the California #8 method, modifying its binary decisions with fuzzy decisions. They defined four membership functions for the OCCDF, OCCRDF, DOCCTD, and DOCC variables used in California #8. These variables could then be "LOW", "MEDIUM", and/or "HIGH" to "some degree", as defined by the membership functions. The rules used in California #8 would then be transformed into fuzzy rules. An example of a fuzzy rule is:

If Last_S is INC and OCCRDF is HIGH then State is Cinc

Where Last_S and State are fuzzy variables, and INC and Cinc refer to the incident and continuing-incident states.

While methods using fuzzy logic overcome the problem of defining "crisp" thresholds, the problem of defining the membership functions replaces it. This often requires some subjective decisions to be made by the designer. Neuro-fuzzy systems overcome this, as well as cases where defining the rules also poses a problem. In neuro-fuzzy systems, as the name implies, the learning capabilities of neural networks are combined with the approximate reasoning capability of fuzzy logic.
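The idea of a variable being HIGH "to some degree", and the example rule above, might be sketched as follows. The breakpoints of the membership function and the use of the minimum for the fuzzy AND are common fuzzy-logic conventions, not the calibrated TTI functions.

```python
# Hedged sketch of fuzzy membership evaluation for the example rule.
# The breakpoints (0.2, 0.5) are illustrative, not calibrated values.

def mu_high(occrdf, lo=0.2, hi=0.5):
    """Degree to which OCCRDF is HIGH: 0 below lo, 1 above hi, linear between."""
    if occrdf <= lo:
        return 0.0
    if occrdf >= hi:
        return 1.0
    return (occrdf - lo) / (hi - lo)

def rule_cinc(mu_last_s_inc, occrdf):
    """'If Last_S is INC and OCCRDF is HIGH then State is Cinc':
    the AND is taken as the minimum of the two membership degrees."""
    return min(mu_last_s_inc, mu_high(occrdf))
```

The rule's output is itself a degree of membership, so the continuing-incident state can hold to "some degree" rather than being strictly on or off.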
In some cases, they are designed such that the membership functions and fuzzy rules are learned through a training process.

Hsiao et al. (1994) published a paper in which they suggested a neuro-fuzzy approach for an AID system. They formulated the problem as a classification task in which having an incident at any time could be "possible" or "impossible". Their proposed method, the Fuzzy Logic Incident Patrol System (FLIPS), identifies the optimal input-output membership functions based on training examples that are constructed from historical data.

FLIPS is essentially a local or "station-based" AID system whose input consists of volume, speed, and occupancy. The output of the system estimates the possibility of having an incident downstream of the sensor location where the traffic variables were collected.

3.1.12 Other "Indirect" Methods for Freeways

The methods discussed so far are mainly the "better known" algorithms that are often referred to in the AID literature. In this section, a number of other "indirect" AID methods that have also been cited in the literature are discussed.

In 1980, researchers from MIT took a system identification approach to incident detection (Willski et al., 1980). They used a macroscopic dynamic model describing the evolution of the spatial-average traffic variables (velocities, volumes, and densities) to develop two detection algorithms based on Multiple Model (MM) and Generalized Likelihood Ratio (GLR) techniques.

Bottger (1979) developed a method in which the spatial forecast of traffic volumes was used as the control variable. This forecast was calculated through an analysis of the speed distribution and the traffic volume at an upstream detector station. The forecast was then compared against a threshold to decide whether or not the alarm should be triggered.

Cremer (1981) also developed an incident detection method based on a simplified Kalman filter.
The system used a so-called disturbance volume as the control variable against a threshold. The control variable was calculated by the use of an aggregate traffic flow model to explain the decrease of capacity during an incident.

A cross-correlation technique was developed by Busch (1986). In this method, it was attempted to estimate the speed of the compression waves by calculating the cross-correlation function of the upstream and downstream time series of traffic density. The estimated speed was then compared to a threshold to trigger the alarm. The alarm could also be triggered if, for a certain time period, the algorithm failed to find a reliable estimate of the speed.

Busch and Fellendorf (1990) also developed a so-called general scheme for incident detection. This method was actually a combination of a number of already developed methods. After evaluating a number of existing algorithms (including the three previous methods, the California algorithms, and exponential smoothing), they used two section algorithms along with a local algorithm to build their general scheme. The two previous methods (the Kalman filter and cross-correlation techniques) were used as the section algorithms, while an exponential smoothing technique was used as the local algorithm.

Kuhne (1984, 1989) used a continuum model for freeway traffic flow in which the equilibrium speed-density relationship was relaxed. He then used this model to develop an AID method. In his method, the anticipated time development of speed downstream was calculated from the development of speed upstream. The anticipated values were then compared to the actual downstream measurements for an indication of an incident.

As a result of a joint project by the Australian Road Research Board (ARRB) and VicRoads, initiated in 1988, the AID method ARRB/VicRoads was developed (Snell et al., 1992; Sin, 1992).
This method employed three sets of conditions to trigger an alarm. These comparisons were based on:

• upstream and downstream traffic parameters (occupancy, volume, and speed);
• adjacent-lane traffic parameters; and
• temporal differences of traffic parameters.

Another simple comparative method was developed by Chassiakos from the University of Minnesota as his Ph.D. thesis. This method, called DELOS (Stephanedes et al., 1992; Chassiakos and Stephanedes, 1993a), was based on the application of two simple low-pass filters to the spatial difference of the upstream and downstream occupancies. The first filter used a 3-minute moving average to remove random fluctuations. If the result from this filter was high enough to indicate congestion, then the second filter, a 5-minute moving average, was used to check the occupancy spatial difference prior to the three-minute duration. This was to distinguish between recurrent congestion and congestion due to an incident.

Two other AID methods were also developed at the University of Minnesota based on the information available through a video-based loop emulator called Autoscope. These methods were called the Speed Profile Incident Evaluation System (SPIES) and the Autoscope Incident Detection Algorithm (AIDA) (Michalopoulos et al., 1993a, 1993b). SPIES used a pair of speed traps and compared volume-smoothed values of upstream and downstream speed for a significant difference. The AIDA system, as claimed by the authors, combined the strengths of McMaster and SPIES, as well as taking into account the temporal changes of traffic variables. (Please refer to section 3.1.13 for video-based sensors that emulate loop detectors.)

A recent AID method for homogeneous freeways was developed at the University of California at Berkeley by Lin (1995) as his Ph.D. thesis. This method used the cumulative difference of upstream and downstream occupancies as its only control variable.
This was then compared with a number of linear bounds that progress in time. These bounds act as thresholds that change with time. By analyzing the conditions under incident and non-incident situations, he showed that a finite number of comparisons was sufficient to make the decisions.

3.1.13 Image Processing Techniques and Direct Methods

As described earlier in section 3.1, an alternative to the "indirect methods", which detect the incidents' effects rather than the incidents themselves, is provided by methods incorporating image processing techniques. The video cameras installed along freeways can "see" the incidents and hence transmit much better information. There have been a few attempts to use the scene image and interpret it directly to detect incidents. Europeans have been at the forefront of research and development of such systems. INVAID is the name of a project that was part of the large DRIVE European program to improve road safety and transport efficiency (Keen and Hoose, 1990). Two different "computer vision tools" were developed as part of the INVAID project. These two tools are:

• TITAN (Blosseville et al., 1989, 1993), and
• IMPACTS (Hoose, 1989, 1991; Hoose et al., 1992).

TITAN was designed to detect and track individual vehicles moving on freeways. The information provided by the vehicles' trajectories is analyzed to detect patterns that could be valid indications of a traffic incident. IMPACTS uses a different approach and provides a qualitative description of traffic similar to what human operators could produce. Each of these two systems has strengths and weaknesses; however, they can be used together to complement each other. Applications of an integrated system and a full-scale field test called INVAID-II are described by Guillen et al. (1992) and Guillen et al. (1993).

As reported recently by Chang et al. (1993), a Japanese system has also been developed that detects incidents directly. (The authors of that report have put the specifics of the reference, a paper by Tsuge et al., in their bibliography; the author of this thesis was unable to find the actual paper.) According to this report, in the Japanese AID system the individual vehicles are identified by subtracting the digital image from that of the background scene. The consecutive processing of these subtractions shows the motion of the vehicles which, if stopped for a time threshold of two seconds, is attributed to an incident.

It should be noted that there are a number of other research efforts that use image-processing techniques for freeway surveillance systems but only to provide traffic variables; therefore, they actually emulate loop detectors by placing virtual sensors on the road. (There are some exceptional cases where a system has both capabilities, such as ARTEMIS, the Automatic Road Traffic Event Monitoring Information System (Blissett et al., 1993).) The following are among such systems:

• A Belgian system developed by Theuwissen et al. (1980) from the Catholic University of Leuven. A commercial version of it, called the Camera and Computer-Aided Traffic Sensor (CCATS), has been developed (Versavel et al., 1989).
• Traffic Research using Image Processing (TRIP), a UK project by a joint team from the University of Manchester Institute of Science and Technology and the University of Sheffield (Waterfall and Dickinson, 1984). An upgraded model (Dickinson and Wan, 1989) has also been developed.
• The Traffic analysis Using Low cost Image Processing (TULIP) system, another British method developed at the University of Newcastle-upon-Tyne (Rourke and Bell, 1988).
• A Japanese system called the Image Data Scanner and Controller (IDSC) developed at the University of Tokyo (Takaba et al., 1984).
• A video-based vehicle presence detector developed by the Australian Road Research Board (Dods, 1984).
• The commercially available Autoscope system designed at the University of Minnesota (Michalopoulos, 1991).

3.1.14 Other Methods

There are still a number of other AID research works that are of less relevance to the present study. Therefore, they will only be briefly mentioned in this section.

Most of the AID algorithms have been designed to detect incidents on freeways. However, a few researchers have also tried to develop methods for arterial streets. Vehicles normally move in platoons while travelling on arterial streets because their flow is interrupted. This poses a bigger challenge to an AID method. Researchers from four universities have addressed this problem. They include:

• Han and May (1990a, 1990b) from the University of California at Berkeley;
• Ivan et al. (1993) from Northwestern University;
• Chen and Chang (1993a, 1993b) from the University of Maryland; and
• Bell and Thancanamootoo (1988) from the University of Newcastle upon Tyne, later followed by Bretherton and Bowen (1991) from the Transport and Road Research Laboratory (TRRL) in the MONICA project.

The traffic management authorities are mainly interested in the detection of incidents under medium to heavy traffic conditions, because they try to avoid the congestion caused by such incidents. However, for safety reasons, attempts have also been made to develop methods that target incidents under light traffic conditions. A method by Fambro and Ritch (1980) from the Texas Transportation Institute (TTI) and one by Yagoda and Buchanan (1991) developed for the Lincoln/Holland tunnels are among these methods. The PATREG method discussed in section 3.1.6 has also shown its best performance under lighter traffic conditions.
There have also been proposals to use travel information from a number of designated vehicles to find indications of congestion and incidents. This approach needs a technology called "Automatic Vehicle Identification" (AVI) or "Vehicle to Roadside Communication" (VRC). Two such methods have been proposed in papers by Hallenbeck et al. (1992) and by Parkany and Bernstein (1993).

3.2 AID Practices

In this section, information about a number of existing AID systems and related surveillance facilities in the United States and in the Toronto area is presented. Most of the information is the result of site visits by Balke (1993) for a research project for the Texas Transportation Institute. Additional information has also been obtained by the author from several departments of transportation in the United States.

3.2.1 Los Angeles, California

The freeway surveillance and control system in the Los Angeles area covers more than 264 miles of freeways and is staffed and operated 24 hours a day. The traffic data obtained are occupancy and volume, updated every 30 seconds. Distances between the detectors range from half a mile in the core area to one mile or more in the outlying areas. Two versions of the California algorithms are used for the whole area, but the threshold values are zone-specific. The algorithms are selected based on the traffic conditions. In heavy traffic (essentially during daylight hours), California algorithm #8, which employs a test for the compression wave, is used. In lighter traffic conditions, the system is switched to California algorithm #5. Both of these algorithms are used in their original form, and no new algorithm has been added. According to Balke (1993), this is because of the lack of operational experience with other algorithms.
He has also stated that the operators mainly rely on reports by the CHP (California Highway Patrol) officers and not on the incidents detected by the computer. Also, in another project, Fait (1994) compared various means of incident detection in the Los Angeles area and found that, during the two months of his study, only 186 of the 1698 incidents were first detected by computer.

3.2.2 Orlando, Florida

The Traffic Management Center (TMC) in Orlando is operated by Florida Department of Transportation (FDOT) District 5. The TMC provides surveillance over 11 miles of the I-4 freeway. A total of 387 detectors are placed, in each lane of the freeway, at a spacing of about half a mile. A modified version of the California algorithm is used for detecting incidents, and CCTV cameras are used for verification. A new algorithm is going to be implemented that will compare the existing speed data with that of a historical database under the specific weather condition (i.e., wet or dry). If it shows a substantial difference, such as 10-15 miles/hour, an incident will be declared. The TMC also has a research contract with Professor Al-Deek from the University of Central Florida, who is working on the new algorithm. As a result of this project, one year's worth of 30-second data has been gathered.

3.2.3 Chicago, Illinois

The Chicago metropolitan area has been one of the pioneers in traffic surveillance and control in the United States for about three decades. In the past, several algorithms, including the California algorithms and the Bayesian method, have been tried by the Traffic System Center (TSC) of the Illinois Department of Transportation (IDOT) (Levin and Krause, 1979). According to Chassiakos and Stephanedes (1993), the McMaster algorithm has also been evaluated in an off-line test for potential implementation. After a difficult calibration period, good detection rates but unsatisfactory false alarm rates were reported in this test.
Currently, the TSC uses a very simple comparative AID method. This method uses the lane occupancies of adjacent stations over the last five minutes and compares them with some ten thresholds. An incident is declared when the last five elements of the upstream occupancy time series are greater than their thresholds and, similarly, the downstream occupancy values are smaller than their thresholds.

This system inherently has a detection time of over five minutes, but this has been preferred to higher false alarm rates. Consequently, most of the incidents are detected by other means before the AID system gives an alarm. On the other hand, Balke (1993) states that in the IDOT philosophy, the AID system is used as a secondary means of incident detection. It has been designed to help the operators spot possible incident locations that have not yet been detected. It is also used as a training tool for new operators. Therefore, the TSC relies on other means of detection and the experience of its operators for this purpose.

3.2.4 Minneapolis, Minnesota

Management and control of traffic on five interstate and five state freeways in the Minneapolis/St. Paul area is provided by the Traffic Management Center (TMC). TMC surveillance covers about 150 miles of these freeways. At each station, loop detectors are used to measure volume, occupancy, and speed in each lane (Balke, 1993). The measured values are averaged across all the lanes to obtain station averages. The data is compiled every 30 seconds and sent to the TMC. A modified California-type algorithm had been used in the past, when the TMC became operational, but because of high false alarm rates, its use was discontinued. Operating personnel stated that only one set of threshold values had been used when the system was operational. Therefore, during the visit by Balke (1993), no operational AID system existed, and detection of incidents was done by other means.
Researchers from the University of Minnesota have also used the facilities of the TMC for field tests and evaluation purposes, as a result of which the Autoscope system (Michalopoulos, 1991) and the DELOS algorithm (Stephanedes and Chassiakos, 1993a) have been developed.

3.2.5 Long Island, New York

Since 1987, the New York State Department of Transportation (NYDOT) has implemented a project called INFORM (INformation FOR Motorists) on a corridor system in Long Island. As part of this project, an extensive surveillance system based on loop detectors has also been installed (Balke, 1993). At first, one of the modified versions of the California algorithm was used for the detection of incidents. However, because of poor performance, particularly high false alarms, the use of this system was discontinued. According to Balke (1993), improper calibration could have been the cause of the problem: rather than using specific threshold values for each zone, only one set of thresholds might have been used for the entire system.

Balke (1993) stated that incident detection mostly depended on the experience of the operators, who monitored the traffic condition on a large color-coded wall map. This map showed the measured speed (for each segment of the road) with different colors, in which red meant a speed of less than 30 mph that might indicate either an incident or recurrent congestion. Operators had to use their experience to filter out the recurrent congestion. Generally, the operators would wait until a couple of "atypical" but consecutive red lights showed up, then start investigating the cause.

3.2.6 Toronto, Ontario

In early 1991, COMPASS, a Freeway Traffic Management System (FTMS), started its operation on Highway 401 in the greater Toronto area. It originally covered over 16 miles of Highway 401, with a minimum of 12 lanes, carrying more than 300,000 vehicles per day.
Installed detectors included both single and double loop detectors that measure speed, volume, and occupancy. At every station, occupancy and volume data were aggregated across all the lanes and then transmitted to the central computer. One of the goals of COMPASS was to provide fast AID, such that most lane-blocking incidents could be detected within the first three minutes of their occurrence (Korpal, 1992). To do this, the developed software was designed to take advantage of five different algorithms at once. At first, the APED (Masters et al., 1991) and DES (Cook and Cleveland, 1974) methods were used in this system. Later, switching to the McMaster algorithm (Hall et al., 1993) as the primary method for the main lanes was considered. Operational use of the McMaster algorithm began in late 1992, and the observed performance in the initial stage has been reported as satisfactory (Balke, 1993).

3.2.7 Northern Virginia

The Virginia Department of Highways and Transportation operates a Traffic Management System (TMS) that provides surveillance and control over 32 miles of I-66, I-395, and the Woodrow Wilson Bridge (Dunn and Reiss, 1991). The general idea behind this incident detection system is to detect any kind of congestion, rather than only congestion caused by incidents. When congestion is detected, the operator identifies its cause by visual inspection of the scene using CCTV cameras. The method used in Northern Virginia is a California-type algorithm that was developed by Sperry Systems Management. The system has been modified so that, in case of a detector malfunction, data from a historical database is used as a substitute for the real-time data. The time difference of downstream occupancy (DOCCTD) used in the California algorithms is not used in this system.

Balke (1993) states that the operators were relatively satisfied with the performance of the system and its calibration.
They also felt that the balance of the performance measures (DR, ADT, and FAR) was satisfactory. However, he also noticed that during his visit most of the incidents had been detected by the operators before the system could detect them. He also observed that the operators preferred to monitor the CCTV rather than the incident display screen.

3.2.8 Seattle, Washington

The Traffic Systems Management Center (TSMC) in greater Seattle provides surveillance and control on 76 miles of the I-5, I-90, I-405, and SR-520 freeways (Dunn and Reiss, 1991). Loop detector data is accumulated, and one-minute moving averages of the volume and occupancy are then calculated. The calculated data is used on a color graphic display that uses color codes to show the level of congestion. Operators monitor this display for signs of incidents.

In the past, the occupancy data was also used in a California-type algorithm for incident detection. However, because of unsatisfactory performance, its use has been discontinued. Drivers who had cellular phones would often call to report an incident within 2 or 3 minutes after its occurrence, while the AID system was less reliable and could take longer. Moreover, the algorithm had never been recalibrated since its original calibration. According to Balke (1993), it was felt that, to properly calibrate the algorithm for each freeway zone, some incident data from that zone is required. Although no AID system is currently used at the TSMC, it is believed that eventually an AID method will be used as an important extra tool in their incident management program.

3.3 General Observations and Comments

After reviewing the materials reported in the AID literature, one observes the following points.
• An enormous effort has been devoted to the development of AID systems in the last three decades.
• Despite the extensive research effort, only a few of these methods have been put into practice.
• Some of the traffic management centers have given up using their AID systems, while some others continue using similar ones.[21]
• Only in a few cases have AID algorithms been extensively compared with one another.
• The standard performance measures reported in these cases are somewhat contradictory.

[21] For example, in Seattle, Washington, the traffic authorities have abandoned their AID system, which used a California algorithm, while in Los Angeles, California, traffic authorities are using two versions of the same algorithm.

To clarify the probable causes of this contradiction in results, one should first notice that the performance measures are often incomparable, and any judgment based on them could be misleading. This is because these measures are not obtained under similar conditions. For example, the capability of most AID algorithms to detect an incident is strongly dependent on the following factors:
• Severity and duration of the incident;
• Operating conditions under which the incident has occurred;
• Detector spacing and the location of the incident with respect to the detectors; and
• Highway geometrical factors such as grade, lane drops, ramps, etc.

Moreover, the definition of an incident varies from one report to another. For example, in some cases, a stopped car on the shoulder is not considered an incident. Also, the calculation of average detection time is usually very subjective because the exact time of the incident is not known.

On the other hand, even when a research group has evaluated its method alongside some other methods using the same set of data, one may find large differences between the reported measures. There have been suggestions that this might be due to driving habits that change from place to place. Calibration could also be a major contributing factor in this regard.
It is reasonable to assume that in most cases a research group is expert in the calibration of its own method, while it may not have the same expertise with others' methods. All of these factors have contributed to a common belief that the results of AID systems are not "transferable".

Different reactions of traffic management centers to AID systems, even when they are using the same system, also require consideration. For example, all six of the sites in the United States visited by Balke (1993) had been using some type of California method; three of these sites have discontinued their use. This can be attributed to two major reasons.
• The author believes that the proper calibration of an AID method is as important as the selection of the method (if not more important). On the other hand, the calibration process for many of the existing methods requires considerable time and effort, and is location specific. A considerable amount of incident data is often needed, which has not necessarily been available for all sections of a highway. This may lead to poor performance when the necessary effort has not been put into the calibration process.
• It can also be assumed that the general expectation of an AID system, compared to other means of detection, differs from place to place. In Los Angeles, about 11% of the incidents are first detected by the AID system. While this may seem to be a very poor performance elsewhere, AID is still used by the operators in Los Angeles as an extra means of detection.

In other words, the actual performance of the AID system on one hand, and the level of performance expected by the traffic management authorities on the other, define the acceptance or rejection of the system. Performance is traditionally measured using three quantities (i.e., DR, FAR, and ADT). Among these measures, false alarm rate appears to be the critical one.
This is because it defines how often the operators would have to react to false alarms. If this occurs too frequently, the operators tend to ignore the alarms, among which could be true alarms due to an incident. Therefore, a maximum acceptable level of false alarm rate has to be set, beyond which the system would need re-calibration. An example may give some idea of the necessary order of magnitude for FAR. An AID system with an updating period of 30 seconds that employs 50 detector stations requires 100 decisions to be made every minute. Therefore, a seemingly low false alarm rate of 1% for such a system implies that operators would have to react to an average of one false alarm every minute.

The existence and ease of use of a verification mechanism for the alarms also plays an important role. The existence of a video surveillance system with full coverage of the site would greatly decrease the time required for the verification of the alarms. An integration of the AID system with video surveillance is easy to implement: right after an alarm, the image from the camera(s) closest to the scene of the suspected incident would appear on the monitor. This would allow the operator to quickly respond to the alarm by either rejecting it as a false alarm or initiating the appropriate, preplanned response. Successful implementation of such an integrated detection and verification system would also increase the operators' tolerance of false alarms. This in turn would positively affect the performance of the AID system, because a higher acceptable level of FAR leads to a higher DR and a lower ADT.

ADT can be considered the second most important measure of performance. The contribution of each means of detection depends on how quickly the incident can be detected.
Clearly, if it takes too long for the AID method to detect an incident, and it is therefore detected by other means, its detection rate becomes irrelevant. As an example, the operators in Seattle, who discontinued the use of their AID system, would receive cellular calls starting as early as two to three minutes after an incident had occurred.[22] It takes some time to process such calls and find the location of the suspected incident or congestion. In many cases, the callers are not sure of the source of congestion and/or cannot state their exact location on the freeway. Nevertheless, this shows that, considering the increasingly widespread use of cellular phones, there is a certain limit on detection time beyond which the detection of an incident by an AID system does not count. The author believes it would be safe to assume that after the first five minutes from the onset of an incident, it has been detected by some other means. Therefore, in calculating DR and ADT, one most probably does not need to consider beyond the first five minutes after the onset of the incident for all practical purposes. In addition, considering the above discussions, the ADT for the incidents detected within the first five minutes should preferably be under two minutes. This means that, in most cases, by the time the traffic management center starts receiving cellular calls, the operator has already verified the source of the congestion.

Based on the above discussions, one can see that in order to have a practical scenario in which the AID system is effectively used by the operator and is a major contributor to the overall detection system, the following steps are to be taken:
• Setting an acceptable level of false alarm rate depending on the number of stations involved, the staffing level, and the existence of a video surveillance system;

[22] Personal communication with Mr.
Bill Legg, Washington State Department of Transportation, Seattle, Washington, August 1994.

• Calibrating the AID system such that the expected FAR is at about the maximum level set earlier while the expected ADT is as low as possible;[23] and
• Evaluating the DR for the first few minutes to estimate the portion of the incidents that could be detected first by the AID system. Clearly, this figure needs to lie within the acceptable range for the traffic management authorities to justify its use.

The above discussion shows that many of the figures presented for the performance measures in the literature have no practical use. Only a slim minority of the methods is able to produce good enough detection within the first minutes with a low enough FAR. Direct AID systems can potentially detect incidents in a very short time. However, direct AID systems are more expensive to implement and operate because, in order to have reliable coverage, the camera intervals should be limited to 300-400 meters at best.[24] Moreover, their performance is sensitive to lighting and weather conditions. Therefore, direct AID systems can be used for critical locations where the frequency of incidents is higher, or the consequences are more severe, to justify the additional investment.

[23] In most AID methods, there is more than one threshold or parameter to be set; in other words, there is more than one degree of freedom involved in calibration and the resulting performance. This means that, at least theoretically, it is possible to maintain a FAR while the ADT varies within a limited range.

[24] At first, it may seem that the same type of cameras needs to be installed to provide enough coverage for verification purposes.

Putting aside the direct AID systems, one can see that the other methods suffer from a single drawback that causes a considerable time delay before detection. Virtually no AID method
would activate the alarm unless the congestion caused by the incident has been sensed at the closest upstream sensor station. Moreover, in many cases a persistence check adds to this delay, because the persistence check is often activated only after the congestion has been sensed at the upstream sensor location. The actual time delay, as discussed in Chapter 7, depends on many factors but is, on average, two or more minutes. Considering the previous discussions about the importance of detecting incidents within the first 2-3 minutes, it is easy to see that existing AID methods are very limited in achieving an effective operational performance. This study has tried to improve the expected operational performance by targeting this very limitation. Chapter 7 contains further discussion of this limitation of AID systems and how the method presented here overcomes it.

[24, continued] However, two points in this regard are important to notice. One is that the spacing between adjacent cameras used only for verification purposes could be two to five times what is needed to directly detect incidents. The second is that the technical requirements to optimize the function of cameras for detection and verification are not necessarily the same, particularly considering that for verification the cameras often need pan, tilt, and zoom capabilities, while for direct detection a stationary view of the scene is preferred.

CHAPTER 4

METHODOLOGY OF THE STUDY

The outline and methodology of this study will be presented in this chapter. The details of each step will be discussed in the chapters that follow.

As was shown in the literature review, it is not possible to compare the performances of various AID methods directly. This is because the reported performance measures are not reasonably consistent with each other.
This lack of transferability is mainly because geometric features, incident severity, and many other parameters that affect the performance have not been identical across these cases. Therefore, to have a reasonable assessment of the AID method developed in this thesis relative to other AID methods, it is necessary to compare them based on identical data sets.

The incident severity, its location, and the geometry of the highway affect the performance of a method. It is possible to observe a good performance from one method under certain conditions and yet have another method outperform the first under different circumstances. Therefore, it is necessary to select the data sets to cover a wide range of operational and geometric conditions so that the results are more representative of the performance of each method. This allows an assessment of the robustness of the AID systems, and of whether they can maintain their performance when used under varying conditions.

During the course of this study, real data from the sensors were not available and, therefore, it was decided to use simulated data. Care was taken at every step, from the selection of the simulation program to the assignment of the model parameters, so that the simulation model was as close as possible to the real traffic operation of a selected study site. Furthermore, the amount of simulated data was much larger than what is usually reported in the AID literature, to ensure that high confidence can be attributed to the results.

Simulated data, by nature, is a representation of real data. No matter how good the modeling, there is never a guarantee that the representation is an exact replica of the system. Despite the validity of the above statement, it is very important to notice that there is nothing in the simulation model that could be assumed to be biased towards or against any of the AID methods.
Therefore, the generated data should pose the same challenge to all of the methods. Consequently, although it is possible for the performance measures to differ from what could be obtained from real data, there is no reason why the order in which the AID methods perform should be any different.[25]

[25] This can be stated most confidently when there is a significant difference between the results of two competing methods. As will be shown in Section 8.1, such a significant difference is observed in the results (i.e., DR in the first 2-3 minutes after the occurrence of an incident, or FAR) of the method proposed here and two other existing methods.

Using simulated data brings some added advantages, particularly in the investigative stages of the study. One such advantage is that the number of incidents can be as high as desired. It is also possible to place an incident anywhere desired and under any circumstances. It also provides flexibility in designing the data sets to keep a number of factors constant while changing only one. This provides the opportunity to better analyze the results and identify contributing factors in the analysis.

A few of the existing methods had to be selected whose performance could be compared with that of the UBC AID method developed in this study. However, due to the lack of consistency in reported performances, it was not possible to identify the best existing algorithms. Therefore, to select methods for comparison purposes, the following two preferences were considered:
• Methods that have been operationally used and evaluated; and
• Methods whose exact or close reconstruction based on the published information was possible.

Among the many existing methods, only a few have been operationally used. Various versions of the California methods are among the most widely used AID methods. Traditionally, they have often been selected for comparison purposes in the literature.
The original developers evaluated California #7 and California #8 as the "best simple" and "most effective in stop-and-go traffic" algorithms, respectively (Tignor and Payne, 1977). These two versions of the California methods were selected in early trials of the UBC AID system.

The McMaster method, which has been favored and used in recent years by the Ontario Ministry of Transportation on the Queen Elizabeth Way and Highway 401, was also selected for comparison purposes. This method, with its unique approach, has shown a good detection rate with a very low false alarm rate. The method could be reconstructed, but its calibration is somewhat difficult.

The neural network approach developed by Ritchie and Cheu (1993) and the neuro-fuzzy approach by Hsiao et al. (1994) were two recent methods that were also considered because of their merits. However, since reconstruction of both methods required knowledge of some parameters (e.g., momentum and learning rate for the neural network) that were not available, they were not used in this study.[26] Their approaches will, however, be discussed later.

A basic form of the UBC AID method was developed, and its performance, as well as the performances of the selected algorithms (i.e., California #7, #8, and McMaster[27]), was tested. The results of the first series of tests showed that California #7 consistently underperformed California #8. This meant that for all ranges of conditions, California #8 performed better. Therefore, the remaining tests continued with only the other two methods.

The results of the first series of tests showed the feasibility of the idea behind the UBC AID system, as well as revealing some difficulties. The final form of the UBC AID system was then developed and tested with a new and expanded series of data sets.
[26] However, these approaches, as well as other indirect AID methods, are not expected to outperform the UBC AID method because of the limitations that will be discussed in Chapter 7.

[27] It should also be mentioned that initially the Double Exponential Smoothing (DES) method (Cook and Cleveland, 1974) was also programmed and tested, but very early results showed that its performance was by far worse than the others'. Therefore, even the tests with the first series of data sets for this method were abandoned.

The data sets were used to compare the performance of the proposed system with the performance of the California #8 and McMaster methods. A detailed analysis of the results also provided a comprehensive view of the robustness of the proposed system as the conditions change.

CHAPTER 5

SIMULATION OF TRAFFIC FLOW UNDER INCIDENT AND INCIDENT-FREE CONDITIONS

To assess the performance of the UBC AID method and compare it with the performance of California #8 and McMaster, a simulation of traffic flow under incident and incident-free conditions was necessary. In this chapter, the simulation program selected for this purpose is discussed first. Then the steps taken to build a simulation model based on the selected program are explained.

5.1 Simulation Program

Traffic simulation programs can be categorized, based on their level of detail, as either macroscopic or microscopic. Macroscopic simulation programs use traffic flow theory to model relations among flow parameters such as density, speed, etc. Microscopic simulation programs provide a higher degree of detail by modeling the behavior of individual vehicles. For the purposes of this study, it was necessary to use microscopic simulation, because it was required to generate detector signals as they would be collected in the real world.
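As an illustration of what "detector signals as they would be collected in the real world" means at the data level, the sketch below shows how per-vehicle passages over a simulated double-loop trap can be aggregated into the per-period station volume, occupancy, and speed that an AID algorithm consumes. This is an illustrative stand-in, not FRESIM's internal code; the loop spacing and vehicle records are invented for the example.

```python
# Illustrative sketch of how a simulated double-loop station turns individual
# vehicle passages into per-period volume, occupancy, and speed measurements.
# NOT FRESIM's internal code; loop spacing and vehicle records are invented.

LOOP_SPACING_M = 6.0  # assumed distance between the two loops of a speed trap


def station_measurements(passages, period_s=30.0):
    """passages: (time at loop 1, time at loop 2, loop-occupied time in s)
    for each vehicle crossing the station during one aggregation period.
    Returns (volume, occupancy in percent, mean speed in km/h)."""
    volume = len(passages)
    occupied_s = sum(on_time for _, _, on_time in passages)
    occupancy_pct = 100.0 * occupied_s / period_s
    speeds_ms = [LOOP_SPACING_M / (t2 - t1) for t1, t2, _ in passages if t2 > t1]
    mean_speed_kmh = 3.6 * sum(speeds_ms) / len(speeds_ms) if speeds_ms else 0.0
    return volume, occupancy_pct, mean_speed_kmh


# Three vehicles in one 30-s period; a 0.3-s loop-to-loop time over 6 m
# corresponds to 20 m/s, i.e. 72 km/h.
vehicles = [(1.0, 1.3, 0.25), (9.0, 9.3, 0.30), (20.0, 20.3, 0.28)]
vol, occ, spd = station_measurements(vehicles)
print(vol, round(occ, 2), round(spd, 1))  # 3 2.77 72.0
```

Aggregates of exactly this kind, computed every 30 seconds per station, are the inputs the AID methods compared in this study operate on.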
The TRAF family of simulation programs contains the most comprehensive and powerful simulation packages thus far developed for the United States Federal Highway Administration (FHWA). The TRAF series includes both macroscopic and microscopic simulation components for a range of applications including freeways, urban networks, and rural roads. FRESIM, the freeway microscopic simulation component, was selected for this study. It is the enhanced version of its predecessor, the INTRAS simulation program.

In FRESIM, individual vehicles are assigned characteristics such as position, speed, and acceleration. These characteristics change in response to other vehicles and geometric conditions as the vehicles move along the freeway. The freeway geometric conditions that can be directly represented in FRESIM, and that are more important in this study, include:
• One to five through-lane freeway mainlines with one- to three-lane ramps,
• Variations in grade, radius of curvature, and superelevation,
• Lane additions and lane drops anywhere on the freeway,
• Freeway blockages or capacity-reducing incidents (rubbernecking), and
• Auxiliary lanes, which are used by traffic to begin or end the lane-changing process, and to enter or exit the freeway.

However, a shortcoming of FRESIM is that it is not possible to directly model the effect of reduced lane width.

There are also a number of operational features incorporated in FRESIM (Federal Highway Administration, 1994).
Those features that were important in this study include:
• Comprehensive lane-changing model,
• Comprehensive representation of the freeway surveillance system,
• Representation of six different vehicle types, including two types of passenger cars and four truck types, each type having its own performance capabilities, such as acceleration,
• Heavy-vehicle movement, which may be biased or restricted to certain lanes,
• Differences in driver habits, which are modeled by defining ten different driver types ranging from timid to aggressive drivers, and
• Vehicle reaction to upcoming geometric changes: the user may specify warning signs to influence the lane-changing behavior of vehicles approaching a lane drop, incident, or off-ramp.

Vehicle type, driver type, turning movements, and other attributes are assigned to each vehicle as it is about to enter the freeway system. FRESIM uses pre-defined distributions and a random number generator to assign these attributes. To replicate the inherent randomness of the real world, a number of other events, such as lane changing and gap acceptance, are also simulated based on their probability of occurrence.

FRESIM uses car-following rules to determine a vehicle's acceleration or deceleration as a function of its driver type and the distance to and speed of the leading vehicle. After each time increment, new states are calculated for every vehicle in the system. At the beginning of the simulation, the freeway system is empty. During a so-called initialization period, the freeway segments are filled with vehicles, and the simulation continues until the interaction of the vehicles with each other and with the system geometry reaches an equilibrium point. Obviously, the random nature of the system does not allow it to reach an absolutely steady flow.
However, after a few minutes,[28] the input and the output of the system would be close enough that one can assume equilibrium has been attained.

[28] This refers to the simulated time.

In this study, it was necessary to generate traffic parameters such as occupancy, volume, and speed with double loop detectors as if they were generated by real sensors. The surveillance systems that can be simulated in FRESIM include single loops, double loops, and Doppler radar. They can provide the necessary data averaged over any required time period.

Considering the capabilities of FRESIM, it seemed well suited for this study. However, the output files generated by FRESIM are often very long and in a report-like format. A small program was written to extract the required traffic parameters from the output files generated by FRESIM.

5.2 Simulation Model for This Study

To build the simulation model for this study using FRESIM, the following steps were taken:
• Selection of the study site,
• Obtaining the required parameters, and
• Selection of the detector locations.

The following sections discuss each step of building the simulation model.

5.2.1 Study Site

The main objective of the simulation part of the study was to generate a large number of incident and incident-free data sets. This was to be done so that a wide range of conditions, including operational and geometric variables, could be covered in this study. Therefore, the first step was to select a study site that included as many geometric variations, on-ramps, off-ramps, and bottlenecks as possible within a relatively short length (3-4 km).

A site about 2 km east of the Grandview on-ramp in the eastbound lanes of the Trans-Canada Highway (TCH) in Burnaby, British Columbia, was suggested by the Ministry of Transportation and Highways.
The safety branch of the Ministry at the time was considering this site for installing detector stations because of its high frequency of incidents.

In this study, a larger portion of the TCH was selected that covered, as a part, the length originally suggested. The total length of the selected site is 3,640 m. It starts 290 meters west of the Boundary off-ramp (just after the Lougheed Highway underpass) and extends to 1,290 meters east of the Willingdon Street on-ramp, as shown in Figure 5.1. This section provides a higher variation in grade, curvature, and superelevation. It also includes a total of five ramps, of which a total length of 1,340 meters is considered part of the study site. They include the Boundary off-ramp, Grandview on-ramp, Willingdon off-ramps (South, North), and Willingdon on-ramp.

Figure 5.1. Selected Study Site along Trans-Canada Highway

5.2.2 Required Parameters

To build the input data sets for the simulation model, two main groups of data had to be found or assumed:
• The parameters that represent the geometry and traffic operations of the study site, which should remain constant during the simulation (e.g., geometrical parameters of the location); and
• The parameters that change from case to case and represent the variations that are the subject of the study (e.g., location and severity of the incidents).

In this and the subsequent section, the first group of data is discussed. The second group will be discussed when the data sets are explained.

FRESIM uses a link-node arrangement to describe the geometric and operational parameters of the freeway. Links represent one-directional segments of the roadway that have reasonably constant geometric features. Nodes represent locations in the roadway where geometrical features change significantly (e.g., ramp merges, changes in curvature).
In order to build the link-node diagram, five points along the TCH that represent the tips of the on- and off-ramps were identified. However, to identify other points, the following parameters were considered as well:
• Grade,
• Superelevation, and
• Curvature.

These parameters change continuously along the highway. To select node locations, these values had to be obtained as a function of the distance traveled from a base point along the highway. The Ministry's Photolog[29] system was used for this purpose, and in every frame (20 meters apart) each of the above parameters was recorded. The road scenes from the Photolog system were also used to find exit signs, starting or ending points of the lanes for the ramps, etc. This highly detailed data, along with survey maps, was then used to identify major points of change in curvature, grade, or superelevation to be used as nodes. However, the Photolog system was not available for the ramps. Therefore, only the survey maps were used to estimate the grade and the radius of curvature.

Consequently, a total of 19 nodes were selected along the freeway, and another 14 nodes were selected to represent changes on the ramp links. The link-node representation of the study site and its geometry are shown in Figure 5.2. The estimated geometric parameters used in the simulation model are presented in Table 5-1. Although the program is designed to work with both systems of units, in the available version the metric system had not been enabled by the FRESIM providers. Therefore, in this thesis a mixture of both units can be seen.

[29] The Photolog system consists of a number of laser videodiscs that contain pictorial information and engineering details of the highway system.
[Figure: link-node diagram showing the Boundary off-ramp, Grandview on-ramp, Willingdon off-ramps (S, N), Willingdon on-ramp, detector stations, node numbers, and incident locations. Note: link lengths and incident and detector locations are to scale; curvatures are not shown; angles are not correct. Trans-Canada Highway: 3,640 meters of the eastbound direction, from 290 m west of the Boundary off-ramp to 1,290 m east of the Willingdon on-ramp.]

Figure 5.2. Link-Node Diagram and Highway Geometry for the Study Site

Table 5-1 - Geometric Parameters Used in the Simulation Model

[Table 5-1 body: for each link, the odometer reading at the starting node, length (m), grade (%), superelevation (%), and radius of curvature (ft).]

The free-flow speed was another parameter that had to be specified for each link. This was set to 50 mph (80 km/hr), considering the posted speed on the highway. Among the ramps, only the Willingdon off-ramp (N) had a posted speed of 30 km/hr. For most of the ramps, this parameter was selected as 37 mph (60 km/hr). The exceptions were due to the following point. FRESIM does not allow the radius of curvature to be smaller than a specified value. However, for example, the Willingdon off-ramp (N) has a highly curved ramp segment that is not within the allowable range.
A closer examination o f the effects o f the curvature and superelevation shows that they are used to find some safe upper bound for the desired free flow speed. Therefore, in such cases, after calculating this bound, it can be entered directly as the desired free flow speed. This bound is calculated in F R E S I M manual (Federal Highway Administration, 1994) as:  V = Jl5R(e + f)  (5=1).  Where V =Upper bound for vehicle speed, mph R = Radius o f curvature, feet e = Rate o f roadway superelevation, feet/foot / = Coefficient o f friction for a given pavement condition  The traffic operation and the percentages o f trucks and heavy vehicles are also required for the simulation model. A 6% value was used in this model. This was estimated based on the actual counts from a tape recording at the site. Furthermore, it was observed that most o f the trucks were biased to the lane #1 . This means that they will only use the second lane when 30  passing another vehicle, after which time they return to the first lane. In F R E S I M , this bias o f the heavy vehicles can be incorporated.  In FRESIM, through lanes are counted from right to left. Therefore, in this thesis lane#l, or first lane refers to the rightmost through lane.  86  Traffic volumes entering each entry node or diverting off the freeway through off-ramps are also part o f the necessary data for the simulation. However, since the purpose o f this study is to evaluate the effects o f the changing conditions and since these volumes are changing by time, they are discussed later in the next chapter where data sets are presented.  In addition to the parameters discussed above, there are a number o f other parameters such as the duration o f time periods and time increments, etc. that can be arbitrarily selected. Still, there are a number o f other parameters, which can optionally be defined when building the simulation model. 
There are default values for such parameters that will be used when the user does not provide additional, more accurate data. Almost none of these parameters is usually available. Some typical cases are as follows:

•  Driver type distribution and car-following sensitivity factors;
•  Various vehicle types and their specifications (e.g., length, jerk value);
•  Maximum acceleration for each range of speed;
•  Coefficients of friction for various pavement types;
•  Lag to accelerate/decelerate;
•  Time to complete a lane change;
•  Lane-changing probability; and
•  Percentage of cooperative drivers.

These parameters show, at least potentially, how comprehensive the simulation model can be. However, when they are not available, the default values have to be used, and the sensitivity of the results to most of these parameters is unknown at present.

5.2.3 Detector Locations

Locating the surveillance systems is one of the major steps in defining the simulation model for the study of AID algorithms. A search of the literature found only a few cases in which the effects of detector spacing on the performance of some specific AID methods had been studied (Goldblatt, 1980), and no general guideline concerning the location of the detectors, particularly with respect to ramps, was found.

Since the traffic close to the ramps is less uniform, it is reasonable to suspect that placing the detectors too close to the ramps may have an adverse effect on the reliability of the measured data. However, the questions remain: how close is too close, and does this apply equally to off-ramps and on-ramps?

One may imagine that the reliability of the AID system will greatly depend on the randomness of the measured traffic data.
In other words, the more uniform the incident-free data are, the higher the probability of detecting an incident and the lower the probability of producing false alarms. Therefore, any disturbance beyond the normal randomness of the traffic parameters contributes to a less favorable situation for the AID algorithms. Around the on-ramps, the vehicles (mainly those in the first lane) have to slow down to accommodate the merging vehicles, while the merging vehicles are trying to find a gap and then accelerate to the main stream speed. This causes an obvious disturbance to the flow and increases the occupancy. In contrast, the area around the off-ramps should be much less disturbed, because no merging is involved.

In this thesis, the standard deviation of the occupancies recorded over time at any given point is suggested as a good quantitative measure. To use this measure in studying the effects of the ramps on the uniformity of the traffic flow, a very large number of simulated detectors is necessary to collect traffic data along the freeway over time. To do this, 60 detector stations were defined along the freeway. Detector station #1 was arbitrarily placed 100 ft downstream of the starting node, and the other stations were placed 200 ft (61 m) apart along the freeway. FRESIM handles a maximum of 37 stations at a time; therefore, coverage of the whole freeway with detectors was done in two complementary parts, with detector station #31 in common as a check. Each complete set was simulated for a number of cases for 15 minutes without any incident. Consequently, a detailed picture of the changes in the traffic parameters was generated.

The resulting averages and standard deviations of the recorded occupancies for two of the simulated cases are presented in Figure 5.3.
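The uniformity measure itself is simple to compute: for each simulated station, take the mean and the standard deviation of its 30-second occupancy readings over the run. A sketch, with hypothetical station names and occupancy values (not data from the study):

```python
from statistics import mean, stdev

# Hypothetical 30-s occupancy readings (%) over part of one incident-free run.
occupancy = {
    "station_14": [11.8, 12.4, 11.9, 12.6, 12.1, 11.7],  # away from ramps
    "station_22": [13.0, 17.5, 12.2, 19.1, 14.8, 11.4],  # near an on-ramp
}

# Per-station average and standard deviation of occupancy.
for station, series in occupancy.items():
    print(station, round(mean(series), 2), round(stdev(series), 2))
```

Stations whose standard deviation stands well above the background level (as near the on-ramps in Figure 5.3) are the ones to avoid when siting detectors.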
In the first case, only passenger cars were present, while in the other case, 6% of the vehicles were assumed to be trucks and heavy vehicles biased to the first lane (as was used in the generation of the main data sets).

Referring to Figure 5.3, consider first the occupancy changes in the first lane for the case with trucks. As expected, six plateaus can be identified in the upper diagram, corresponding to the six zones of varying lengths within which the volume is constant (e.g., between the Boundary off-ramp and the Grandview on-ramp). The height of these plateaus increases or decreases according to whether the upcoming ramp is an on-ramp or an off-ramp, respectively. However, in the area close to the Grandview on-ramp, the average occupancy is much higher than expected. This starts some 100 feet upstream of the on-ramp node and continues for another 700-900 feet. A similar, but weaker, jump in the occupancy can also be seen in the area around the Willingdon on-ramp. This can easily be explained by the reactions of the drivers of both the merging vehicles and those in the first lane of the freeway.

Figure 5.3. Variation of Occupancy along the Freeway for Incident-Free Cases
(Average occupancy (upper diagram) and standard deviation of occupancy (lower diagram) plotted against detector station number for four cases: lane 1 and lane 2, each with 6% trucks and with no trucks. The positions of the Boundary off-ramp, Grandview on-ramp, Willingdon off-ramps (S) and (N), and Willingdon on-ramp are marked.)

However, a more interesting parameter to consider is the standard deviation of the occupancy, shown in the lower diagram. It shows a somewhat constant level of variation in occupancy that is due to "normal" traffic fluctuations. Almost within the same areas where there were jumps in the average occupancy, jumps in the standard deviation can be observed as well.
This is an important factor to consider when placing the detector stations for AID purposes, as it gives a quantified measure of the expected reliability of the data: a high variation of measured occupancy under normal traffic conditions will contribute to a high false alarm rate.

The Grandview on-ramp carries almost twice as much traffic as the Willingdon on-ramp. Interestingly, this is also reflected in the magnitudes of the jumps for these two on-ramps, in both the lower and the upper diagrams.

As expected, the occupancy in the second lane is much more uniform than that in the first lane. It is also interesting to compare the case with no trucks to the previous one. Although the same jumps can be seen in this case as well, they are substantially weaker, and a shorter length of the freeway is affected. This shows how sensitive the data are to the percentage of trucks and heavy vehicles, and can be attributed to the limitations in the maneuverability and performance of heavy vehicles.

These tests also confirmed that off-ramps do not introduce any significant change in the normal level of variations; they only reduce the average occupancy level.

Since any such simulation program uses random numbers, its output is unavoidably just one isolated instance of what could have happened in real life, no matter how good the simulation program and the underlying model are. Therefore, meaningful decisions have to be based on a statistically significant number of experiments. In this case too, a number of tests were done to make sure that the idea and its premises are valid.

Based on these tests, proper locations for the detector stations can be found from Figure 5.3. For this study, the spacing between the detectors was arbitrarily selected to be constant and an integer multiple of the 200 ft used in the earlier tests. Using a spacing of 2200 ft (670 m), one can place six detector stations along the selected segment of the freeway. As shown in Figure 5.3, detector stations 4, 15, 26, 37, 48, and 59 were selected as the six detector locations to be used in the main simulation model. This selection puts one of the stations (#4) within the region covered by the auxiliary (deceleration) lane for the Boundary off-ramp. This is not expected to affect the reliability of the data for AID purposes, but it means that the measured volumes may not correspond to any specific point on the freeway: they correspond neither to the volume of the flow before the off-ramp nor to the volume after it, because some of the vehicles that are going to exit the freeway do not pass the sensor.

CHAPTER 6
DATA SETS

A large number of data sets were necessary so that the AID methods could be tested under both incident and incident-free conditions. During the course of this study, many such data sets were generated and used. The two main series that were used are discussed here.

6.1 First Series of Data Sets

To generate a large number of incident and incident-free data sets, a smaller number of incident cases can be simulated, in which the recorded detector data before the occurrence of the incident represent the incident-free data sets, and those after the occurrence of the incident represent the incident data sets. A ten-minute window of the recorded data was selected for each incident case, in which the incident starts just after the fifth minute and continues to the end of the simulation. The detectors record the average volume, speed, headway, and occupancy every thirty seconds.

As mentioned earlier, it is desirable to study the performance of the AID algorithms under various conditions.
This provides an opportunity to examine the robustness of the AID methods. (A smaller number of intermediate data sets were also generated and used while various options for the UBC AID method were being explored.) To cover a wide range of incident cases under various conditions, the following parameters can be varied:

•  Incident location (location along the freeway, closeness to the upstream or downstream detectors, and lane);
•  Incident severity; and
•  Time of day, or traffic volume.

Incident location can be represented in three ways, as mentioned earlier. The effects of the geometry, the on-ramps, and the off-ramps on the performance of each AID algorithm can be taken into account by placing the simulated incident in each detection zone. In our case, with six detector stations, this gives five possible zones for simulating the incident.

Furthermore, the performance of each algorithm may depend on the closeness of the incident to the detectors; in other words, on whether the incident is closer to the upstream detector, the downstream detector, or the middle of the zone. Any number of locations could have been chosen within each zone to consider this. In this study, to keep the total number of combinations manageable, three locations per zone were selected.

The effects on the traffic of incidents happening in the first lane are certainly different from those of incidents in the second lane. Therefore, two choices of lane were considered for each location along the highway.

The severity of an incident is also a major factor in its effect on the traffic, and therefore in the possibility of its detection. To study this, three degrees of severity were considered in generating the data sets. The highest severity was a full blockage of one lane and a rubberneck factor of 10% in the other lane, plus a capacity reduction downstream of the incident. The second and third degrees of severity were capacity reductions of 90% and 80% in the specified lane, respectively.

The traffic volume and the percentages of inflow to and outflow from the freeway and its ramps change significantly with the time of day. To consider this change in the present study, the required volume data had to be treated as variables. Since the time window is not very large, the change of traffic volumes during each simulation can be neglected; however, each case can be assumed to happen at a different time of day. A total of ten times on a typical weekday was selected for use in the generation of the data sets. To obtain the necessary values, the most recent hourly counts from the study site were obtained from the Ministry of Transportation and Highways, and the weekday values were averaged to obtain the typical weekday traffic volumes on an hourly basis. The volumes for the 8th to the 17th hours of the day are shown in Table 6-1; those for the 9th, 11th, 13th, 15th, and 17th hours were used in the first series of data sets.

Considering the five parameters discussed above, one can form a combination of 450 incident cases as:

(5 zones) x (3 locations/zone) x (2 lanes) x (3 levels of incident severity) x (5 times of day) = 450

Table 6-1 - Traffic Volumes Used in the Simulation Data Sets
Time | TCH (EB) between 1st Ave. & Boundary | Boundary off-ramp | Grandview on-ramp | Willingdon off-ramp (S) | Willingdon off-ramp (N) | Willingdon on-ramp
08   | 3750 | 697 | 1006 | 449 | 490 | 205
09   | 3549 | 593 | 1070 | 442 | 482 | 238
10   | 2687 | 413 | 1130 | 434 | 670 | 178
11   | 2379 | 428 | 1154 | 490 | 744 | 140
12   | 2652 | 460 |  992 | 574 | 529 | 154
13   | 2598 | 412 | 1143 | 638 | 397 | 164
14   | 2988 | 465 | 1065 | 690 | 395 | 178
15   | 3184 | 528 |  839 | 688 | 414 | 227
16   | 3208 | 506 |  867 | 617 | 450 | 199
17   | 3063 | 593 |  910 | 483 | 531 | 199

6.2 Second Series of Data Sets

As will be discussed in Chapter 7, after preliminary tests with the first series of data sets showed promising results, a new, expanded series was considered. The new series required a larger number of data sets, so that 50,000-60,000 incident/non-incident decisions could be based on them. Furthermore, the implementation of UBC AID required an extra 15 minutes' worth of data before making the first decision. This series was generated using the same general structure but with the following changes:

•  All of the hourly volume data from 8 a.m. to 5 p.m. were used, to represent 10 times of day (twice as many as in the first series);
•  Each case was simulated for 35 minutes, with the incident occurring at t=25 min; and
•  Only lane-blocking incidents were simulated; the two other levels of severity were not used.

6.3 Data Set Names and Structures

The data sets in both series were named in a structured manner. This allowed the characteristics of each incident case to be represented in a concise, coded format. The coding structure is as follows:

•  The three locations per zone are called "U", "M", and "D", representing locations closer to the upstream detector, in the mid-span of the zone, or closer to the downstream detector. They are located at 1/6, 1/2, and 5/6 of the spacing (2200 ft, or 670 m) from the upstream detectors. This provided a uniform distribution of the incidents irrespective of the geometry of the freeway (see Figure 5.2-B).
•  The zones were numbered by their upstream detector station number, i.e., "1", ..., "5" (see Figure 5.2-B). (In addition, by selecting short codes such as these, it is possible to use the names as file names on the computer as well.)
•  The lanes were numbered "1" and "2", in which "1" refers to the rightmost lane.
•  The three levels of severity were called "A", "B", and "C", ordered from highest to lowest severity. In the second series of incident cases, only severity level "A" was simulated.
•  Times of day were designated in 24-hour notation, e.g., "09" or "17".
•  To distinguish the second, extended series of data sets, an "e" was added to the end of the name structure.

As an example, the case "U31A13e" refers to a case from the second series in which the incident was located close to the upstream point of the third detection zone (i.e., at 1/6 of the detector spacing after detector station #3). It occurred in the first lane with a high degree of severity, and the assumed time of day at the beginning of the simulation was around 13:00, or 1:00 p.m.

To reduce the overall time spent on the simulation, and for easier handling of the data sets, a batch of thirty incident locations for any combination of incident severity and time of day was processed using one data file. To manage this, and as a wildcard for other purposes, the character "X" was used in any position of the name structure where the name referred to all the possible combinations. Therefore, "XXXB09" represents all thirty incident cases of the first series with medium severity occurring at about 9 a.m.

6.4 Effects of the Random Number Seed

Another parameter that can optionally be set in FRESIM is the random number seed used in the generation of the random number series. This number may not seem important to most users.
However, in this study, using the same seed for all the incident cases has an important implication. For example, considering the 90 incident cases represented by "XXXX09", one can see that all the conditions are exactly the same for the first five minutes; it is only after the occurrence of the incident that each individual case results in a different sequence of events. Therefore, the incident-free databases consisting of the first five minutes are not independent of each other and lack the randomness that one expects to generate with simulation. Although some incident-free data would remain useful even if this part of the data were set aside, it is more reasonable to use all of the simulated data.

To overcome this problem, the random number seed should differ among the 90 cases mentioned (while it can remain the same when the time of day is changed). FRESIM accepts an eight-digit seed whose default value is 00007581. In this study, the four leftmost digits were arbitrarily selected to be changed sequentially.

(Since the vehicle counts provided by the Ministry are labeled as 1st hour, 2nd hour, etc., they in fact represent situations closer to 12:30, 1:30, etc. However, since the time of day is not used directly in this study, this has no effect on the validity of the results.)

CHAPTER 7
DEVELOPMENT OF THE UBC AID SYSTEM

In this chapter, the traffic flow before and after the occurrence of an incident is discussed first. It will be shown that, as a result of an incident, the characteristics of the flow are changed upstream and downstream of the incident through the propagation of two waves. It will also be shown that part of the useful information carried by these waves has been overlooked in other methods. Based on the discussion that follows, a concept for the proposed system is introduced. After testing the general feasibility of the proposed system using its basic version, the final form is presented.

7.1 Shock Waves and Expansion Waves

To analyze the state of traffic before and after the occurrence of an incident, the so-called "fundamental flow diagrams" can be considered. The state of traffic in a macroscopic sense is often described by flow rate, density, and speed, through a set of diagrams each showing the relationship between two of them. Figure 7.1 shows one of these diagrams, in which an inverted "U" curve represents the relationship between flow rate and density. Various shapes and formulas for this curve have been proposed in traffic engineering; they are not discussed here, and the general shape shown is used only to present some concepts and characteristics of the traffic flow before and after the occurrence of an incident. The concepts and characteristics presented here are valid irrespective of the curve selected.

Figure 7.1. Fundamental Flow Diagram and Effects of Incident
(Flow rate plotted against density, with the points "U", "H", and "L" discussed in the text.)

The "space mean speed" (u_s) can be found for any point on the flow-density curve as the slope of the line connecting the origin to the point in question; it is calculated by dividing flow by density. The density may range from zero (empty road) to a maximum called the "jam density" (k_j). The maximum flow rate that is possible for a road is called the "capacity" (q_max), and the density at which this occurs is often termed the "critical density" (k_c). The flow takes a zero value under two conditions: an empty road, and a traffic jam in which the speed is zero. The highest possible speed is called the "free flow speed" and can be represented by the slope of the tangent to the curve at the origin.

The critical density divides the curve into two distinctive regimes.
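For concreteness, one commonly assumed shape is the classic Greenshields parabola, obtained from a linear speed-density relation: q = u_f · k · (1 − k/k_j), with capacity occurring exactly at k_c = k_j/2. This is an illustrative assumption only; the analysis in this chapter does not depend on this particular curve, and the u_f and k_j values below are invented:

```python
def greenshields_flow(k, free_flow_speed=80.0, jam_density=100.0):
    """Flow rate q (veh/h per lane) at density k (veh/km per lane) under
    the Greenshields model: q = u_f * k * (1 - k / k_j).
    The u_f and k_j defaults are illustrative, not site values."""
    return free_flow_speed * k * (1.0 - k / jam_density)

# Capacity occurs at the critical density k_c = k_j / 2,
# giving q_max = u_f * k_j / 4.
k_critical = 100.0 / 2
q_max = greenshields_flow(k_critical)
```

Any point with k < k_c lies in the free flow regime, and any point with k > k_c in the forced flow regime described next.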
The first part of the curve, in which an increase in demand is accommodated by an increase in density and flow rate, is called "free flow". In this regime, the speed remains close to the free flow speed, while in the second part of the curve, "forced flow", speed is very sensitive to density. The second regime represents the condition under which the drivers are too close to each other to drive "freely", and an increase in density substantially reduces their speed. In this regime, the further a point lies to the right, the higher the degree of congestion. It can be seen that efficient use of the facility is achieved when the flow rate is close to the capacity without entering the forced flow regime.

The effects of an incident on the traffic flow can be discussed using Figure 7.2. When an incident occurs on a highway operating under free flow conditions, the capacity within a limited segment of the highway is reduced. The flow-density relationship in this section is represented by the dashed curve in Figure 7.2-a. The traffic operation outside of this limited segment still follows the original curve, but its volume can be controlled by the capacity at the incident location. If this capacity is less than the demand (the volume prior to the incident), shown as point "U", the volume in the immediate areas upstream and downstream of the incident decreases to the capacity at the incident location. This requires a change in density that differs on the two sides of the incident.

The occurrence of an incident reduces the effective demand downstream of its location, and hence the condition there changes proportionally to a lower density, represented by point "L".
O n the other hand, the condition on the upstream clearly would no 35  longer allow free flow traffic and a high-density area will be formed in which the speed is significantly reduced. This is represented by a jump from point " U " to point " H " in the diagram. There is also a significant reduction in the speed upstream as a result o f lost capacity.  The difference between the arrival and departure rate o f flow causes the vehicles to be accumulated behind the incident location and this causes the high-density area to progress in the opposite direction o f the traffic flow. The edge o f this area moves backwards and is called the "shock wave". In the same sense, the difference between the flow rates at the incident location and that o f the "undisturbed flow" represented by " U " causes the low-density area to progress forward in the direction o f the traffic. The front edge o f this area that causes an "expansion" o f the gaps between vehicles is termed the "expansion wave ". 36  The speed o f the shock wave can be analyzed using Figure 7.2-b. In this figure, the backward progression o f the high-density area is shown in which after a time increment o f At the length o f this area has increased by A £ . The increase in the number o f accumulated vehicles during At in the freeway segment shown can be found as: Number o f vehicles entered- Number o f vehicles exited or,  This is unless the operating condition prior to the incident is very close to the capacity. In the literature, the term "shock wave" has also been used for this wave. However, the author prefers the term "expansion wave" both considering the effect it has on the traffic and being consistent with the terms used in compressible fluid flow. 3 5  3 6  104  Exp. Wave C)  a)  ^e  E  o o  €  u  f—  Det. St# i+1—  f-4-  Incident Location Density  Det. St# i —  Shock. 'Wave  x  d)  b)  Occj+i  Traffic Flow Direction  Occj  'ML kjj  k  H  Shock Wave Direction  Voli  +  Incident Location *  L.  
Volj i  Spdi  1  r  i i I  i i L  1  1  I  I  i T  Spdj  r  I  —r—  +a  +  +2  Time  + *5  Figure 7.2. Effects of an Incident on Traffic F l o w Variables  105  In which q  u  and q  denote flow rates for undisturbed and high-density areas respectively.  H  O n the other hand, the same increase in accumulated vehicles can be expressed using densities o f the undisturbed and high-density area (k  u  and  Considering that only the density in  the length Al has changed, the mentioned increase can be calculated as:  Therefore, the speed o f the shock wave can be calculated as  ikn ku)  This also corresponds to the slope o f the cord connecting the points " U " and H " as shown in C C  Figure 7.1. It is also noticeable that the slope o f this cord is negative and is consistent with the fact that shock waves move in the direction opposite to the traffic flow. In a similar way the speed o f the expansion wave can be calculated as:  (ku ~kj) Where q  L  and  are flow rate and density for low-density area. This speed can also be  interpreted as the slope o f the cord connecting points " U " and " L " in Figure 7.1. This geometric interpretation also shows that speed o f the expansion wave, the speed o f the vehicles before the incident, and that o f the downstream are all almost the same and equal to  106  the free flow speed, unless the condition is too close to capacity. It can easily be seen that the speed o f the shock wave is much lower than that o f the expansion wave because while the numerator in both formulas are the same, the denominator for the shock wave is much larger.  The "news" o f an incident is carried by the shock and expansion waves to the detector stations. The time taken for this news to be sensed depends on the speed o f the waves and the distance between the incident location and the closest upstream and downstream stations. 
While the incident location could be anywhere within the spacing between two adjacent detector stations on average one can put it in the middle o f that spacing. One such example is shown in Figure 7.2-c in which progression o f both waves is shown. It is notable that the lines representing shock wave and expansion wave are parallel to the cords " U F T ' and " U L " o f Figure 7.2-a respectively. In Figure 7.2-d the signals measured by upstream and downstream detectors are shown. Clearly, the magnitude o f changes in occupancy and speed sensed by the upstream detectors are much larger than those by downstream detectors. However, these changes in upstream signals occur with a much larger delay as expected.  The strength and the speed o f the shock wave depend on the capacity to demand ratio before the incident and the severity o f the incident. A s shown in Figure 7.3-a a higher demand or lower capacity to demand ratio causes a higher speed for the shock wave. However, the figure also shows that the capacity to demand ratio has little or no effect on the speed o f the expansion wave. Figure 7.3-b shows that a more severe incident that causes a larger reduction o f capacity generates a stronger shock that moves faster. The strength o f the shock relates to  107  the jump in density that is caused by it. In this case, also the severity of the incident has almost no effect on the speed of the expansion wave, which remains close to the free flow speed.  \  a)  o P3  I y / / j /  """*"•» ^  X ^ "**" ***  II I  ^  X.  Density ' b,  wRate  i  // Y B I 7/  //  I  /  *  " ***  II '  X.  **• >.  V.  x. V  /'  - ""  ™'  *" — —  •*•»  X.  V.  \  \  X  >.  \ N  ** -_  * V  x  -»  \ f  \ X \  x N  "* «^  *•  \ X X v  ** -_.  *•«_  X. X **• X v, X —. \ \ *** ^ X .  Density  Figure 7.3. 
Effects of a) Demand and b) Incident Severity on Shock and Expansion Waves  108  The cases and effects discussed so far excluded the normal traffic fluctuations and nonhomogeneity in the roadways. It also used simplified models where the jumps or drops in traffic parameters occur instantaneously. However, in reality signals sensed by the detectors contain noise, are not uniform, and are less abrupt. A more realistic view of the changes in traffic parameters as could be sensed by detectors is presented in Figure 7.4. It shows the occupancy readingsfroma series of six detector stations along the study site before and after the occurrence of a lane-blocking incident. The typical changes of the occupancy and the progression of both waves can be seen in this figure. In this example, the speed of the expansion wave is about 4-6 times that of the shock wave.  F i g u r e 7.4. V a r i a t i o n s o f t h e L a n e O c c u p a n c y f o r a T y p i c a l L a n e B l o c k i n g I n c i d e n t  109  To compare the magnitude of the changes due to the arrival of the shock and expansion waves with magnitude of the normal fluctuations, the readings from two immediate detector stations are plotted in Figure 7.5. To do this comparison, the conditions before and after the arrival of the waves are averaged to find an imaginary stationary level for each case. One can see that the occupancy jump due to arrival of the shock wave is distinguishable from those normally present in a signal. This is while the occupancy drop due to arrival of the expansion wave is not much larger than those caused by existing noises. In fact, one may be mislead by a random drop that occurs in the upstream occupancy at t=480s as a sign of arrival of an expansion wave.  0 J 0  ^—-I  90  180  1  1  270  360  1 450  1 540  —I 630  1  1  720  810  , 900  Time (s)  St#3 and St#4 represent upstream and downstream respectively (reproduced from Figure 7.4). 
(s) Lines represent stationary conditions before and after the arrivals of the corresponding waves. Figure 7.5. O c c u p a n c y V a r i a t i o n s at F i r s t Upstream a n d Downstream Stations for a T y p i c a l Incident 110  The above discussions about shock and expansion waves and their important implications if used as indications of an incident in AID systems are summarized in Table 7-1.  Table 7-1 - The Advantages and Disadvantages of Using Shock and Expansion Waves for an ADD System Advantages  Disadvantages  Shock wave  The magnitudes of the changes sensed in occupancy and speed are much larger and more distinguishable from random fluctuations. The changes are more abrupt and do not damp out as the shock wave progresses. This helps making more reliable decisions  The speed of the shock wave is a function of the severity of the incident and the capacity-to-demand ratio. The shock waves move much slower and there is a long delay before their presence is felt at upstream station and consequently a high ADT.  Expansion wave  The speed of the wave is almost The speed almost does not change independent of the incident and the occupancy drops are not severity and capacity-to-demand much larger than the random ratio. The expansion wave moves fluctuations in volume' . The almost as fast as the vehicles changes may damp out as the wave downstream of the incident that progresses. The changes are also could lead to early detection of sensitive to presence of ramps. This the incident. may lead to higher FAR. 7  Considering that passage of a shock wave is much easier to detect than that of an expansion wave, the AID methods have always focused on detection of shock waves. Therefore, as mentioned earlier local or station based algorithms depend on detecting congestion that occurs after the arrival of the shock wave. Double exponential smoothing and the neuro-fuzzy method by Hsiao are among these local methods. 
[7] The expansion wave produces the same drop in volume as does the shock wave because the volume needs to be continuous to satisfy the fundamental conservation law.

As opposed to "station-based" algorithms, "section-based" algorithms use the information from both upstream and downstream. Some of these methods are called "comparative", in that a comparison of the upstream and downstream conditions is used; the California methods and APID are among them. In comparative methods, a sufficiently large difference between the upstream and downstream occupancies is counted as an indication of an incident. Therefore, considering that the changes downstream are small, this difference can only be sensed after the shock wave has passed the upstream detector station. A close look at the other algorithms shows that, one way or another, the congestion needs to be present before the alarm is triggered, and if the downstream condition is also checked, it only works as a supplementary check.

Among "section" algorithms, there are only two [38] seeming exceptions to the above statement: the neural method by Ritchie (1992) and the Australian neural method by Rose and Dia (1995). In both of these cases, the traffic parameters of upstream and downstream are fed to a neural network whose task is to classify the inputs as an incident or non-incident pattern. However, knowledge of how the weights are calculated during the training period, and of the training sets, reveals that the effects of the shock wave would easily outweigh those of the expansion wave. This causes the network to be less sensitive to the changes downstream (if sensitive to them at all).

[38] McMaster may also be assumed to be of this category, since it uses the downstream information as well, but it does so only after the congestion has been detected and has persisted at the upstream station. See page 2 of the paper by Hall et al. (1993).
Therefore, in these cases too, the incident would remain undetected until the shock wave passes the upstream station [39].

Based on the above discussion, one could conclude that despite the diversity of the approaches used by indirect AID methods, they share a common characteristic that acts as an important drawback: the potential for an earlier detection using the information carried by the expansion wave has been ignored. Therefore, detection occurs only after the congestion has already developed. This, as discussed in section 3.3, substantially reduces the operational effectiveness of AID systems. To target this very drawback, the method developed in the present study uses the information carried by both waves.

The potential reduction in detection time can be analyzed by considering the characteristics of the two waves. If both waves shared the same speed and strength, the detection time could be cut in half, because the incident "news" would have to travel only half the distance. Given the higher speed of the expansion wave, an even further reduction of the detection time could be expected. However, the lower strength of the expansion wave reduces the reliability of the decisions based on it, and therefore the full potential cannot be realized. The degree to which the ADT can be reduced mainly depends on the degree to which the "news" carried by the expansion wave can be reliably utilized. In the following sections, the basic concept and the details of the proposed UBC AID system, which attempts to take advantage of both waves in a reliable manner, will be presented.

[39] This can also be concluded from examining the reported detection times of both methods.

7.2 Basic UBC System

As was shown earlier, a substantial reduction of the detection time can be expected if the information carried by both the shock and expansion waves is exploited.
It was also mentioned that the changes made by expansion waves were either neglected by existing AID methods or were examined in such a way that they were outweighed by the dominant changes caused by shock waves. This suggests that the effects caused by each wave have to be analyzed independently, which allows each wave to be treated differently and according to its own characteristics.

The logic of the proposed AID system can be described using Figure 7.6. It consists of the following three stages:

•  Preprocessing
•  Classification
•  Decision making

At the core of the proposed system there are two classifiers, each dedicated to analyzing whether a shock wave or an expansion wave is present at a sensory station. First, however, the signals from each sensory station may be preprocessed in the way that is most suitable for each classifier. The decision whether or not to trigger an alarm is made after the outputs of the two classifiers - as evidence of the occurrence of an incident - are examined at the third stage. This structure allows the upstream and downstream signals to be analyzed independently, while the results of the analyses can then be combined.

Figure 7.6. General Scheme of the Proposed System

After a number of initial tests using the general structure proposed above, a basic version of the proposed system was developed. This basic version, shown in Figure 7.7, was mainly used to study the feasibility of applying the strategies discussed earlier. In the following sections, the various stages of this basic version will be discussed.

7.2.1 Input Features and Preprocessing

The study site has two lanes, and therefore for each detector station there will be two sets of detectors.
Assuming that measurements of occupancy, speed, volume and headway are available for each detector set, there will be eight main measurements as potential inputs per station [40]. As mentioned in section 2.2, among these measurements all but the headway have been used in existing AID methods. The headway can easily be obtained from the "0" and "1" signals of the detectors, in a similar way to the other measures: the time between two consecutive transitions from "0" to "1" is the headway between consecutive vehicles.

Figure 7.7. The First Prototype of the UBC AID System (upstream measurements feed the shock wave detector; downstream measurements are smoothed by a moving average and feed the expansion wave detector)

[40] Since the point mentioned in section 2.2 about headway and volume has not been considered in FRESIM, the average headway and volume are treated as independent of each other.

When selecting the inputs for a classifier, it is often, but not necessarily always [41], avoided to use inputs whose information content would be redundant. Among the four variables stated, this condition might apply to volume and headway, depending on how the headway is calculated. An examination of FRESIM showed that the product of headway and volume (with a compatible set of units) is close but not necessarily equal to unity. Therefore the two variables are not fully redundant, and this is truer for smaller values of volume.

There are a number of ways to use the measured values for each lane. In other AID methods, the measurements are usually averaged over the adjacent lanes. This has the drawback of reducing the sharpness of the resulting change when the waves arrive at the station, because the effects are usually stronger and arrive sooner in the lane in which the incident has occurred. Therefore, using individual measurements increases the chances of early detection.
In the first set of trials, the classifiers were used for each lane rather than for each station. This provides a chance not only to detect the incidents sooner, but also to specify which lane contains the suspected incident. The second factor is not very important, as the lane can be identified through other means. The first factor is important, but it comes at the price of doubling the number of decisions made per unit time for a two-lane freeway, which potentially doubles the number of expected false alarms as well. Therefore, in the basic version, the mentioned features for both lanes were used together for both of the classifiers. Using these eight features, one decision per station was made for each classifier. In addition, there were instances when a substantial difference between the values of the same feature (e.g., volume) measured in the two lanes was a further indication of the existence of an incident.

As mentioned earlier, when the expansion wave reaches the detectors, the downstream measurements experience minor and less abrupt changes. These changes are generally of the same order as the signal noise that is present. This means a lower signal-to-noise ratio, which makes it hard for the classifier to distinguish between a temporary change due to normal traffic fluctuations and one that is caused by an incident. For this version, a moving average with a span of two periods was used to smooth the signals prior to feeding them into the downstream classifier.

[41] In the proposed UBC AID system, as will be discussed in section 7.2.2, neural networks are used for classification. When used as function approximators, neural networks model addition or subtraction better than multiplication or division. Consequently, depending on the underlying factors, a neural network used for classification may also show a different performance when some input features are inverted before use, despite the fact that they carry the same information.
A moving average over two periods smoothes the signal by substantially reducing the magnitude of the noise, but it may also delay the detection, because the magnitude of the change due to the arrival of either wave is halved in the first period. In the case of the shock wave, the upstream signals experience more abrupt and larger changes in the measured parameters. This makes it easier for the upstream classifier to distinguish between a change due to the arrival of a shock wave and a change due to noise. Therefore, there is no need to smooth the upstream signals and thereby potentially delay the detection of an already slow-moving but strong shock wave. In the case of the expansion wave, however, the reliability of the signal demands a moving-average filter to smooth out the noise, even though this may come at the cost of a delayed detection. This is particularly justified when noting that the expansion wave, although weaker, moves much faster, and in a majority of cases the first sign of its arrival may be observed after one sampling period.

7.2.2 Upstream and Downstream Classifiers

Neural networks, as discussed in section 3.1.10, have been successfully used for many classification problems. Their ability to handle noise and even missing data makes them ideal for various applications. They are also inherently nonlinear, and therefore have a higher chance of performing reasonably well for nonlinear problems. This is in contrast to the statistical methods, which are mathematically sound but rely on linear and otherwise constraining assumptions.

As shown in Figure 7.7, in the basic version a three-layer feed-forward neural network was used for both classifiers. Each classifier had eight input nodes for the previously mentioned input features. The hidden layer and the output layer had five nodes and one node, respectively.
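The preprocessing and classifier just described - a two-period moving average feeding an 8-5-1 feed-forward network - can be sketched as below. The sigmoid activation and the random placeholder weights are assumptions made for illustration; in the actual system the weights are obtained by off-line backpropagation training:

```python
import math
import random

def moving_average2(values):
    # Two-period moving average used to smooth the downstream signals.
    return [values[0]] + [(a + b) / 2.0 for a, b in zip(values, values[1:])]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def recall(x, w_hid, b_hid, w_out, b_out):
    # One forward ("recall") pass: 8 inputs -> 5 hidden nodes -> 1 output.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hid, b_hid)]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)

random.seed(0)
w_hid = [[random.uniform(-1.0, 1.0) for _ in range(8)] for _ in range(5)]
b_hid = [0.0] * 5
w_out = [random.uniform(-1.0, 1.0) for _ in range(5)]

smoothed = moving_average2([30.0, 32.0, 31.0, 35.0])  # e.g., smoothed occupancy
out = recall([0.1] * 8, w_hid, b_hid, w_out, 0.0)     # placeholder feature vector
print(0.0 < out < 1.0)                                # True
```

Recall involves only a handful of multiply-accumulate operations, which is why an on-line classification takes a negligible fraction of the 30-second sampling period.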
This configuration for the classifiers was selected after a number of trials showed that it gave a better performance.

As was mentioned earlier, in neural networks the weights associated with each link connecting two nodes are set during a process called training. During this process, a set of already classified patterns, called the "training data set", along with their desired outputs, is fed into the network. The weights are then adjusted over consecutive iterations such that the error, or difference between the desired and actual outputs, is minimized. After the training stage is completed, a pattern fed to the input nodes in the so-called recall stage can be classified by the network [42].

The backpropagation method is widely used for training feed-forward neural networks. The details of this training process are explained by Rumelhart et al. (1986). Backpropagation produces a good classification performance, but the training process is very slow for large networks. However, the approach of the present method only requires a small network that does not demand extensive training time. Furthermore, the idea in the present AID system is to train the network off-line; only the recall will be done on-line. Since recall does not require any trial and error and only needs a few calculations, it would take only a fraction of a second to classify the data patterns of an operational site.

Another important point to consider while training is that prolonged training, or a small ratio of training patterns to the number of links, can produce what is often called over-fitting or over-training. The main reason to present a sample of patterns for training is to enable the network to generalize from that sample to the original population from which the sample has been selected. Obviously, using a sample with too few patterns, or a non-randomly selected sample, produces a bias towards a specific region of the population.
The same problem may arise either when there are too many weights to be adjusted or when the network is presented with the training sample excessively. Consequently, if after the training the network is tested with another sample from the same population, it will show a substantially larger error.

[42] Feed-forward neural networks are used in many applications other than classification as well. However, since in this thesis they are used for classification purposes, the sentences above refer to that specific role.

In the present case, a 6-to-1 ratio of the number of patterns to the number of weights (and biases) provides a good condition for the network. Early stopping of the training can also be used to reduce the risk of over-training. Often there is a "test data set" whose error is calculated along with that of the training set. However, contrary to the training set, the error of the test set has no direct effect on the calculation of the network weights. As long as both errors are decreasing, the training may continue, but an increase of the test error signals the critical point for stopping the training process. Each of the two classifiers was trained with a randomly selected half of the patterns of the "XXXA11" set of the simulated data. They were also tested during the training process with the other half of the same set of data. In each case, the outputs for patterns that represented the condition prior to the occurrence of the incident were labeled as "0". In a similar way, the outputs for the rest of the patterns were labeled as "1".

7.2.3 Decision Making Block

The outputs of the classifiers are fed into the decision making block, whose task is to examine the evidence and trigger the alarm if necessary. In the basic version, this block consists of two thresholds and one "OR" gate.
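A minimal sketch of such a decision block follows. The threshold values are illustrative assumptions, not the calibrated ones; the only structural points are that each classifier output gets its own (biased) threshold and that the two binary results are combined with an "OR":

```python
def alarm(up_output, down_output, up_threshold=0.7, down_threshold=0.9):
    # Each classifier output is compared with its own (biased) threshold;
    # a higher downstream threshold compensates for its lower reliability.
    return (up_output > up_threshold) or (down_output > down_threshold)

print(alarm(0.85, 0.10))   # shock-wave evidence upstream       -> True
print(alarm(0.20, 0.95))   # expansion-wave evidence downstream -> True
print(alarm(0.40, 0.50))   # neither is convincing              -> False
```

Because either branch alone can trigger the alarm, the false alarm rates of the two branches add, which is why the thresholds must be set conservatively.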
Although in the training set incident and non-incident patterns are represented by "1" and "0" values, the outputs of the network can be any number (e.g., in the range -0.1 to +1.1). The larger the number, the higher the resemblance of the pattern to an incident situation is expected to be, and vice versa. However, no other conclusion can be drawn from the output of such a network; for example, a probability cannot be attached to the output value directly [43]. Therefore, usually a simple threshold defines whether or not the pattern belongs to a class. For a two-class problem, this threshold is often selected to have a value of 0.5.

A threshold value of 0.5 tends to provide an unbiased misclassification for each of the classes involved. Although this might be good for problems with "symmetric" classes, its use is not justified for the problem at hand. The main reasons for using biased thresholds are as follows:

•  As will be shown in section 7.2.4, a substantial number of patterns would be "mislabeled" as incidents. They represent the measurements taken after the incident has occurred, but before the effects of the incident have been sensed by the detectors. This mislabeling is somewhat unavoidable, because defining the exact time when the incident effects reach the detector is not always possible, whether in the simulation data or in a real situation.

•  In the simulated data, the numbers of incident and non-incident patterns have been selected to be equal. In reality, however, incidents happen rarely, and therefore incident patterns are much scarcer than non-incident patterns. One can either select the same number of non-incident patterns to keep the balance, or use an appropriate weight to compensate for the imbalance. Otherwise, a biased threshold would be necessary.
[43] Probabilistic neural networks are exceptions, in the sense that each of their outputs is assumed to be the probability that a pattern belongs to a specific class.

•  Most important of all, the consequences of the two types of misclassification are not the same. The tolerance for a false alarm should be much lower than that for the opposite error, which happens when an incident has been missed for one interval. The ratio of the former to the latter might be in the order of 1 to 100 or lower.

On the other hand, this threshold can be used effectively as a calibrating parameter. Although in its present form it does not assign a confidence to the output of the network, it is certain that a higher output means a higher confidence. Therefore, by increasing the threshold one can increase the confidence required before accepting the network's decision as true. Considering the earlier discussions, it is obvious that a higher threshold should be used for the downstream classifier to compensate for its lower expected reliability.

In the first prototype, the outputs of the classifiers, after passing through their thresholds, are combined with an "OR" gate. The result is an alarm if the output of either of the two classifiers exceeds its respective threshold. By using an "OR" gate, the false alarm rates of the two classifiers are summed. Therefore, it is necessary to adjust the thresholds such that the false alarm rate is in the region acceptable to the operator. However, the benefits of using an "OR" gate outweigh its drawbacks: a larger number of incidents will be detected, and the average detection time will be reduced, because there is no need to wait for the effects of the incidents to reach the detector locations on both sides.

Another factor to consider is whether to use a persistence check to reduce false alarms due to short-term fluctuations of the traffic parameters.
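Such a persistence check can be sketched as follows; the three-period span is an illustrative assumption:

```python
def persistent(raw_alarms, periods=3):
    # Suppress an alarm until it has persisted for `periods` consecutive checks.
    run, out = 0, []
    for a in raw_alarms:
        run = run + 1 if a else 0
        out.append(run >= periods)
    return out

# A two-period blip is filtered out, but the sustained alarm is delayed too:
print(persistent([0, 1, 1, 0, 1, 1, 1, 1]))   # last two entries are True
```

The example shows the trade-off: the transient at the second and third checks never triggers, but the genuine alarm starting at the fifth check is only declared two periods later, which directly adds to the detection time.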
Using smoothing for the signals fed to the upstream classifier, or selecting higher thresholds for each of the classifiers, are alternative ways of tackling the problem. After a number of trials, it was found that using the persistence check unnecessarily adds to the detection time, while with smoothing and proper calibration one can get a better ADT for the same FAR. Therefore, using smoothed values for the downstream measurements and higher values for the thresholds were the measures taken to alleviate the effects of short-term perturbations and achieve the desired reliability.

7.2.4 Training Data Sets

To evaluate the basic version of the proposed system, the first series of data sets was used, in which for each case an incident had occurred five minutes after the start of simulation [44] within a 10-minute time window. The four main measurements were recorded once every 30 seconds for each individual detector. By putting together the sequential readings of the immediate detectors on either side of the incident, one can get twenty patterns per incident case for each classifier. The first ten of these patterns have been labeled as non-incident and the other ten as incident.

The true number of incident labels for each pattern set should always be less than ten, but this practice was due to the lack of exact knowledge of the onset of the arrival of the waves (particularly the expansion wave). A closer look at what happens after an incident will clarify this and some other points. Figure 7.8 shows volume and occupancy readings as a function of time for four selected examples. The incident in all of the cases has occurred at t=300 seconds. All of the charts have been drawn with the same scale to make it easy to compare the readings of one example with another.

The first readings are from the upstream detector, about 560 meters away from the incident.
Although the incident has occurred at 9 a.m., it has taken about 4.5 minutes for its effects to be sensed by the upstream detector. Therefore, from the point of view of an observer at the upstream detector position, the incident has occurred at t=570 seconds, and only the last two of the patterns should have been labeled as incidents. A similar incident occurring at about 11 a.m. may not be sensed, and would be undetectable by other methods, within the first 5 minutes of the incident. Fluctuations in the volume and occupancies are also noticeable, as they are quite typical of what is expected during normal traffic. This is a straightforward case as far as labeling the onset of the arrival of the shock wave is concerned.

[44] This refers to the start of simulation after some "start up period" in which the vehicles fill the road until equilibrium of the inflow and outflow is achieved for all of the links.

Figure 7.8. Some Examples from Detectors' Readings: a) Upstream of Incident Case D11A09; b) Downstream of Incident Case D11A09; c) Upstream of Incident Case U31A11; d) Downstream of Incident Case M22A15 (lane#1)

The second example shows the readings from the first downstream detector for the same incident, both the actual readings and those smoothed with a moving average.
Since the incident is very close to the detector (about 110 meters away), it is reasonable to expect the arrival of the expansion wave in less than half a minute. However, the presence of a temporarily higher volume just before the incident makes it hard to assume so until one or two more periods have passed (particularly when looking at the smoothed curve). This case is a clear example of the common problem of identifying the onset of incident effects from downstream signals. The importance of the smoothing is also evident from this figure, as the amplitude of the normal fluctuations is comparable to the changes due to the incident. As a drawback, however, it can be seen that the smoothed signal both lags and is damped by the time the expansion wave has arrived at the detector.

The next case shows an example where it is even harder to decide about the onset of the arrival of the shock wave. The incident has occurred close to the upstream detector at 11 a.m. At t=360 seconds there is a substantial decrease in volume, which is an indication of a shock wave. However, as conflicting evidence, the occupancy has also decreased. It is not until after the following period that the occupancy increases substantially, which also coincides with a moderate increase in volume. Only at t=420 seconds have the volume and occupancy both changed in the expected (i.e., opposite) directions at the same time. This was probably caused by a random decline of the traffic volume just prior to the arrival of the shock wave. No matter what the cause is, it is very difficult to decide when to declare the pattern as incident.

The last example shows an unusual case where a "legitimate" false alarm is almost unavoidable. The readings are from a downstream detector located in the first lane when an incident occurs some 330 meters away in the second lane.
The volume and occupancy readings at t=270 seconds are unusually low, and furthermore, during the last 30 seconds before t=300 no vehicle has passed the detector. This causes even the smoothed curve to become very low and signals a false alarm. Another surprising change is also observable long after the incident has occurred (and continued): the volume and occupancy at t=510 and 540 seconds reach the same levels as before the incident, as if the incident had been cleared.

These examples show that defining the first time period in which the effects of incidents are observed is a subjective and nontrivial task [45]. Therefore, although this was carried out for all of the upstream detector patterns, its result was not used in re-labeling the patterns. Rather, it was used to estimate limits on the detection rate and average detection time. These limits cannot be surpassed by other AID methods that work by capturing the shock wave. A visual check of the "XXXAXX" series shows that in only 86-89% of the cases will the effects be sensed at the upstream detector within the first five minutes after their occurrence.

[45] This is even harder to do in the case of real traffic data. Obviously, the better the boundaries of the classes are defined, the better the expected performance of the network would be. However, as long as the decision is made in a consistent way, it should not have a significant adverse effect on the performance. This is clear both from the results of the tests done in this work and from the historically proven performance of neural networks when dealing with noise and missing data. An additional important factor is that in the problem at hand there are, for each classifier, only two classes, and the bias toward each can be adjusted by the thresholds in the decision making block.

The average time necessary for the shock wave of these incidents to reach the upstream detectors is estimated to be about 2.1-2.5 minutes.
These figures give a very good indication of the performance bound, as will be seen in section 7.2.5.

7.2.5 Testing Other AID Methods and Intermediate Results

Although the main purpose of the testing at this stage was to study the feasibility of the idea, a general comparison of the performance of other selected AID systems on the same data also provides useful information. In this section, after discussing the strategies used to calibrate the other methods, the intermediate results will be presented.

Calibration of the California#7 and California#8 methods requires the adjustment of three and five thresholds, respectively. After trying a number of suggested values from the user guideline (Payne and Knobel, 1976), it was observed that California#8, which has a check for compression waves, consistently outperformed California#7; that is, for a similar level of ADT, the FAR of California#7 was higher. Therefore, the rest of the fine-tuning was continued only with California#8.

To find the best performance possible with the California#8 method, special effort was put into fine-tuning this method. The three performance measures have to be prioritized; the key measure chosen, as discussed earlier, is FAR. Then, given an acceptable range or value for FAR, the adjustment of the thresholds should continue until the best values for DR and ADT are found. Often this fine-tuning means finding values such that any small change in one direction increases the FAR, while a change in the other direction either does not improve the other two performance measures or deteriorates them. Therefore, the following strategy for fine-tuning California#8 was used:

1. Select an acceptable number of false alarms as the lower bound.
2. Use the suggested values for the thresholds.
3. Run the program for all of the data sets and continue changing (tightening) the thresholds to arrive at the selected level of FAR.
4.
Change each of the thresholds in the direction of relaxing the condition implied by that threshold [46] and rerun the program.
5. Each time after step 4, recheck the previous thresholds for the possibility of relaxing them further while retaining the same level of FAR.
6. Increase the number of acceptable false alarms if necessary and repeat from step 3.

[46] Sometimes this means increasing the value and sometimes decreasing it. This can be decided either by examining the related condition in the decision tree or simply by trial and error. Since in this method the changes in the performance measures are monotonic functions of the thresholds, this strategy is very easy to apply to the California methods.

Following the above steps, one can find the best performance possible for the given set of data. However, these values should be thought of as a limiting upper bound on the performance, because everything was adjusted with knowledge of the feature values for all of the patterns. There is no guarantee that the same thresholds will be the best when new patterns are checked. Therefore, during calibration, rather than a fixed number of false alarms from a limited (even though large) number of cases, one needs to use an expected FAR from the recorded data at hand.

A similar calibration and testing approach was taken for the McMaster method. After a number of trials of the originally proposed logic (Gall and Hall, 1989) using the calibration process suggested by Persaud et al. (1990), the modified template from a later paper (Hall et al., 1993) was built and tested. Since the latter showed better performance for the test cases, the rest of the fine-tuning was pursued only with the modified template [47].

The calibration process for the McMaster algorithm consists of defining the mentioned template in the volume-occupancy space, as shown in Figure 3.3. This template has four areas, two of which are divided in turn into two sub-areas.
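The key boundary of this template - in the published McMaster method, a parabola giving the lowest volume expected for uncongested operation at a given occupancy - can be sketched as below. The three coefficients are arbitrary placeholders, not calibrated values, and the volume units are left abstract:

```python
def uncongested_lower_bound(occ, a=-0.5, b=40.0, c=0.0):
    # Assumed parabolic lower bound of uncongested operation: the smallest
    # volume expected at occupancy `occ` (%) when traffic is flowing freely.
    return a * occ ** 2 + b * occ + c

def looks_congested(occ, vol):
    # A volume-occupancy point below the parabola suggests congested operation.
    return vol < uncongested_lower_bound(occ)

print(looks_congested(20.0, 400.0))   # below the curve -> True
print(looks_congested(20.0, 700.0))   # above the curve -> False
```

In the real algorithm the volume-occupancy plane is further divided into the areas mentioned above, and a persistence requirement is applied before an incident is declared.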
To calibrate the McMaster algorithm, one has to define the boundary curve and the lines between these areas and sub-areas. The main curve is the parabola (with three parameters) that defines the lower bound of uncongested operations. Generally, the performance is very sensitive to the parameters of the equation of this curve. The calibration was carried out following the same general strategy, while using three periods of persistence check as suggested in the literature.

For both methods, a station-specific calibration, in which the calibration is done for each station or zone independently, is expected to enhance the results. After a number of trials, it was observed that the DR of both methods might only increase by less than 1%, while the ADT may decrease by about 5% and 10-15% for the California#8 and McMaster methods, respectively.

[47] There are two versions of the modified template, as shown in Figures 3.3-a and 3.3-b. The first one is used for a normal station, while the second one is suggested for stations affected by recurrent congestion. In this study only the first template was used, since the way to define curve LQDF was not clear to the author. However, the
49  The A D T and D R o f California #8, and McMaster algorithms along with those o f the first version o f U B C prototype have been plotted as a function o f false alarm rate. Figure 7.9 shows these plots for the " X X X A X X " series o f data. T w o lines are plotted for U B C results in each graph. The lines marked as " U B C , U " shows the results for a condition that only  author does not expect this to make more than a minor difference in the tests performed here considering that only in a small number of test cases recurrent congestion may occur around station#3. This was because at this stage there was no mechanism to avoid the alarms from propagating to neighboring zones. Since these methods seems to be more vulnerable to false alarms in upstream of the incidents, this was done to make the resulted performance measure more comparable with those of the proposed method at this stage. However, complete comparison should only be made on the results presented in Chapter 8 where all of the conditions have been applied equally to the final version of UBC AID system and the other methods. 4 8  4 9  132  upstream classifier is used and the other line represents the results when both classifiers are used. The former is plotted to be compared with the performance bound found visually.  25 2 1.5 Q <  —  —  —  .  — 1  California#S —McMaster _UBC, U UBC, U&D  0 5 0 0.05  0.1 FAR  100  0.15  0.2  (%)  —  95  £ K Q  90  —  —  —  .  — -  California#8 —McMaster -UBC, U UBC, U&D  85 4-  80 0.05  0.1  0.15  0.2  FAR(%)  Figure 7.9. O p e r a t i n g Characteristic Curves of V a r i o u s A I D methods for the Simulated D a t a - " X X X A X X " Series, L a n e Closure Incidents  The A D T for the upstream classifier lies in the range that was found by visual check o f the time taken for the effects o f the shock wave to be sensed at the upstream detector location (2.1-2.5 min). 
In other words, the time spent by the system to become convinced that the changes in the upstream parameters are due to an incident is almost zero. On the other hand, when the downstream classifier is used as well, the ADT is about 0.6 min below the bound found by visual check. This implies that the idea of utilizing the reliable portion of the downstream information to detect incidents before their congestion reaches the nearest station is indeed feasible.

The plots for average detection time also show that the California #8 and McMaster algorithms provide very similar results. It is also easy to see that there is about half a minute of difference between when the presence of the shock wave has been sensed and when these methods trigger the alarm. It is possible to reduce this difference by applying station-specific calibration, particularly for the McMaster method, but even though small, there will always be a difference.

The other part of Figure 7.9 shows the detection rates within the first five minutes. A similar explanation holds for these plots as well. The UBC upstream classifier lies in the 86-89% range. Introducing the downstream classifier pushes the detection rates up by about 10%. This is because in these cases the shock wave takes more than five minutes to reach the upstream detector station, while the expansion wave that has reached the downstream station was strong enough to trigger the alarm in the UBC system. The California#8 and McMaster methods show results close to one another and to the bound found by the visual check.

These graphs show the performance for cases where the incident has blocked one lane of traffic. However, the incidents from the "XXXBXX" and "XXXCXX" series, which represent less severe incidents, were also tested with all of the methods.
The results were similar to those of the "XXXAXX" series in the sense that only the proposed system could pass the bound found visually, although with a smaller margin. On one hand, this confirms the feasibility of the idea for less severe incidents; on the other hand, it shows that the expected contribution of the downstream classifier fades for less severe incidents. This is because the more severe the incident, the stronger the changes due to the arrival of its expansion wave. Fortunately, traffic management authorities naturally put a higher priority on early detection of more severe incidents.

7.3 Final Form of the UBC AID System

The basic version of the proposed system only showed the general feasibility of using expansion waves independently as indicators of incidents to detect them more quickly. It was only tested in the same zone as the incident because there was no mechanism to avoid alarms when the incident was in a neighboring zone. Further computer experiments also showed that a number of modifications were necessary to enable the system to use the detector signals efficiently and maintain its performance under various conditions. Therefore, after a number of trials, the final form of the UBC AID system was designed and tested.

Figure 7.10 shows the proposed UBC AID system in its final form. The modifications made to each of the three stages are discussed in the following sections, and the results will be discussed in the following chapter.

7.3.1 Preprocessing

As shown earlier by the histograms in Figure 2.2, to provide an opportunity for making reliable decisions, the control variables on which the decisions are based should be selected to be as distinguishable as possible between incident and no-incident conditions. To increase the reliability, at least two groups of problems need to be overcome:

• The temporary fluctuations present in the control variables.
• The variation present in the control variables as a function of time or location of the detectors.

The first problem was addressed in the first version of the proposed system by smoothing the downstream signals. This reduces the width of the bell-shaped histograms and thus shrinks the overlap region. In the case of the upstream signals, this was not necessary because the histograms are farther apart, and therefore the benefit would not justify the inherent delay that comes with smoothing.

The second problem refers to what could be called "robustness", that is, the ability to maintain reliability over a wide range of operating conditions. If one considers the incident-free histogram for a certain detector over a short period (e.g., 15 minutes), the mean would differ from that of the same histogram over another period. In both cases, however, a small standard deviation is expected for such a histogram. Obviously, the histogram drawn for the same location over the period of a whole day would show a mean that lies somewhere among the typical means during that day; the standard deviation, however, would be many times larger than the former ones because of the variations during the day. A similar widening of the histogram is expected if one considers various locations. In other words, while the system is showing good performance in one zone, it could produce an unnecessarily high number of false alarms in some other zones or miss incidents somewhere else. There are two ways to solve the problem:

• To use time- and location-specific calibration.

• To select control variables such that they are as independent as possible of time and location.

The first solution has been suggested and used by many researchers. However, it not only requires much more effort to be put into calibration, it also requires incident data for all locations and times, which are not always available.
This would be even more of a problem for methods that use statistical means or neural networks. Neural networks owe much of their ability to handle noise to the large amount of representative data exposed to them during the training process. When a sufficiently representative database exists, or in the absence of other alternatives, this might be a path to follow.

The second solution requires selection of the control variables such that they are affected by, and adapt to, the changes in the traffic variables. In the first version, the traffic parameters were used directly (although smoothed) as input features to the networks. In the final form of the UBC AID system, however, the ratios of the traffic variables to their estimated normal values have been used as the input features.

The estimated normal values should represent, as closely as possible, the expected values of the traffic variable being measured by the specific detector, for that specific time, under non-incident conditions (including recurrent congestion). One way to estimate these is to average the historical values for the location and time of day. This method has been used by some researchers, mainly to replace the measured values in case of a faulty reading by a sensor. Although an option, it is not necessarily a good one, because a number of factors can substantially and adversely affect its expected reliability; among such factors, one may consider the effects of the day of the week, the day of the year, special occasions, and weather conditions. This is clearer when one notices that about one false alarm in a thousand decisions is all that can be allowed. A better estimate is one whose value is updated along with each measurement. In this study, a moving average of the last 15 minutes (updated every 30 seconds) was used as the estimate of the normal values.

The flow of information in the preprocessing stage can be easily followed in Figure 7.10.
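Concretely, the normal-value estimation and the ratio features just described can be sketched as below. This is a minimal illustration, not the study's implementation: the window sizes follow the text (a 15-minute moving average of 30-second samples for the normal value, a one-minute average for smoothing), the class and method names are hypothetical, and a missing reading falls back to a neutral ratio of 1.0.

```python
from collections import deque

WINDOW = 30   # 15 min of 30-second samples for the "normal" value
SMOOTH = 2    # 1 min of 30-second samples for the smoothed signal

class DetectorPreprocessor:
    """Turns raw 30-second detector readings into near-unity ratio features."""

    def __init__(self):
        self.history = deque(maxlen=WINDOW)   # database of recent readings

    def update(self, reading):
        """reading: latest 30-second value (e.g., occupancy); None if invalid."""
        if reading is None:                   # faulty sensor: neutral features
            return 1.0, 1.0
        self.history.append(reading)
        normal = sum(self.history) / len(self.history)   # estimated normal value
        recent = list(self.history)[-SMOOTH:]
        smoothed = sum(recent) / len(recent)             # 1-minute average
        if normal == 0:
            return 1.0, 1.0
        return reading / normal, smoothed / normal       # raw and smoothed ratios
```

Under steady traffic both ratios hover near 1.0 regardless of the station or time of day, which is what makes them usable with a single global set of thresholds; a shock wave drives the occupancy ratio well above unity.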
The signals from each of the detectors are first stored in a database. The signals are then fed to a smoothing stage in which a one-minute moving average is used to smooth them. Both the smoothed and non-smoothed signals, along with the normal values calculated from the data contained in the database, are fed to the next block. (For every detector station, the smoothed signals are used when the decisions for the upstream zone are being made, and the non-smoothed values are used when the downstream zone is considered; note the difference between the downstream zone and the downstream detector.) In this block, for each zone, the input features of the two classifiers are calculated by dividing the selected smoothed and non-smoothed signals by their respective normal values. The outcome of this block is two sets of non-dimensional numbers whose values should be close to unity under normal conditions. In cases where the sensory measurements are missing or invalid, it is also easy to substitute a value of unity for the missing features. It should be noted that this process is in addition to, and not related to, the "normalization" or "scaling" of input features performed for most neural networks, which serves a different purpose.

Among the eight possible data candidates that could be fed to the classifiers in a two-lane highway, only six have been used for each classifier. The selection of the signals for each classifier is discussed in the following section.

7.3.2 Classifiers

The classifiers in the final form of the proposed system differ from those of the basic version in the number of input features, in the training patterns, and, as a result of the former, in size. The differences and the reasons for them are discussed in this section.
Among the four measured traffic parameters used in the UBC AID system, not every one carries the same information content as to whether or not there is an incident. One could either use all of the available features, or select the ones that truly carry information and set aside those that provide little, to increase the efficiency of the computations. Selecting more features than necessary may also indirectly have an adverse effect on the results in some instances. This is because each additional input increases the number of dimensions of the pattern space; if some part of that space is not adequately represented by the training patterns, the output of the network in the recall stage could be very wrong for patterns close to the neglected part. On the other hand, eliminating an input that may contain the necessary information, even if only in some of the cases, would degrade the resulting performance.

Genetic Algorithms (GA) have been used both to find the optimal set of inputs and to design the neural networks. Genetic algorithms, inspired by evolution and the survival of the fittest, provide optimization tools for many problems where conventional techniques are not applicable. GA differs from traditional optimization methods mainly in that it works with coded parameters and starts from a random population of candidate solutions. Every individual of this initial population, or "first generation", is represented by a numerical string called a "chromosome". To create the next generation, pairs of individuals are first selected, with representation proportional to their performance, or "fitness" as it is termed in GA. The selected parents reproduce new individuals that partially share the genetic information of each parent.
This process is done through "crossover", in which the new chromosomes are derived from splits of the parents' chromosomes. With an often low probability, some of the chromosomes are "mutated", in which a "gene" is randomly changed. The process is repeated for each subsequent generation, with the expectation that each generation brings about better solutions, or "fitter individuals". Although random selection has an essential role in this method, the information about what the best solution should be is transferred and reinforced through the higher probability that the "fitter" individuals have of being genetically represented in the next generation. Comprehensive details of GA are presented by Goldberg (1989).

Full deployment of GA to design optimal neural networks and their inputs would have required extensive computational power and time. However, GA was applied to subsets of the data, and the results showed the relative importance of the input features in each case. Based on the results of these experiments, the following parameters were selected as the input features:

• Speed, occupancy, and volume for detection of shock waves.

• Headway, volume, and occupancy for detection of expansion waves.

These selections make sense in view of the different roles the traffic parameters play in the detection of shock and expansion waves. As discussed earlier, there is some redundancy in the information carried by the volume and headway signals. When detecting shock waves, speed and occupancy are primary features, while volume and headway are of less importance and only one of them needs to be used. On the other hand, when detecting expansion waves, headway and volume are as important as or more important than occupancy, while speed is not important. The latter is because speed barely changes as a result of an expansion wave, as shown in Figure 7.2-d.
The speed of the vehicles after passing an incident will not be higher than the speed of the vehicles prior to the incident unless there was some congestion before the incident occurred.

The other difference between the first version and the final form of the proposed system lies in the training patterns, and it fixes a problem with the former. As stated earlier, the tests of the first version were done only in the same zone as the incident. Considering, for example, only the upstream classifier as trained in the first version: after the alarm is triggered in the zone of the incident, it may also be triggered for the neighboring upstream and downstream zones. The alarms in the neighboring upstream zones are caused by the natural progression of the shock wave upstream and will be discussed in the following section. The alarms in the downstream zones, however, are caused by the upstream classifier confusing the shock wave with the expansion wave.

It was mentioned that, during the training process of a neural network, if the pattern space is not adequately represented by the training patterns, the output of the network for unrepresented patterns could be very different from the expected output. This particularly applies to the portions of such a space that require the network to extrapolate. Presenting only normal and after-shock traffic patterns to the upstream classifier, labeled as two classes, would leave the condition after the arrival of an expansion wave at the mercy of the network. Therefore, in some cases the condition after the arrival of an expansion wave may appear closer to that after the arrival of a shock wave than to normal traffic.

When training the networks, the training patterns are divided into three groups as follows:

A. Normal traffic

B. After shock wave

C.
After expansion wave

To train the first classifier, the first and third groups (A & C) of data are used as the first class (absence of shock wave) and the second group (B) is used as the second class (presence of shock wave). To better reflect the role of this classifier, its name is changed to "Shock Wave Detector", as shown in Figure 7.10. The second classifier, named the "Expansion Wave Detector", is trained in a similar way: in this case, the first and second groups (A & B) are used as the first class (absence of expansion wave) and the third group (C) is used as the second class (presence of expansion wave). In both cases, the output for the first class is set to "0" and for the second class to "1".

7.3.3 Decision Making Block

The decision making block shown in Figure 7.10 has the same role as it did in the first version, but with some added conditions. These conditions are necessary to filter out as many false alarms as possible while enabling the user to set the thresholds loose enough to take advantage of the early indications of incidents.

As mentioned in the previous section, the problem encountered when using the basic form of the proposed system was false alarms in the zones neighboring the one where an incident had occurred. These false alarms can be divided into the following two groups:

• False alarms due to confusion of the waves by the classifiers (shocks detected in downstream zones or expansion waves detected in upstream zones).

• False alarms due to progression of the waves into neighboring zones (shocks detected in upstream zones or expansion waves detected in downstream zones).
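The fix for the first group of false alarms is the three-group labelling described in section 7.3.2. That one-vs-rest assignment can be sketched as follows (a minimal illustration; the actual classifiers in the study are neural networks, and the feature vectors here are placeholders):

```python
# Pattern groups: 'A' = normal traffic, 'B' = after shock wave,
# 'C' = after expansion wave.  Each detector is a binary classifier.

def make_training_sets(patterns):
    """patterns: list of (feature_vector, group) with group in {'A', 'B', 'C'}.

    Returns (shock_set, expansion_set), each a list of (features, label)
    where label 1 marks the wave that detector must recognize, else 0.
    """
    shock_set = [(x, 1 if g == 'B' else 0) for x, g in patterns]      # A & C -> 0
    expansion_set = [(x, 1 if g == 'C' else 0) for x, g in patterns]  # A & B -> 0
    return shock_set, expansion_set
```

Because every group appears in both training sets, neither network is left to extrapolate when it encounters the "other" wave, which is what caused the confusion in the basic version.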
The main cause of the first group of false alarms was the training patterns, and as discussed in the previous section, these can easily be fixed by proper selection of the training patterns. The second group of false alarms, however, as will be shown here, is a natural by-product of using an "OR" gate.

This problem can be explained using the typical incident shown in Figure 7.4. A close look at this figure shows that the expansion wave arrives at stations #4 and #5 roughly 0.5 and 1.5 minutes after the occurrence of the incident. It also shows that the shock wave arrives at stations #3 and #2 roughly 1.5 and 4.5 minutes after the onset of the incident. Therefore, assuming that the specified thresholds are passed at these times, and using an "OR" gate between the outputs of the two classifiers, the incident would be detected in 0.5 minutes in zone #3. However, 1.5 minutes after the onset of the incident there would be a false alarm in zone #4 because of the output of the expansion wave detector for station #4. Similarly, 3 minutes later there would be another false alarm in zone #2 because of the output of the shock wave detector for station #2. Both of these false alarms could simply be avoided by using an "AND" gate, but then the detection of the incident would have been delayed by one minute.

Since the core concept of the proposed system is to save on detection time by using the first piece of evidence that is strong enough, rather than waiting for further indications, using an "OR" gate is essential. It can, however, be temporarily changed to avoid the false alarms caused by the progression of waves. This requires a condition that switches the "OR" gate to an "AND" gate after the source of an alarm has been verified by the operator. This switch needs to be applied to a limited neighborhood and kept active for some time to ensure that it covers the progression of the waves.
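The trade-off can be illustrated with the arrival times quoted above for the Figure 7.4 example. In this sketch, zone i lies between upstream station i and downstream station i+1, so zone #3 is watched by the shock wave detector at station #3 and the expansion wave detector at station #4; the timing table is taken from the text, and everything else is illustrative:

```python
# Minutes after incident onset at which each detector output crosses its
# threshold (from the Figure 7.4 example; absent key = never during the episode).
shock_alarm = {2: 4.5, 3: 1.5}      # shock wave detector, by station
expansion_alarm = {4: 0.5, 5: 1.5}  # expansion wave detector, by station

def detection_time(zone, gate):
    """Time at which the alarm fires for a zone under an 'OR' or 'AND' gate."""
    s = shock_alarm.get(zone)            # upstream station of the zone
    e = expansion_alarm.get(zone + 1)    # downstream station of the zone
    times = [t for t in (s, e) if t is not None]
    if gate == "OR":
        return min(times) if times else None
    return max(times) if len(times) == 2 else None   # AND needs both waves
```

Running this reproduces the narrative: the "OR" gate detects the zone #3 incident at 0.5 min but also fires in zones #4 and #2, while the "AND" gate suppresses those two false alarms at the cost of delaying the real detection to 1.5 min.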
The switching only affects the detection time for secondary incidents that may occur in the neighborhood of the first incident. Although this may at first seem a disadvantage, even using the "AND" gate the UBC method should achieve, on average, about the same detection time as the other methods. This is clearer considering that the higher speed of the expansion waves allows them to reach the detector station sooner. Most importantly, the effects on the overall ADT will be insignificant because, after all, incidents are rare events, and secondary incidents even more so.

Other than the two groups of false alarms discussed above, there is at least one other group that requires adding a number of if-conditions to the decision making block. These false alarms occur mainly in, but are not limited to, cases where an expansion wave is not detected in an incident zone and its progression causes an alarm in the downstream zone. (If the detection first occurs in the incident zone, the alarm would be verified, the switch to the "AND" gate would consequently be applied, and the false alarm would be avoided.) Considerable time and effort were spent on analyzing and finding mechanisms to avoid this and similar types of false alarms.

An examination of the cases with this type of false alarm shows how they can happen. Given the random nature of traffic, it is always possible for an expansion wave to be weakened temporarily by normal fluctuations. On the other hand, it is also possible that the effects of the same wave are strengthened by normal fluctuations when it arrives at the next station. The combination of these two events is not very likely, but it may still cause an unacceptable false alarm rate. This becomes clearer when noticing that about one mistake in every thousand decisions is all that is allowed when it comes to triggering the alarm. It should also be mentioned that although in the majority of cases the source of the expansion wave was an incident, in the rest the expansion wave was there for no apparent reason other than the normal disturbances present in traffic.

An increase in the threshold level may seem a solution at first, but experiments showed that raising it enough to bring the FAR to an acceptable level would considerably reduce or even eliminate the advantage expected from the UBC AID method. After a number of trials, a double-threshold approach proved to be an effective way of avoiding most of these false alarms with minimal effects on ADT or DR. In this approach, a higher threshold T_(H,E) represents the value above which there is enough certainty that an expansion wave exists. The other (lower) threshold T_(L,E) represents the value below which there is enough certainty that either a normal condition or a shock wave exists. The values between these two thresholds represent a gray zone where neither of the above statements can be made with adequate certainty. To attribute an expansion wave to an incident in zone i, both of the following conditions must be met:

EWDO_(i+1) > T_(H,E)   and   EWDO_i < T_(L,E)

where,

EWDO_i = output of the expansion wave detector for station i (upstream of zone i)
EWDO_(i+1) = output of the expansion wave detector for station i+1 (downstream of zone i)

The first condition is similar to that used for the basic version, while the second ensures that no expansion wave is present at the upstream station. This is because, after the occurrence of an incident, the traffic parameters upstream of the incident should be at about either normal or congested levels.
Any indication of the existence of an expansion wave upstream of the zone causes the rejection of a potential alarm. It can be clearly seen that a double standard is effectively applied to the expansion waves on the two sides of the zone being examined.

A similar approach can also be substituted for the single threshold used in the basic version for shock waves. The two conditions to be met for the shock wave detector outputs are:

SWDO_i > T_(H,S)   and   SWDO_(i+1) < T_(L,S)

where,

SWDO_i = output of the shock wave detector for station i
SWDO_(i+1) = output of the shock wave detector for station i+1
T_(H,S), T_(L,S) = high and low thresholds for the shock wave detector, respectively

The above measure is also helpful against the false alarms caused by slowly developing compression waves and by recurrent congestion. Although the latter is mainly taken care of by using the expected normal values, there are cases in which this measure reduces the FAR.

An "OR" gate is then used between the results of the above conditions, unless the condition for switching the gates is active in the zone. It may appear that with the above conditions the switching would not be necessary, because alarms should not be caused by the progression of waves anyway. However, experiments showed that, depending on the thresholds, this is not always true and some cases escape the checks (mostly in the downstream direction). Experiments also showed that it is more beneficial to use the "AND" gate than to select tighter thresholds. This is easy to understand considering that any change in a threshold has some cost, even if low, while the effects of using an "AND" gate arise only on very rare occasions.

It should be emphasized again that, due to the random nature of traffic, no matter what conditions are selected there will always be extreme cases of false alarms and/or missed detection opportunities.
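Putting the pieces together, the per-zone decision can be sketched as below. The threshold values are illustrative placeholders, not the calibrated ones, and the function signature is hypothetical:

```python
# Illustrative high/low threshold pairs for each detector (placeholders).
T_H_E, T_L_E = 0.8, 0.5   # expansion wave detector
T_H_S, T_L_S = 0.8, 0.5   # shock wave detector

def zone_alarm(swdo_i, swdo_i1, ewdo_i, ewdo_i1, and_gate=False):
    """Alarm decision for zone i (between stations i and i+1).

    swdo_i / swdo_i1: shock wave detector outputs at stations i and i+1
    ewdo_i / ewdo_i1: expansion wave detector outputs at stations i and i+1
    and_gate: True once a verified alarm nearby has switched "OR" to "AND"
    """
    shock = swdo_i > T_H_S and swdo_i1 < T_L_S        # wave seen upstream only
    expansion = ewdo_i1 > T_H_E and ewdo_i < T_L_E    # wave seen downstream only
    return (shock and expansion) if and_gate else (shock or expansion)
```

The double thresholds implement the "double standard": a strong indication on the correct side of the zone is required, together with the clear absence of the same wave on the other side, before the evidence is attributed to an incident in that zone.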
The above measures were aimed at avoiding the false alarms with the higher probability or frequency of occurrence. An effort was also made to keep the number of thresholds and parameters to a minimum. Consequently, the proposed system requires only two pairs of thresholds to be set. These thresholds need to be set after the classifiers have been trained with a diverse set of training patterns. Since each classifier works independently, the pair of shock wave thresholds and the pair of expansion wave thresholds act independently. This allows the user to set the thresholds by examining the expected performance measures while changing them in pairs.

CHAPTER 8
DISCUSSION OF RESULTS

In this chapter, the results of the final form of the proposed AID system are presented. First, the results are compared to those of the California#8 and McMaster methods for the second set of data series, to show the advantages of the proposed system. Later, the robustness of the proposed method is discussed as more detailed results are presented.

8.1 Comparison of Performances

Because of its multi-criteria nature, the performance of AID systems can be compared in various ways. Individual figures that are sometimes published in papers only reflect instances of the performance and are not very useful. A better way, often adopted in the literature, is to draw the operating characteristics as discussed earlier: the detection rate and the average detection time as functions of the false alarm rate. In this section, after the characteristic curves are presented, new ways of comparing the performance are explored through which a different insight is gained.

The detection rate and average detection time are shown in Figure 8.1 as functions of false alarm rate for the UBC and the other two methods. An important point to consider when interpreting the results is that in these diagrams the FAR is used in a pseudo-independent-variable sense.
The calibration parameters and thresholds are the real independent variables that are set by the user. The values of FAR, as well as of DR and ADT, are calculated for each run of data. The critical role of the FAR puts it on the abscissa of the diagrams, but it is possible to arrive at more than one value of DR or ADT for a single FAR, because more than one threshold or parameter is involved. The operating characteristic curves therefore show, roughly, an "envelope" of the achieved performances, connecting the best results in each case.

[Figure 8.1: DR (70-100%) and ADT (0-4 min) plotted against FAR (0-0.15%) for the California#8, McMaster, and UBC methods.]

Figure 8.1. Comparison of the DR and ADT as a Function of FAR (XXXAXX)

The top part of Figure 8.1 shows the detection rate after 10 minutes from the onset of the incident for the range of FAR that is of interest. In this range of FAR, California#8 achieved a higher detection rate than the McMaster and UBC methods. A visual check of the signals shows that in about 97% of the cases the shock wave reached the upstream detector station within the 10-minute time window, while the rest left no significant effect on the signals. Therefore, California#8 detected almost all of the detectable incidents, while the UBC method detected 2-4% fewer incidents. The seemingly inferior results of the UBC method may appear inconsistent with those presented for the basic version; why this inconsistency exists, and why it is of no significance, will be discussed later.

The ADT of the incidents detected within the 10-minute window is presented in the bottom part of Figure 8.1. In sharp contrast to the diagrams for the detection rate, when considering the ADT the UBC AID system has significantly outperformed the other methods.
The A D T found for other California#8 and McMaster are 4 0 - 1 0 0 % higher than those o f U B C method depending on the F A R and method. A visual check similar to what was done for the first series o f data sets shows that the congestion caused by the incidents takes about 3 minutes to reach the upstream station. This shows again that effective use o f information carried by expansion waves enables the U B C system to surpass what is a limiting bound for other methods.  The operating characteristic curves are informative but do not provide a full picture for a comparison. I f A I D was the sole means o f detection, then these curves were more useful but given that there are other means o f detection as well, the D R curves could be misleading. The  152  value o f the D R does not represent the contribution o f the A I D component in the whole detection system. This is clearer considering that the values o f D R and A D T are defined for a specified period o f time after the occurrence o f the incident, but their dependence is not observed in the operating characteristic curves. During most o f the daytime, any lane-blocking incident is expected to cause a disruption strong enough to trigger the alarm o f A I D methods at some point. What is important then is i f a high number o f such cases are detected first by that A I D system to justify its use.  To include the time element and compare the results in a more meaningful way, use o f different diagrams is suggested here to provide insight into the history o f detection. First, Figure 8.2 shows the number o f cases detected for each time period after the occurrence o f the incident is drawn for each o f the three methods. In this chart and the next two charts , a 53  fixed F A R , about 0.1% in this case, are used to make the curves comparable. Since finding exactly the same number o f false alarms, when considering the 54000 decisions involved, is very time consuming, close enough F A R have been used here . 
It is very clear that most of the incidents detected by the UBC AID method are detected in the second and third time periods (i.e., 1-1.5 minutes after the onset of incident). In the other methods, the detection starts from the third and fourth time periods and continues for the next 6-10 time periods as the presence of the shock waves is sensed. The decision tree in California#8 is such that one time period is essentially used as a persistence check, while in McMaster, an observation of congestion is ignored unless the condition remains congested for the next two periods. This causes a time lag of one time period between the curve of McMaster[55] and that of California#8 that can be easily observed.

[53] Figure 8.3 and Figure 8.4.

[54] The actual values are 0.106%, 0.115%, and 0.102% for the California#8, McMaster, and UBC methods respectively. Although none of the systems is sensitive to such a small difference in this range of FAR, the FAR for UBC was intentionally selected to be lower than the others to remove any doubt.

Figure 8.2. Comparison of the Number of Incidents Detected as a Function of Time Elapsed After the Onset of Incident (XXXAXXe) [number of incidents detected in each time period after the incident, for the UBC, McMaster, and California#8 methods]

Figure 8.3 presents the history of the ADT for the three methods and shows the dependence of the ADT values on how long the process has been considered. As seen in this figure, the curves eventually reach some value asymptotically, but before then, the ADT depends on where the detection has stopped. This also explains some apparent differences between the comparison results presented in section 7.2.5 and those presented in this section, as the former were calculated for the first five minutes after the onset of incidents.

[55] Experiments with the first series of data sets suggested that the McMaster method would benefit from using at least one less persistence check.
Removing the persistence check would increase the FAR, but this could be compensated for by changing the calibration parameters, ending up with the same FAR and a better ADT (and sometimes DR).

Figure 8.3. Comparison of the ADT as a Function of Time Elapsed after the Onset of Incident (XXXAXXe) [ADT (min) against time after the onset of incident for the three methods]

The detection rate history for a given FAR, as shown in Figure 8.4, is suggested by the author as the most meaningful comparison between the AID methods. If drawn for various FARs, it could be used as the single most informative picture of AID system performance. As discussed in section 3.3, since many of the incidents will be detected through other means before they are detected by the AID system, the detection rate is only important when calculated for the first few minutes.

Figure 8.4 clearly shows that about three-quarters of the incidents have been detected by the UBC AID within a minute and a half after their occurrence (well before the congestion is sensed at the upstream station), while on average it has taken about three more minutes for the other methods to reach the same level of detection. It is also easy to consider an example in which, as suggested by the traffic management authorities in Seattle, the cellular calls would be made within the first 2-3 minutes. While the majority of incidents are detected by this time through the UBC method, the other methods have detected 30~60% fewer cases by this time.

Figure 8.4. Comparison of the DR as a Function of Time Elapsed after the Onset of Incident (XXXAXXe) [DR (%) against time after the onset of incident for the three methods]

Figure 8.4 also clarifies the apparent difference between the performances presented in section 7.2.5 and those presented earlier in this section.
It is easy to see that for the first 7-8 minutes the UBC method leads the other methods, while after that it marginally lags the others. From the appearance of the curves, it is reasonable to assume that using any data beyond ten minutes would not significantly change the detection rates. Given the above discussions about the role of other means of detection, it is reasonable to assume that the final DRs do not represent the degree of contribution of the AID systems to the whole detection system. Therefore, contrary to the DR values for the first few minutes, the final values are of no significance.

8.2 Detailed Performances

In this section, more detailed results will be presented. They provide a better picture of the three performance measures of the UBC AID system as a function of time and location.

In all of the charts presented in this section, the distributions of performance measures were calculated with the set of thresholds that caused a FAR of about 0.1%[56] when calculated for all of the data series. The reason for this selection was to be able to include as many false alarms as possible while remaining within the desired range of false alarm rates. Higher numbers of false alarms are needed to make the values presented for each hour or zone in the FAR charts more meaningful. To maintain consistency, the same set of thresholds was used for all the rest of the charts.

In the top chart of Figure 8.5, the detection rate as a function of incident zone and lane is presented along with the average values. Each column represents 27 incident cases per lane. It shows that the DR does not change much with incident zone. The detection rates in Zone#1 and Zone#2 are only slightly less than the average. This is reasonable considering that, as shown in Figure 5.2, 10 out of 12 incident locations in these zones carry the least volume, which at 11 a.m. is lower than 1000 veh/hour/lane.
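Breakdowns such as those in Figure 8.5 amount to grouping the per-incident outcomes and computing a detection rate within each group. A minimal sketch, with a hypothetical record format (zone, lane, detected-within-window flag) that is an assumption for the example, not the format used in the study:

```python
from collections import defaultdict

# Hypothetical incident records: (zone, lane, detected within the window).
incidents = [
    ("Zone#1", 1, True), ("Zone#1", 1, False), ("Zone#1", 2, True),
    ("Zone#2", 1, True), ("Zone#2", 2, True), ("Zone#2", 2, True),
]

def dr_by(records, key):
    """Detection rate (%) per group, as in the bar charts of Figure 8.5."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        group = key(rec)
        totals[group] += 1
        hits[group] += rec[2]          # True counts as 1, False as 0
    return {g: 100.0 * hits[g] / totals[g] for g in totals}

print(dr_by(incidents, key=lambda r: r[0]))  # DR per incident zone
print(dr_by(incidents, key=lambda r: r[1]))  # DR per lane
```

The same grouping, keyed on incident location within a zone or on time of day, yields the other two charts of the figure.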
The middle chart of Figure 8.5 similarly shows the variation of DR as a function of incident location within the zones. The labels "U", "D", and "M", as shown in Figure 5.2, represent incident locations closer to the upstream station, closer to the downstream station, and in the middle of the two stations respectively. Each column represents 45 incident cases per lane. As expected, no significant difference can be observed among the values calculated for each column.

[56] The actual value is 0.102%. Please see footnote 54 and Figure 8.1.

Figure 8.5. Detection Rates as a Function of Incident Zone, Incident Location, and Time of Day [three bar charts: DR (%) by incident zone, by location within a zone (U, M, D), and by time of day, for Lane#1 and Lane#2 along with the averages]

The variation of the DR with time of day is shown in the lower chart of Figure 8.5, where each column represents 15 incident cases per lane. In this chart, the columns corresponding to 11 a.m. show about 15% lower detection rates than the average, while the rest of the columns have performed either about or above average. This is reasonable considering that at that time, the volume in many segments of the freeway is very low, and therefore no significant effect would be caused by an incident. The increase in volume, up to a certain point, increases the likelihood of the incidents being detected by virtually all indirect methods. Consequently, a similarity between this chart and the curve showing the weighted average volumes[57] in Figure 8.6 can easily be seen. The chart in this figure also shows the volumes for the freeway segments with the highest and lowest values.
The lowest values in this chart correspond to Zone#1 and the part of Zone#2 that falls between the Boundary off-ramp and the Grandview on-ramp. As stated above, the incidents that were simulated for 11 a.m. in this segment of freeway occur at a volume of less than 1000 veh/hour/lane and have little or no impact on the traffic. These incidents remain virtually undetectable for all of the indirect methods. Therefore, this value of the lane volume can be considered the lower bound of the effective operational range suggested by the results of the simulation.

As shown in the charts of Figure 8.5, there is no significant difference between the detection rates for the various lanes. The actual detection rates for Lane#1 and Lane#2 are 92.6 and 94.8% respectively. The fact that the DR of Lane#2 is slightly higher, or, as will be shown, its ADT is slightly lower, could be attributed to the higher degree of uniformity that can be observed in the traffic flow of Lane#2.

[57] To calculate a representative value for each hour, considering that volumes vary with both time and location, a weighted average value has been calculated. The length of each segment of the freeway with a fixed volume has been used as the weight for its volume. The summation of all such values for all segments is then divided by the site length to find the weighted average.

Figure 8.6. Distribution of Highest, Lowest, and Weighted Average Volumes of the Study Site as a Function of Time of Day [lane volumes (veh/hour/lane) from 8 a.m. to 5 p.m.]

Figure 8.7 shows the distribution of false alarm rates experienced in various zones[58] and times of day. In the top chart, where each column represents 5,400 decisions, there are some minor deviations from the average level of false alarm rates.
While the more perturbed flow of traffic in the first two zones may have been a contributing factor, the deviations could also be assumed to be natural random variations. Furthermore, considering the small number of false alarms represented in each column compared to the number of decisions involved, the reliability of the decisions made can be assumed acceptable.

[58] The zones refer to where the alarm is triggered and not, as in the other charts, the zone where an incident has been simulated.

As opposed to the top chart in Figure 8.7, the bottom chart shows a considerable variation of false alarm rate with time of day. In particular, the FAR at 8 a.m. is about three times the average level. It should be noted that at this time of day, a volume of about 2100 veh/hour/lane is experienced in a segment of the freeway. At such a volume, the congestion is often enough to cause frequent compression waves, which can be verified by a visual check of the detector readings.[59] Adopting the enhancements discussed in sections 7.3.1 and 7.3.3 has allowed a considerable drop in the number of false alarms for this volume while maintaining a very low ADT for the entire range of volumes and locations. The level of FAR for 8 a.m., although representing only one false alarm for every 300 decisions, may be too high for operational purposes. Therefore, this volume can be considered the upper bound of the operational range of the proposed system at this stage, though some suggestions that will be discussed in section 9.2 may increase this limit. There are a number of points that should be considered on the implications from an operational point of view:

• If the data sets representing 8 a.m. (which constitute one-ninth of the data sets) are excluded, the average level of FAR would be about 30% less, while the ADT and DR would remain about the same. Therefore, if the entire range of times of day (e.g., 6 a.m. to 10 p.m.)
is considered, the average level of FAR is expected to be lower while the DR and ADT are expected to remain about the same.

[59] It should also be noted that for the range of FAR levels tested (i.e., 0.03%~0.1%), the FAR figures corresponding to 8 a.m. for the other two methods were higher than for the UBC method. A visual check of the individual signals for the cases that had caused false alarms showed that the flow condition is very unstable, which, given the chosen speed (50 miles/hour), is not surprising. Therefore, a high number of incident-like patterns can be observed which, depending on the method and the selected thresholds, may or may not trigger an alarm.

• During the morning and afternoon rush hours, the probability of having an incident as well as the seriousness of its effects are higher, and therefore a larger number of operators are expected to work. This in turn may increase the level of tolerance for false alarms at these times.

• Although there has been an emphasis on using one set of thresholds for the entire site and all times of day, if the latter point proves to be insufficient, then it is better to use a different set of thresholds only for the fraction of time where the volume reaches a certain limit.

Figure 8.7. Distribution of False Alarm Rates Experienced in Various Zones and Times of Day [two bar charts: FAR (%) by alarm zone (Zone#1 to Zone#5) and by time of day, each with the average FAR]

It should be noted that, as opposed to the DR and ADT, which are functions, even though weakly, of the incident zone, the location within each zone, and the lane, the FAR has no relationship to these factors, and any variation in it should be assumed to be a random occurrence. Therefore, the FAR has not been drawn as a function of these factors.
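The length-weighted average volume plotted in Figure 8.6 and described in footnote 57 weights each constant-volume segment's volume by its length and divides by the total site length. A small sketch of that calculation; the segment lengths and volumes below are made-up values, not the study-site data:

```python
# Length-weighted average volume, as described in footnote 57.
# (length in km, volume in veh/hour/lane) -- made-up segment data.
segments = [(1.2, 900), (0.8, 1500), (2.0, 1800)]

def weighted_average_volume(segments):
    """Weight each segment's lane volume by the segment length,
    then divide by the total site length."""
    site_length = sum(length for length, _ in segments)
    return sum(length * volume for length, volume in segments) / site_length

print(weighted_average_volume(segments))  # 1470.0 veh/hour/lane
```

One such value per hour of the day gives the "Weighted Average" curve of Figure 8.6.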
Figure 8.8 presents the average detection time as a function of incident zone and location as well as time of day. A general look shows that the charts in this figure have higher variation than the charts in Figure 8.5. This is reasonable because the DR depends only on the number of detected incidents at the end of the 10-minute period, irrespective of how long the detection time was, and therefore the DR should show less variation than the ADT.

The top chart in Figure 8.8 shows the variations by zone, where each column represents 27 incident cases per lane. The first two zones, which have the highest degree of change in traffic volume, as well as the segment in which the lowest volume occurs, show an ADT that is higher than average. On the other hand, Zone#5, which has no on-ramp or off-ramp, shows the lowest ADT. In this and the other two charts, the variations with incident lane, although higher than those of the DR for the reason mentioned above, are generally of reasonable degree. On average, the ADT for incidents in Lane#2 is about 14% lower than the ADT for incidents in Lane#1. This is consistent with the DR and the fact that Lane#2 has a more uniform and less disrupted flow of traffic.

Figure 8.8. Average Detection Time as a Function of Location and Time of Day [three bar charts: ADT (min) by incident zone, by location within a zone (U, M, D), and by time of day, for Lane#1 and Lane#2 along with the averages]

The ADT for the various locations within each zone is presented in the middle chart of Figure 8.8. Each column represents 45 incident cases per lane.
As expected, the "M" location, where on average the incident effects have to travel the longest distance, has the highest ADT. The incidents that occurred closer to the downstream station have the lowest level of ADT.

The bottom chart in Figure 8.8 shows the variation of ADT with time of day, where each bar represents 15 incident cases per lane. When compared to the bottom chart in Figure 8.5, it shows consistency in the fact that performance at 11 a.m. is at its lowest and generally improves as volume increases. This is additional evidence that while volumes as low as 1000 veh/hour/lane can be handled by the proposed system, lower volumes may cause a considerable increase in ADT and/or decrease in DR.

Another perspective can be attained by analyzing the results based on the contributions of the two classifiers.[60] To do this, the two charts of Figure 8.9 are presented, in which each pair of bars represents 90 incidents to be detected. The top chart of this figure shows the number of incidents detected by each of the two classifiers. Whenever the detection of an incident has been triggered by both classifiers at the same time, that incident has been counted as half for each classifier. It shows that while, in general, a higher number of incidents have been detected by the EWD, the number of incidents detected by both the EWD and the SWD increases as the location moves closer to the respective detector station (i.e., the upstream station for the SWD and vice versa). This is as expected, as is the observation that a much larger share of the incidents at location "M" are detected by the EWD.

Figure 8.9.
Number of Incidents Detected and Their ADT for Each Classifier and Location within a Zone [two bar charts: number of incidents detected, and ADT (min), each by location within a zone (U, M, D) and by classifier (SWD, EWD), along with the averages]

Similarly, the bottom chart of Figure 8.9 shows the ADT as a function of location and the classifier by which the incidents were detected. When averaging for each classifier, the detection time[61] of incidents whose alarms were triggered by both classifiers at the same time has been weighted down by half for each classifier. Again, this chart shows that the EWD has been more successful in achieving a lower ADT. It is also clear that the ADTs for the various zones, when detected by the EWD, increase slightly with distance from the downstream station, while in the case of the SWD there is a major difference between the values representing locations "M" and "U". The reason for both of these observations is the higher speed of the expansion waves compared to the often much lower speed of the shock waves. The ADT achieved by the EWD[62] is more than 25% lower than that obtained by the SWD.

[60] The Shock Wave Detector (SWD) and the Expansion Wave Detector (EWD).

These charts provide further evidence that an independent exploitation of the information carried by the expansion wave has a significant effect on the performance of the AID system. It is clearly shown that use of the EWD increases the likelihood of detecting the incident even before the smallest effects have been sensed by the upstream station. This is in addition to the fact that the proposed architecture is such that one wave detector complements the other. It should be noted that in the charts of Figure 8.9, the values do not represent potential detections or misses by each classifier, but rather show which one has successfully detected an incident first.
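The half-credit attribution rule described above for Figure 8.9 — credit the classifier that triggers first, and count a simultaneous trigger as half for each — can be sketched as follows. The trigger times are made-up values (minutes; None means that classifier never triggered), not data from the study:

```python
# Attribution of detections to the two classifiers, with the half-credit
# rule described for Figure 8.9. Times are made-up (minutes after onset).
cases = [(2.1, 1.4), (1.8, 1.8), (None, 1.2), (2.5, None)]  # (SWD, EWD)

def attribute(cases):
    counts = {"SWD": 0.0, "EWD": 0.0}
    for swd, ewd in cases:
        if swd is None and ewd is None:
            continue                      # missed by both classifiers
        if ewd is None or (swd is not None and swd < ewd):
            counts["SWD"] += 1            # shock wave detector fired first
        elif swd is None or ewd < swd:
            counts["EWD"] += 1            # expansion wave detector fired first
        else:
            counts["SWD"] += 0.5          # simultaneous: half credit each
            counts["EWD"] += 0.5
    return counts

print(attribute(cases))  # {'SWD': 1.5, 'EWD': 2.5}
```

The same half weighting is the rule described for averaging the per-classifier detection times in footnote 62.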
One can roughly assume that using the SWD and EWD in conjunction with each other is like using a 3-4 times smaller spacing between stations, without the drawbacks associated with it (i.e., higher cost and a larger number of false alarms due to the increased number of decisions made). If one divides the spacing between neighboring stations by three or four, the closest third or quarter of the spacing can roughly be assumed to be the detection domain of the SWD, and the rest that of the EWD (as observed in Figure 8.9). This assumption leads to the expectation that the contribution of the EWD be as much as 2~3 times that of the SWD, as is presented in Figure 8.10. This chart simply shows the percentage contribution of each classifier to the detection of incidents and summarizes much of the above discussion.

[61] The value for the SWD at location "D" is not shown because it represents only one case and certainly is of no statistical value.

Figure 8.10. Percentage Contribution of the Two Classifiers to Incident Detection [pie chart: By Expansion Wave Detector 62%, By Shock Wave Detector 30%, By Both Detectors 8%]

A similar chart, Figure 8.11, shows the contribution of each classifier to the false alarms experienced over the 54,000 decisions made. The false alarms are shared almost equally by the two classifiers. This, to a high degree, is arbitrarily set by the user who selects the calibration thresholds. Since the two sets of thresholds are mainly independent of each other, it is possible to determine an approximate ratio for the number of false alarms caused by each classifier.

[62] The actual values (averaged in the sense described for the charts of Figure 8.9) are 2.07 and 1.58 minutes for the SWD and EWD respectively. The same numbers, if averaged independently for the incidents detected by both, would be 2.10, 1.53, and 1.98 minutes for the SWD, the EWD, and both respectively.
Although the two numbers were selected to be about the same, the expected optimal ratio would depend on the sensitivity of the ADT (or the DT for the first few minutes) to changes in the FAR of each classifier (around the desired operational values). A few experiments showed that for the values at hand, ratios close to unity were appropriate.

Figure 8.11. Percentage of False Alarms Caused by Each Classifier [pie chart; By Both Detectors 4%, with the remainder shared almost equally by the two classifiers]

CHAPTER 9
CONCLUSIONS AND FURTHER RESEARCH

This chapter presents the conclusions of the research, a number of research suggestions to follow up on this study, and further enhancements of the UBC AID system.

9.1 Conclusions

Through a comprehensive survey of the existing AID methods, it was shown that despite the diversity of the approaches taken by researchers in this area, there is a certain limit to the performance of the developed systems. It was shown that through effective use of the information carried by the expansion waves, which has been overlooked by other researchers, a significant improvement in the performance of an AID system can be achieved. A new approach for effective use of this information was proposed and its feasibility was demonstrated. To do this, two of the existing and "in use" algorithms were also assessed in a similar way. Using the conventional methods of presenting performance measures, the superiority of the proposed system was shown.

It was also argued that the conventional methods of presenting the performance of AID systems are not necessarily the best way to assess them and may lead to overestimation of their practical effectiveness. Therefore, rather than simple tabulations of the three traditionally used performance measures, or drawings of operating characteristic curves, new historical
M o s t importantly, they better demonstrate the effectiveness and operational significance o f using A I D along w i t h the other detection means available.  T h e results o f the detailed analysis s h o w e d that although there are natural variations in the performance  of  the  U B C A I D system  with  operating  and  geometric  conditions,  the  performance is mainly consistent. It was also s h o w n that a w i d e range o f traffic v o l u m e s c a n be a c c o m m o d a t e d b y the system while maintaining an acceptable performance. F o r v o l u m e s l o w e r than 1000 V e h / h o u r / l a n e , the A D T and D R may start to deteriorate simply because the effects are often v e r y insignificant. O n the other hand, v o l u m e s above 2 1 0 0 V e h / h o u r / l a n e m a y cause t o o m a n y false alarms for the system at present stage. Therefore, these t w o values can be considered as approximate operational limits for the p r o p o s e d system at present. T h i s covers 7 0 - 9 0 % o f the daily traffic and represents virtually all o f the c o m m u t i n g traffic.  9.2 Further Research In this section, some specific areas are suggested for further research o n  enhancements,  optimization o f the p r o p o s e d A I D system and further verification o f its performance. T h e f o l l o w i n g ideas either have emerged during the course o f this study o r are steps that c o u l d be taken in the continuation o f the research.  172  While the number o f lanes in the proposed site was fixed, the applicability o f using the U B C A I D system for a more general case requires further experiments. This is because in the proposed system, the number o f inputs for each o f the two classifiers is proportional to the number o f lanes. The original idea to tackle this issue is that since the inputs represent the discrepancy from a normal condition, the inputs corresponding to the non-existing lanes could simply be assumed as unity. 
This however, may or may not be appropriate for the transitional segments o f the highway where much o f the merging takes place. I f the latter proves to be not effective enough, then more than one set o f networks could be used. Obviously, the analysis and assessment o f these alternatives require proper site selection and design o f the simulation models. In this study, equal spacing between the detector stations was used which may not necessarily lead to the optimum use o f the facilities. It appears that at least segments o f the freeway where there are no ramps, the spacing could be larger than in the other segments. This and other scenarios can be examined with a more detailed simulated data sets specifically designed for this purpose.  The traffic parameters in this study were divided by their expected normal values to obtain non-dimensional values. This procedure contributed significantly to the robustness o f the system and the ability to use only one set o f thresholds for the entire site and all times o f the day. T o further improve this idea, it seems reasonable to take into account the expected level o f noise at any given time and location as well. F o r example, a 30% discrepancy between the normal and  173  measured value o f a traffic parameter might be considered normal at some point or certain time, while 10% could be considered abnormal at another point or time. T o do this, one may use an estimate o f the expected noise level by calculating the standard deviation o f the readings for the last 15 minutes. The 15-minute period stated here is only a suggestion (to be consistent with the period used for estimating the normal values) and obviously, i f the idea proves to be beneficial, the proper duration for both periods can be further examined. Replacing the S W D and its part o f preprocessing stage with an existing method such as a modified form o f McMaster method can be considered. 
Depending on the results o f some o f above suggestions it might be possible to obtain a better performance by this replacement. In such a case the suggested three period persistence check for the McMaster method may have to be reduced. T o compensate for the higher F A R caused by this change, the calibration also needs to be changed accordingly.  174  BIBLIOGRAPHY  Ahmed, M . S., and Cook, A . R . , "Analysis o f Freeway Traffic Time-Series Data by Using Box-Jenkins Techniques", Transportation Research Record. N o . 722., (1977), pp. 1-9. Ahmed, S. A . , and Cook, A . R . , "Application o f Time-Series Analysis Techniques to Freeway Incident Detection", Transportation Research Record. N o . 841., (1982), pp. 19-21. Aultman-Hall, L . , el al, " A Catastrophe Theory Approach to Freeway Incident Detection", Applications o f advanced technologies in transportation engineering: proceedings o f the second international conference. N e w Y o r k , N . Y . , (1991), pp. 373-377. Balke, K . N . , A n Evaluation o f Existing Incident Detection Algorithms. Research report / (Texas Transportation Institute); 1232-20. (1993) Balke, K . N . , and Ullman, G . L . , Method for Selecting Among Alternative Incident Detection Strategies. Research report (Texas Transportation Institute) ;1232-12. (1993) Bell, M . G . H . , and Thancanamootoo, B . , "Automatic Incident Detection within Urban traffic Control Systems", Strafien und Verkehr 2000. Berlin, Germany, V . 4 : 2 , (1988), pp.35-39. Bielefeldt, C , "Automatic Incident Detection and Driver Warning in P O R T I C O " , Advanced Transport Telematics: Proceedings o f the Technical Days. Brussels, Volume II - Project Reports, (1993), pp. 282-286. Blissett, R. I , Stennett, C , and Day, R. M . , " N e w Techniques for Digital C C T V Processing in Automatic Traffic Monitoring", I E E E - I E E Vehicle Navigation & Information Systems Conference. Ottawa. (1993), pp. 137-140. Blosseville, J. M . , Krafft, C , Lenoir, F . 
, and M o t y k a V . , " T I T A N - A Traffic Measurement System Using Image Processing Techniques", I E E 2nd International Conference on Road Traffic Monitoring. London U K . (1989), pp. 84-88. Blosseville, J. M . , M o r i n , J. M . , and Lochegnies, P., " V i d e o Image Processing Application: Automatic Incident Detection on Freeways", Pacific R i m TransTech Conference. Seattle, Washington, Proceedings, V o l . 1,(1993), pp. 77-83. Bottger, R . , E i n Verfahern zur Mefitechnischen Feststellung von Verkehrosstorungen auf Frenstrafien and Autobahnen, Strafienverkehr-stechnik. Heft 6 (In German). (1979)  175  Bretherton, R . D . , " M O N I C A - System Integration for Incident - Congestion Detection and Traffic Monitoring", I E E Colloquium, n 020 . (1990) Bretherton, R. D . , and Bowen, G . T., "Incident Detection and traffic monitoring in urban areas", D R I V E Conference (Brussels, Belgium). Advanced Telematics in Road Transport. V o l . I , (1991),/?/?. 740-751. Busch, F . , Automatische Storungserkennung auf Schnell-verkehrsstrafien - ein Verfahrensvergleich.. P h D Thesis at the University o f Karlsruhe, West Germany. (In German). (1986) Busch, F . , and Fellendorf, M . , "Automatic Incident Detection on Motorways", Traffic Engineering & Control. V o l . 31, no. 4., (1990), pp. 221-227. Chang, E . C . P. and Wang, S. H , "Improved Freeway Incident Detection Using Fuzzy Set Theory', Transportation Research Board. Paper no. 940603 , Texas Transportation Institute. (1994) Chang, E . C . P., " A Neural Network Approach to Freeway Incident Detection", Vehicle Navigation and Information Systems Conference (3rd : 1992: Oslo, Norway)., (1992), pp. 641-647. Chang, G . L . , Payne, H . J., and Ritchie, S. G . , Incident Detection Issues Task A Report: Automatic Freeway Incident Detection, A State-of-the-Art Review. Draft Interim Report. Prepared for Federal Highway Administration. (1993) Chassiakos, A . P . 
, Spatial-Temporal Filtering and Correlation Techniques for Detection o f Incidents and Traffic Disturbances. Thesis (Ph.D.)--University o f Minnesota. (1992) Chassiakos, A . P . , and Stephanedes, Y . J., "Detection o f Incidents and Traffic Disturbances in Freeways", Pacific R i m TransTech Conference. Seattle, Washington, Proceedings, V o l . 1., (1993), pp. 407-412. Chassiakos, A . P., and Stephanedes, Y . J., "Smoothing Algorithms for Incident Detection", Transportation Research Record. N o . 1394., (1993), pp. 8-16. Chen, C . H , and Chang, G . L . , " A Dynamic Real-Time Incident Detection System for Urban Arterials System Architecture and Preliminary Results", Pacific R i m TransTech Conference. Seattle, Washington, Proceedings, V o l . 1., (1993),/?/?. 98-104. Chen, C . H . , and Chang, G . L . , " A Self-Learning System for Real-Time Incident Detection and Severity Assessment: Framework and Methodology", International Symposium on Automotive Technology & Automation (26th : 1993 : Aachen, Germany., (1993), pp. 175182.  176  Cohen, S., and Ketselidou, Z . , " A Calibration Process for Automatic Incident Detection Algorithms", International Conference on Microcomputers in Transportation (4th: 1992 : Baltimore, M d . N e w Y o r k , N . Y . : American Society o f Civil Engineers, c l 9 9 3 . , (1993), pp. 506-515. Collins, J. F . , Automatic Incident Detection - Experience with T R R L Algorithm H I O C C . TRPvL Supplementary Report 775, Transport and Road Research Laboratory, Crowthorne, Berkshire. (1983) Collins, J. F . , Hopkins, C . M . , and Martin, J. A . , Automatic Incident Detection - T R R L Algorithms H I O C C and P A T R E G . T R R L Supplementary Report 526. Transport and Road Research Laboratory, Crowthorne, Berkshire. (1979) Cook, A . R . , and Cleveland, D . E . , "Detection o f Freeway Capacity- Reducing Incidents by Traffic-Stream Measurements", Transportation Research Record. No.495., (1974), pp. 1-11. Courage, K . G , and Levin, M . 
, A Freeway Corridor Surveillance, Information, and Control System, Texas Transportation Institute, Texas A&M University, College Station, Research Report 488-8, (1968).

Cremer, M., "Incident Detection on Freeways by Filtering Techniques", Preprints of the 8th IFAC Congress, Kyoto, Japan, (1981).

Dickinson, K. W., and Wan, C. L., "Road Traffic Monitoring Using the TRIP-II System", IEE 2nd International Conference on Road Traffic Monitoring, London, UK, (1989), pp. 56-60.

Dickinson, K. W., and Waterfall, R. C., "Video Image Processing for Monitoring Road Traffic", IEE International Conference on Road Traffic Data Collection, London, (1984), pp. 105-109.

Dillon, D. S., and Hall, F. L., "Freeway Operations and the Cusp Catastrophe", Transportation Research Record, No. 1132, (1987), pp. 66-76.

Dods, J. S., The Australian Road Research Board Video Based Presence Detector, IEE Conference on Road Traffic Data Collection, London, UK, (1984).

Dudek, C. L., and Messer, C. J., "Detection of Stoppage Waves for Freeway Control", Transportation Research Record, No. 469, (1973), pp. 1-15.

Dudek, C. L., and Ullman, G. L., Freeway Corridor Management, NCHRP Synthesis of Highway Practice 177, Transportation Research Board, National Research Council, Washington, DC, (1992).

Dudek, C. L., Messer, C. J., and Nuckles, N. B., "Incident Detection on Urban Freeways", Transportation Research Record, No. 495, (1974), pp. 12-24.

Dunn, W. M., and Reiss, R. A., Freeway Operations Projects: North American Inventory Update, Prepared for Transportation Research Board, Federal Highway Administration, (1991).

Fait, J. G., Comparative Analysis of Incident Detection Methods, Thesis (B.S.), California Polytechnic State University, (1994).

Fambro, D. B., and Ritch, G. P., "Evaluation of an Algorithm for Detecting Urban Freeway Incidents During Low-Volume Conditions", Transportation Research Record, No.
773, (1980), pp. 31-39.

Federal Highway Administration, FRESIM User Guide, Research, Development and Technology, Turner-Fairbank Highway Research Center, McLean, Virginia, (1994).

Forbes, G. J., "Identifying Incident Congestion", ITE Journal, June (1992), pp. 17-22.

Gall, A. I., and Hall, F. L., "Distinguishing Between Incident Congestion and Recurrent Congestion: A Proposed Logic", Transportation Research Record, No. 1232, (1989), pp. 1-8.

Giesa, S., and Everts, K., "ARIAM, Car-Driver-Radio-Information on the Basis of Automatic Incident Detection", Traffic Engineering & Control, Vol. 28, No. 6, (1987), pp. 344-348.

Goldberg, D. E., Genetic Algorithms in Search, Optimization & Machine Learning, Addison-Wesley Publishing Company, (1989).

Goldblatt, R. B., "Investigation of the Effect of Location of Freeway Traffic Sensors on Incident Detection", Transportation Research Record, No. 773, (1980), pp. 24-31.

Grewal, S., "Australian Smart Highways", Sensor Review, Vol. 12, No. 4, (1992), p. 22.

Guillen, S., et al., "Field Trials on Video Based AID: Achievements and Evaluation Issues", Advanced Transport Telematics: Proceedings of the Technical Days, Brussels, Volume II - Project Reports, (1993), pp. 287-292.

Guillen, S., et al., "Knowledge Based System for Traffic Monitoring and Incident and Congestion Detection, Using Image Processing and Computer Vision Data", 6th International Conference on Road Traffic Monitoring and Control, London: Institution of Electrical Engineers, (1992), pp. 148-152.

Hall, F. L., and Persaud, B. N., "Evaluation of Speed Estimates Made with Single-Detector Data from Freeway Traffic Management Systems", Transportation Research Record, No. 1232, (1989), pp. 9-16.

Hall, F. L., Shi, Y., and Atala, G., "On-line Testing of the McMaster Incident Detection Algorithm Under Recurrent Congestion", Transportation Research Record, No. 1394, (1993), pp. 1-7.

Hallenbeck, M. E.
, Boyle, T., and Ring, J., Use of Automatic Vehicle Identification Techniques for Measuring Traffic Performance and Performing Incident Detection, Washington State Dept. of Transportation, (1992).

Han, L. D., and May, A. D., "Artificial Intelligence Approaches for Urban Network Incident Detection and Control", Traffic Control Methods, New York, NY: Engineering Foundation, (1990), pp. 159-176.

Han, L. D., and May, A. D., "Automatic Detection of Traffic Operational Problems on Urban Arterials", Prepared for the TRB 69th Annual Meeting, Washington, DC, (1990).

Hobbs, A. S., and Clifford, R. J., "AUTOWARN, a Motorway Incident Detection and Signaling System", Second International Conference on Road Traffic Monitoring, London: Institution of Electrical Engineers, (1989), pp. 167-171.

Hoose, N., "Queue Detection Using Computer Image Processing", IEE 2nd International Conference on Road Traffic Monitoring, London, UK, (1989), pp. 94-98.

Hoose, N., Computer Image Processing in Traffic Engineering, Traffic Engineering Series, Research Studies Press Ltd., (1991).

Hoose, N., Vicencio, M. A., and Zhang, X., "Incident Detection in Urban Roads Using Computer Image Processing", Traffic Engineering & Control, Vol. 33, No. 4, (1992), pp. 236-244.

Hsiao, C. H., Lin, C. T., and Cassidy, M., "Application of Fuzzy Logic and Neural Networks to Automatically Detect Freeway Traffic Incidents", Journal of Transportation Engineering, Vol. 120, No. 5, (1994), pp. 753-772.

Ivan, J. N., et al., "Arterial Street Incident Detection Using Multiple Data Sources: Plans for ADVANCE", Pacific Rim TransTech Conference, Seattle, Washington, Proceedings, Vol. 1, (1993), pp. 429-435.

Keen, K., and Hoose, N., "INVAID - Integration of Computer Vision Techniques for Automatic Incident Detection", IEE Colloquium on the Car and its Environment - What DRIVE and PROMETHEUS Have to Offer,
Digest No. 20, (1990).

Korpal, P. R., "Incident Management: Key to Successful Traffic Management in Toronto", ITE Journal, March 1992, (1992), pp. 58-61.

Kuhne, R. D., "Macroscopic Freeway Model for Dense Traffic: Stop-Start Waves and Incident Detection", International Symposium on Transportation and Traffic Theory (9th: Delft, Netherlands), (1984), pp. 21-42.

Kuhne, R. D., "Freeway Control and Incident Detection Using a Stochastic Continuum Theory of Traffic Flow", International Conference on Applications of Advanced Technologies in Transportation Engineering (1st: 1989: San Diego, Calif.), (1989), pp. 287-292.

Kuhne, R. D., and Immes, S., "Freeway Control Systems for Using Section-Related Traffic Variable Detection", Pacific Rim TransTech Conference, Seattle, Washington, Proceedings, Vol. 1, (1993), pp. 56-62.

Levin, M., and Krause, G. M., "Incident Detection: A Bayesian Approach", Transportation Research Record, No. 682, (1978), pp. 52-58.

Levin, M., and Krause, G. M., "Incident Detection Algorithms, Part 1: Off-Line Evaluation; Part 2: On-Line Evaluation", Transportation Research Record, No. 722, (1979), pp. 49-64.

Lin, W. H., Incident Detection with Data from Loop Surveillance Systems: The Role of Wave Analysis, Ph.D. Thesis, University of California at Berkeley, (1995).

Lindley, J. A., Quantification of Urban Freeway Congestion and Analysis of Remedial Measures, Report FHWA/RD-87/052, Federal Highway Administration, Washington, DC, (1986).

Marsden, B. G., Wall, H. B., and Hunt, J., "Intelligent Data for Incident Detection", Vehicle Systems for Roads, Warrendale, PA: Society of Automotive Engineers, (1993), pp. 75-90.

Masters, P. H., Lam, J. K., and Wong, K., "Incident Detection Algorithms for COMPASS, an Advanced Traffic Management System", Vehicle Navigation and Information Systems Conference (2nd: 1991: Dearborn, Mich.), (1991), pp. 295-310.

May, A. D.
, Traffic Flow Fundamentals, Prentice Hall, Englewood Cliffs, New Jersey, (1990).

Messer, C. J., Dudek, C. L., and Friebele, J. D., "Method for Predicting Travel Time and Other Operational Measures in Real-Time During Freeway Incident Conditions", Highway Research Record, No. 461, (1973), pp. 1-16.

Michalopoulos, P. G., "Vehicle Detection Through Video Image Processing: The Autoscope System", IEEE Transactions on Vehicular Technology, Vol. 40, No. 1, (1991), pp. 21-29.

Michalopoulos, P. G., et al., "Automatic Incident Detection Through Video Image Processing", Traffic Engineering & Control, Vol. 34, No. 2, (1993), pp. 66-75.

Michalopoulos, P. G., Jacobson, R. D., and Anderson, C. A., "Field Implementation and Testing of a Machine Vision Based Incident Detection System", Pacific Rim TransTech Conference, Seattle, Washington, Proceedings, Vol. 1, (1993), pp. 69-76.

Morello, E., and Sala, G., "Automatic Incident Detection in HERMES", Advanced Transport Telematics: Proceedings of the Technical Days, Brussels, Volume II - Project Reports, (1993), pp. 293-298.

Navin, F., "Traffic Congestion Catastrophes", Transportation Planning and Technology, Volume 11, (1986), pp. 19-25.

Parkany, A. E., and Bernstein, D., "Using VRC Data for Incident Detection", Pacific Rim TransTech Conference, Seattle, Washington, Proceedings, Vol. 1, (1993), pp. 63-68.

Payne, H. J., Helfenbein, E. D., and Knobel, H. C., Development and Testing of Incident Detection Algorithms, Final Report, Report No. FH-11-8278, Federal Highway Administration, U.S. Department of Transportation, Washington, DC, Vol. 2, (1976).

Payne, H. J., and Knobel, H. C., Development and Testing of Incident Detection Algorithms, Vol. 3: User Guidelines, Prepared for Federal Highway Administration, U.S. Department of Transportation, Washington, DC, Report No. FHWA-RD-76-21, (1976).

Payne, H. J., and Tignor, S. C.
, "Freeway Incident Detection Algorithms Based on Decision Trees with States", Transportation Research Record, No. 682, (1978), pp. 30-37.

Payne, H. J., Analysis of Incident Detection and Incident Management Practices: A Working Paper, Prepared for Federal Highway Administration, (1993).

Persaud, B. N., and Hall, F. L., "Catastrophe Theory and Patterns in 30-Second Freeway Traffic Data: Implications for Incident Detection", Transportation Research, Part A: General, Vol. 23A, No. 2, (1989), pp. 103-113.

Persaud, B. N., Hall, F. L., and Hall, L. M., "Congestion Identification Aspects of the McMaster Incident Detection Algorithm", Transportation Research Record, No. 1287, (1990), pp. 167-175.

Razavi, A., A Survey of Automatic Incident Detection Systems, Prepared for: Province of British Columbia Ministry of Transportation and Highways, Victoria, B.C., (1995).

Ritchie, S. G., and Cheu, R. L., "Simulation of Freeway Incident Detection Using Artificial Neural Networks", Transportation Research, Part C: Emerging Technologies, Vol. 1C, No. 3 (Sept. 1993), (1993), pp. 203-217.

Ritchie, S. G., Cheu, R. L., and Recker, W. W., "Freeway Incident Detection Using Artificial Neural Networks", International Conference on Artificial Intelligence Applications in Transportation Engineering (Ventura, Calif.), Conference Preprints, Irvine, Calif.: Institute of Transportation Studies, University of California, Irvine, (1992), pp. 215-234.

Roe, H., "The Use of Microwaves in Europe to Detect, Classify and Communicate with Vehicles", IEEE MTT-S International Microwave Symposium Digest, Vol. 3, (1991), pp. 1143-1146.

Roper, D. H., "Freeway Incident Management", NCHRP Synthesis of Highway Practice 156, Transportation Research Board, National Research Council, Washington, DC, (1990).

Rose, G.
, and Dia, H., "Freeway Automatic Incident Detection Using Artificial Neural Networks", Proceedings of the International Conference on Application of New Technology to Transport Systems, Melbourne, Australia, Vol. 1, (1995), pp. 123-140.

Rosenblatt, F., "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain", Psychological Review, Vol. 65, No. 6, (1958), pp. 386-408.

Rourke, A., and Bell, M. G. H., "Traffic Analysis Using Low Cost Image Processing", Proceedings of Seminar D - Transportation Planning Methods, PTRC Summer Annual Meeting, Bath, UK, (1988).

Rumelhart, D. E., Hinton, G. E., and Williams, R. J., "Learning Internal Representations by Error Propagation", Parallel Distributed Processing, edited by Rumelhart, D. E., McClelland, J. L., and the PDP Research Group, Vol. 1, MIT Press, (1986), pp. 318-362.

Russam, K., "Motorway Signals and the Detection of Incidents", Transportation Planning and Technology, Volume 9, (1984), pp. 99-108.

Sin, F. Y. C., and Snell, A., "Implementation of Automatic Incident Detection Systems on the Inner Metropolitan Freeways in Melbourne", Proceedings of the Seventh Conference of the Road Engineering Association of Asia and Australasia, Singapore, Vol. 1, (1992), pp. 337-346.

Sin, F. Y. C., "Moving Towards Total Management of Melbourne Metropolitan Traffic Network - A Technical Perspective", Proceedings of the 16th Australian Road Research Board Conference, Part 5, (1992), pp. 197-212.

Snell, A., Sin, F. Y. C., and Luk, J. Y. K., "Freeway Incident Management in Melbourne: An Initial Appraisal", Proceedings, 16th ARRB Conference, Perth, Western Australia, Vol. 16, Part 5, (1992), pp. 301-313.

Stephanedes, Y. J., and Chassiakos, A. P., "Application of Filtering Techniques for Incident Detection", Journal of Transportation Engineering, Vol. 119, No. 1, (1993), pp. 13-26.

Stephanedes, Y.
J., and Chassiakos, A. P., "Freeway Incident Detection Through Filtering", Transportation Research, Part C: Emerging Technologies, Vol. 1C, No. 3, (1993), pp. 219-233.

Stephanedes, Y. J., Chassiakos, A. P., and Michalopoulos, P. G., "Comparative Performance Evaluation of Incident Detection Algorithms", Transportation Research Record, No. 1360, (1992), pp. 50-57.

Takaba, S., and Ooyama, N., "Traffic Flow Measuring System with Image Sensors", OECD Symposium on Road Research Program, Tokyo, Japan, (1984), pp. 2-20.

Takaba, S., Sekine, T., and Hwang, B. W., "A Traffic Flow Measuring System Using a Solid State Sensor", IEE Conference on Road Traffic Data Collection, London, UK, (1984).

Theuwissen, A., Vitz, A., and Vermieren, J., "Analysis of Traffic Flow with a CCD Camera and a Microprocessor", PTRC Summer Conference, Seminar K, Brighton, UK, (1980).

Tignor, S. C., and Payne, H. J., "Improved Freeway Incident Detection Algorithms", Public Roads, Vol. 41, No. 1, June 1977, (1977), pp. 32-40.

Trigg, D. W., and Leach, A. G., "Exponential Smoothing with an Adaptive Response Rate", Operational Research Quarterly, Vol. 18, No. 1, (1967), pp. 53-59.

Tsai, J., and Case, E. R., "Development of Freeway Incident Detection Algorithms by Using Pattern Recognition Techniques", Transportation Research Record, No. 722, (1979), pp. 113-116.

Versavel, J., Lemair, F., and Van der Stede, D., "Camera and Computer-Aided Traffic Sensor", IEE 2nd International Conference on Road Traffic Monitoring, London, UK, (1989), pp. 66-70.

Waterfall, R. C., and Dickinson, K. W., "Image Processing Applied to Traffic - Practical Experience", Traffic Engineering and Control, Vol. 25, No. 2, (1984), pp. 60-67.

Whitson, R. H., et al., "Real-Time Evaluation of Freeway Quality of Traffic Service", Highway Research Report No. 289, (1969), pp. 38-50.

Willsky, A.
S., et al., "Dynamic Model-Based Techniques for the Detection of Incidents on Freeways", IEEE Transactions on Automatic Control, Vol. AC-25, No. 3, (1980), pp. 347-360.

Yagoda, H. N., and Buchanan, J. R., "A New Technique for Incident Detection in Light Traffic", Institute of Transportation Engineers Meeting, Compendium of Technical Papers, 61st, (1991), pp. 523-529.

Yagoda, H. N., "Choosing the Right Incident Logic in IVHS: False Alarm Rates vs. Speed of Detection", Institute of Transportation Engineers Meeting, Compendium of Technical Papers, 61st, (1991), pp. 447-456.

Zadeh, L. A., "Fuzzy Sets", Information and Control, No. 8, (1965), pp. 338-353.

