Performance Measures to Characterize Corridor Travel Time Delay Based on Probe Vehicle Data

Author(s):  
Thomas M. Brennan ◽  
Stephen M. Remias ◽  
Lucas Manili

Anonymous probe vehicle data are being collected on roadways throughout the United States. These data are incorporated into local and statewide mobility reports to measure the performance of highways and arterial systems. Predefined spatially located segments, known as traffic message channels (TMCs), are spatially and temporally joined with probe vehicle speed data. Through the analysis of these data, transportation agencies have been developing agencywide travel time performance measures. One widely accepted performance measure is travel time reliability, which is calculated along a series of TMCs. When reliable travel times are not achieved because of incidents and recurring congestion, it is desirable to understand the time and the location of these occurrences so that the corridor can be proactively managed. This research emphasizes a visually intuitive methodology that aggregates a series of TMC segments based on a cursory review of congestion hotspots within a corridor. Instead of a fixed congestion speed threshold, each TMC is assigned a congestion threshold based on the 70th percentile of the 15-min average speeds between 02:00 and 06:00. An analysis of approximately 90 million speed records collected in 2013 along I-80 in northern New Jersey was performed for this project. Travel time inflation, the time exceeding the expected travel time at 70% of measured free-flow speed, was used to evaluate each of the 166 directional TMC segments along 70 mi of I-80. This performance measure accounts for speed variability caused by roadway geometry and other Highway Capacity Manual speed-reducing friction factors associated with each TMC.
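
As a rough illustration of the two measures above, the sketch below derives each TMC's congestion threshold from the 70th percentile of overnight 15-min average speeds and computes travel time inflation against the expected travel time at 70% of free-flow speed. It is not the authors' code; the DataFrame columns (tmc, timestamp, speed_mph) and the per-TMC lengths are illustrative assumptions.

```python
import pandas as pd

def tmc_congestion_thresholds(df):
    """Per-TMC threshold: 70th percentile of the 15-min average speeds
    observed between 02:00 and 06:00 (a free-flow proxy per the abstract)."""
    night = df[df["timestamp"].dt.hour.between(2, 5)]  # 02:00-05:59
    binned = (night.set_index("timestamp")
                   .groupby("tmc")["speed_mph"]
                   .resample("15min").mean().dropna())
    return binned.groupby("tmc").quantile(0.70)

def travel_time_inflation_s(speed_mph, ff_speed_mph, length_mi):
    """Observed travel time in excess of the expected travel time at 70%
    of the free-flow speed, floored at zero (seconds)."""
    expected = length_mi / (0.7 * ff_speed_mph) * 3600.0
    observed = length_mi / speed_mph * 3600.0
    return max(observed - expected, 0.0)
```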

Author(s):  
Thomas M. Brennan ◽  
Mohan M. Venigalla ◽  
Ashley Hyde ◽  
Anthony LaRegina

Probe vehicle speed data have become an important data source for evaluating the congestion performance of highways and arterial roads. Predefined spatially located segments known as traffic message channels (TMCs) are linked to commercially available, anonymous probe vehicle speed data. These data have been used to develop agency-wide performance measures to better plan and manage infrastructure assets. Recent research has analyzed individual as well as aggregated TMC links on roadway systems to identify congested areas along spatially defined routes. By understanding the typical congestion of all TMCs in a region, as indicated by increased travel times, a broader perspective on regional congestion characteristics can be gained. This is especially important when determining the regional impact of occurrences such as a major crash, a special event, or extreme conditions such as a natural or human-made disaster. This paper demonstrates how aggregated probe speed data can be used to characterize regional congestion. To demonstrate the methodology, an analysis of vehicle speed data during Hurricane Sandy in 2012, the second-costliest hurricane in the United States, is used to show the storm's regional impact. The results are then compared and contrasted with comparable periods of increased congestion in 2013, 2014, and 2016. The analysis encompasses 614 TMCs within 10 miles of the New Jersey coast, and approximately 90 million speed records covering five counties are analyzed in the study.
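
The abstract does not spell out its aggregation procedure; one minimal way to roll per-TMC observations up into a single regional congestion signal (column names are assumptions, not the authors' schema) is:

```python
import pandas as pd

def regional_congestion_share(df):
    """Hour-by-hour share of TMC observations whose speed falls below
    that TMC's own reference threshold; columns timestamp, speed_mph,
    and threshold_mph are illustrative."""
    congested = df["speed_mph"] < df["threshold_mph"]
    return congested.groupby(df["timestamp"].dt.floor("1h")).mean()
```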


Author(s):  
Sakib Mahmud Khan ◽  
Anthony David Patire

Transportation agencies monitor freeway performance using measures such as vehicle-miles traveled (VMT), vehicle-hours of delay (VHD), and vehicle-hours traveled (VHT). They typically rely on data from point detectors to estimate these measures. Point detectors such as inductive loops cannot capture corridor travel time, leading to inaccurate performance measure estimation. This research develops a hybrid method that estimates freeway performance measures from a mix of probe vehicle data provided by third-party vendors and data from traditional point detectors. Using a simulated model of a freeway (Interstate 210), the framework using multiple data sources is evaluated and compared with the traditional point detector-based estimation method. In the traditional method, point speeds are estimated from flow and occupancy values using g-factors. Data from 5% of the total vehicles are used to generate the third-party travel time data. The analysis is conducted for multiple scenarios, including peak and off-peak periods. Results suggest that fusing third-party probe vehicle data with point detector data can help transportation agencies estimate performance measures more accurately than the traditional method in scenarios with noticeable freeway traffic demand.
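
The g-factor estimate mentioned above is the classic single-loop relationship, speed = flow × g / occupancy, with g the effective vehicle length. A minimal sketch follows; the fusion step is a placeholder, since the paper's weighting scheme is not given here.

```python
def loop_speed_from_g_factor(flow_vph, occupancy, g_factor_ft=20.0):
    """Single-loop speed estimate in mph: flow (veh/h) times the effective
    vehicle length g (ft), divided by occupancy (fraction) and 5280 ft/mi."""
    if occupancy <= 0:
        return float("nan")  # no vehicles observed over the detector
    return flow_vph * g_factor_ft / (occupancy * 5280.0)

def fused_travel_time(tt_detector_s, tt_probe_s, w_probe=0.5):
    """Hypothetical fusion: a convex blend of detector-derived and
    third-party probe travel times (the weighting is an assumption)."""
    return w_probe * tt_probe_s + (1.0 - w_probe) * tt_detector_s
```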


Author(s):  
Markus Steinmaßl ◽  
Stefan Kranzinger ◽  
Karl Rehrl

Travel time reliability (TTR) indices have gained considerable attention for evaluating the quality of traffic infrastructure. Whereas TTR measures have been widely explored using data from stationary sensors with high penetration rates, there is a lack of research on calculating TTR from mobile sensors such as probe vehicle data (PVD), which is characterized by low penetration rates. PVD is a relevant data source for analyzing non-highway routes, as these are often not sufficiently covered by stationary sensors. This paper presents a methodology, suitable for road authorities and traffic planners, for analyzing TTR on (sub)urban and rural routes with sparse PVD as the only data source. Especially with sparse data, spatial and temporal aggregation can have a great impact; it is investigated on two levels: first, the width of the time-of-day (TOD) intervals and, second, the length of the road segments. The effects of spatial and temporal aggregation on the travel time index (TTI), a prominent TTR measure, are analyzed in an exemplary case study covering three routes. TTI patterns are calculated from one year of data, grouped by day-of-week (DOW) group and TOD. The case study shows that, with well-chosen temporal and spatial aggregations, an in-depth analysis of traffic patterns is possible even with sparse PVD.
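
A minimal sketch of the TOD/DOW aggregation of the travel time index (TTI = mean travel time divided by free-flow travel time) with a configurable TOD bin width; the column names and the DOW grouping are assumptions, not the authors' exact setup.

```python
import pandas as pd

def tti_patterns(df, ff_tt_s, tod_minutes=60):
    """TTI per (segment, DOW group, TOD bin). `ff_tt_s` maps each segment
    to its free-flow travel time in seconds (illustrative)."""
    df = df.copy()
    dow_groups = {0: "Mon-Thu", 1: "Mon-Thu", 2: "Mon-Thu", 3: "Mon-Thu",
                  4: "Fri", 5: "Sat", 6: "Sun"}
    df["dow"] = df["timestamp"].dt.dayofweek.map(dow_groups)
    minutes = df["timestamp"].dt.hour * 60 + df["timestamp"].dt.minute
    df["tod"] = (minutes // tod_minutes) * tod_minutes  # bin start (min)
    mean_tt = df.groupby(["segment", "dow", "tod"])["tt_s"].mean()
    ff = mean_tt.index.get_level_values("segment").map(ff_tt_s)
    return mean_tt / ff.to_numpy()
```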


Author(s):  
Stefan Kranzinger ◽  
Markus Steinmaßl

Aggregation of sparse probe vehicle data (PVD) is a crucial issue in travel time reliability (TTR) analysis. This study therefore examines the effect of temporal and spatial aggregation of sparse PVD on the results of a linear regression analysis in which two different measures of TTR serve as the dependent variable. Our results show that, by aggregating the data to longer time intervals and coarser spatial units, the linear model can explain a higher proportion of the variance in TTR. Furthermore, we find that the effects of road design characteristics in particular depend on the variable used to represent TTR. We conclude that the temporal and spatial aggregation of sparse PVD affects the results of linear regression models explaining TTR.
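
For orientation, the comparison the study describes can be approximated by fitting the same ordinary-least-squares model at each aggregation level and comparing the explained variance; the sketch below is generic and does not reproduce the authors' covariates.

```python
import numpy as np

def ols_r2(X, y):
    """Fit y = b0 + X @ b by least squares and return R^2, so that the
    same specification can be compared across aggregation levels."""
    y = np.asarray(y, dtype=float)
    X1 = np.column_stack([np.ones(len(y)), X])  # prepend intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()
```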


Author(s):  
Piotr Olszewski ◽  
Tomasz Dybicz ◽  
Kazimierz Jamroz ◽  
Wojciech Kustra ◽  
Aleksandra Romanowska

Probe vehicle data (also known as “floating car data”) can be used to analyze the travel time reliability of an existing road corridor and to determine where, when, and how often traffic congestion occurs on particular road segments. The aim of this study is to find the best reliability performance measures for assessing congestion frequency and severity based on probe data. Pilot surveys conducted on the A2 motorway in Poland confirm the usefulness and reasonable accuracy of probe data for measuring speed variation in both congested and free-flowing traffic. Historical probe vehicle data and traditional traffic counts from the Polish S6 expressway were used to analyze travel time reliability on its 24 road sections. Travel time indexes and reliability ratings for the whole of 2016 were calculated to identify segments with lower reliability and higher expected delay. It is concluded that, unlike in the HCM-6 method, travel times obtained from probe data should be averaged over 1-h intervals. A delay index is proposed as a new reliability indicator for road segments, and delay map diagrams are recommended for showing how congestion spots move in space and with time of day.
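
The paper's exact delay index definition is not reproduced here; a plausible sketch of the recommended 1-h averaging, followed by TTI and a per-kilometre delay measure (both the column names and the delay formulation are assumptions), might look like this:

```python
import pandas as pd

def hourly_tti_and_delay(df, tt_ff_s, length_km):
    """Average probe travel times over 1-h intervals, then compute TTI
    and an illustrative delay measure (seconds of delay per km)."""
    hourly_tt = (df.set_index("timestamp")["tt_s"]
                   .resample("1h").mean().dropna())
    tti = hourly_tt / tt_ff_s
    delay_per_km = (hourly_tt - tt_ff_s).clip(lower=0) / length_km
    return pd.DataFrame({"tti": tti, "delay_s_per_km": delay_per_km})
```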


Author(s):  
Wei Sun ◽  
Scott S. Washburn

Applications in the field often require analysis of freeway travel time reliability (TTR) at the network level. While micro-simulation is suitable for network-level analysis, the computational burden can become unreasonable when TTR analysis is factored in. As for macro-simulation, most network analysis studies use simplified link performance functions to represent the travel time and flow relationship; such functions are generally not sensitive to the range of geometric and traffic conditions that influence freeway facility operations. This research extends the Highway Capacity Manual (HCM) freeway TTR analysis methodology to the network level. The proposed methodology generates scenarios that represent the impacts of origin–destination (OD) demand variations, weather events, incidents, and work zones on freeway network travel time. For each scenario, the methodology performs user equilibrium (UE) traffic assignment, with the HCM freeway facility core methodology representing the travel time and flow relationship. The method of successive averages is applied to solve the UE traffic assignment. Finally, scenario travel times (and/or other performance measures) are aggregated into distributions of interest, such as network-, facility-, and OD-level distributions, and TTR performance measures are calculated at these three levels. A software tool is developed in C# on the .NET Framework, providing a convenient and efficient way for transportation planners and researchers to conduct freeway network TTR analysis and helping to bridge the gap between research and practice.
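
The method of successive averages has the standard form x_{k+1} = x_k + (y_k - x_k)/k, where y_k is the all-or-nothing loading under the current link travel times. A generic sketch, with the HCM facility method and the shortest-path loading abstracted behind assumed callables:

```python
import numpy as np

def msa_user_equilibrium(load_all_or_nothing, link_times, x0, n_iter=50):
    """Method of successive averages for UE assignment. `link_times(x)`
    stands in for the HCM facility travel-time step and
    `load_all_or_nothing(t)` for shortest-path loading (both assumed)."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        y = load_all_or_nothing(link_times(x))  # auxiliary AON flows
        x += (y - x) / k                        # shrinking step size 1/k
    return x
```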


Author(s):  
Xiaoxiao Zhang ◽  
Mo Zhao ◽  
Justice Appiah ◽  
Michael D. Fontaine

Travel time reliability quantifies variability in travel times and has become a critical aspect of evaluating transportation network performance. The empirical travel time cumulative distribution function (CDF) has been used as a tool to preserve inherent information on the variability and distribution of travel times. With advances in data collection technology, probe vehicle data have frequently been used to measure highway system performance. One challenge with using CDFs to handle large amounts of probe vehicle data is deciding how many different CDFs are necessary to fully characterize experienced travel times. This paper explores statistical methods for clustering segment-level travel time CDFs into an optimal number of homogeneous clusters that retain all relevant distributional information. Two clustering methods were tested, one based on classic hierarchical clustering and the other on model-based functional data clustering, to assess their performance in clustering distributions using travel time data from Interstate 64 in Virginia. Freeway segments and segments within interchange areas were clustered separately. To find the proper input data format, both scaled and original travel times were considered. In addition, a non-data-driven method based on geometric features was included for comparison. The results showed that, for freeway segments, clustering travel times with an Anderson–Darling dissimilarity matrix and Ward's linkage performed best, whereas for interchange segments, model-based clustering provided the best clusters. By clustering segments into homogeneous groups, the results of this study could improve the efficiency of further travel time reliability modeling.
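
A sketch of the first approach, hierarchical clustering on a pairwise two-sample Anderson–Darling dissimilarity with Ward's linkage, using SciPy. Applying Ward's linkage to a non-Euclidean dissimilarity is a pragmatic choice; this is not the authors' code.

```python
import numpy as np
from scipy.stats import anderson_ksamp
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_travel_time_cdfs(samples, n_clusters=4):
    """`samples` is a list of 1-D arrays of travel times, one per segment.
    Builds a pairwise Anderson-Darling statistic matrix, then cuts a
    Ward's-linkage dendrogram into `n_clusters` groups."""
    n = len(samples)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            stat = anderson_ksamp([samples[i], samples[j]]).statistic
            d[i, j] = d[j, i] = max(stat, 0.0)  # clip negative statistics
    z = linkage(squareform(d, checks=False), method="ward")
    return fcluster(z, t=n_clusters, criterion="maxclust")
```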


Author(s):  
Nabaruna Karmakar ◽  
Seyedbehzad Aghdashi ◽  
Nagui M. Rouphail ◽  
Billy M. Williams

Traffic congestion costs drivers an average of $1,200 a year in wasted fuel and time, and most travelers are becoming less tolerant of unexpected delays. Substantial efforts have been made to account for the impact of non-recurring sources of congestion on travel time reliability. The 6th edition of the Highway Capacity Manual (HCM) provides structured, step-by-step guidance for estimating reliability performance measures on freeway facilities. However, practical implementation of these methods poses its own challenges: the analyses require assimilating data scattered across different platforms, and this assimilation is complicated further by the fact that data and data platforms differ from state to state. This paper focuses on practical calibration and validation methods for the core and reliability analyses described in the HCM. The main objective is to give HCM users guidance on collecting data for freeway reliability analysis and on validating the reliability performance measures predicted by the HCM methodology. A real-world case study of three routes on Interstate 40 in the Raleigh-Durham area of North Carolina describes the steps required to conduct this analysis. The travel time index (TTI) distribution reported by the HCM models was found to closely match that of probe-based travel time data up to the 80th percentile. However, because of a spatial and temporal mismatch between the actual and HCM-estimated incident allocation patterns, and because traffic demands in the HCM methods are by default insensitive to the occurrence of major incidents, the HCM approach tended to generate larger travel time values in the upper region of the travel time distribution.
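
The percentile-level validation described above can be sketched as follows; the inputs are assumed to be arrays of TTI observations from the HCM scenarios and from probe data.

```python
import numpy as np

def compare_tti_percentiles(tti_hcm, tti_probe, percentiles=range(5, 100, 5)):
    """Difference between HCM-modeled and probe-based TTI percentiles;
    the paper reports close agreement up to about the 80th percentile."""
    p = np.asarray(list(percentiles), dtype=float)
    return p, np.percentile(tti_hcm, p) - np.percentile(tti_probe, p)
```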


Author(s):  
Nagui M. Rouphail ◽  
SangKey Kim ◽  
Seyedbehzad Aghdashi

The use of probe vehicle data for highway performance monitoring is increasingly being adopted in many countries. In the United States, third-party data providers such as Google, INRIX, HERE, and TomTom are delivering products that enable state and local transportation agencies to identify bottlenecks, incidents, and other key operational events on the basis of probe vehicle speed and travel time. However, the capacity analysis methods in the U.S. Highway Capacity Manual continue, for the most part, to rely on the analyst's ability to gather data at fixed points, whether manually or from fixed-point sensors. This paper explores the use of intelligence to drive (i2D) high-resolution vehicle data to assess several research questions related to free-flow speed (FFS) estimation, a key parameter in freeway segment analyses. On the basis of 1 year of high-resolution data collected from a local fleet of about 20 vehicles driven by volunteer drivers, researchers accumulated more than 20 million seconds of driving, which, when filtered, were used to evaluate the research questions and develop enhanced predictive models for FFS. Speed limit and section ramp density (i.e., only those ramps within the segment proper) were found to have a strong effect on the value of FFS. Driver familiarity was also found to have an effect, although this effect was not conclusive across the 10 study sites. Finally, an FFS predictive model incorporating speed limit and section ramp density was found to fit the high-resolution data quite well, generating an absolute error of only 1.3% across all sites, compared with an error of 6.6% for the current Highway Capacity Manual 2010 model predictions.
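
For illustration only, a linear FFS model in the two significant covariates can be fitted by least squares; the paper's published functional form may differ. The HCM 2010 baseline equation is noted in a comment for comparison.

```python
import numpy as np

def fit_ffs_model(speed_limit_mph, ramp_density_per_mi, observed_ffs_mph):
    """Fit FFS = b0 + b1*speed_limit + b2*ramp_density by least squares
    (an assumed linear form, not the authors' published model)."""
    X = np.column_stack([np.ones(len(observed_ffs_mph)),
                         speed_limit_mph, ramp_density_per_mi])
    beta, *_ = np.linalg.lstsq(X, np.asarray(observed_ffs_mph, dtype=float),
                               rcond=None)
    return beta  # intercept, speed-limit and ramp-density coefficients

# HCM 2010 comparison point (Equation 11-1):
# FFS = 75.4 - f_LW - f_LC - 3.22 * TRD**0.84
```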

