Assessing Highway Travel Time Reliability using Probe Vehicle Data

Author(s):  
Piotr Olszewski
Tomasz Dybicz
Kazimierz Jamroz
Wojciech Kustra
Aleksandra Romanowska

Probe vehicle data (also known as “floating car data”) can be used to analyze the travel time reliability of an existing road corridor in order to determine where, when, and how often traffic congestion occurs on particular road segments. The aim of the study is to find the best reliability performance measures for assessing congestion frequency and severity based on probe data. Pilot surveys conducted on the A2 motorway in Poland confirm the usefulness and reasonable accuracy of probe data for measuring speed variation in both congested and free-flowing traffic. Historical probe vehicle data and traditional traffic counts from the Polish S6 expressway were used to analyze travel time reliability on its 24 road sections. Travel time indexes and reliability ratings for the whole of 2016 were calculated to identify segments with lower reliability and higher expected delay. It is concluded that, unlike in the HCM-6 method, travel times obtained from probe data should be averaged over 1-hour intervals. A delay index is proposed as a new reliability indicator for road segments. Delay map diagrams are recommended for showing how congestion spots move in space and with time of day.
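
By way of illustration, a minimal Python/pandas sketch of the hourly averaging and the two indicators mentioned above; the column names, the hypothetical hourly_reliability helper, and the delay-index formula (mean hourly delay relative to free-flow travel time) are assumptions for illustration, not the paper's exact definitions.

```python
import pandas as pd

def hourly_reliability(probe_runs: pd.DataFrame, free_flow_tt_s: float) -> pd.DataFrame:
    """probe_runs: one row per probe run over a segment, with columns
    'timestamp' (datetime64) and 'travel_time_s'."""
    hourly = (probe_runs
              .set_index("timestamp")["travel_time_s"]
              .resample("1h")      # average probe travel times over 1-hour intervals
              .mean()
              .to_frame("mean_tt_s")
              .dropna())
    hourly["tti"] = hourly["mean_tt_s"] / free_flow_tt_s  # travel time index
    # Assumed delay-index definition: mean hourly delay relative to free-flow travel time
    hourly["delay_index"] = (hourly["mean_tt_s"] - free_flow_tt_s).clip(lower=0) / free_flow_tt_s
    return hourly

# Illustrative use with a few synthetic probe runs
runs = pd.DataFrame({
    "timestamp": pd.to_datetime(["2016-05-10 07:05", "2016-05-10 07:40",
                                 "2016-05-10 08:15", "2016-05-10 13:20"]),
    "travel_time_s": [420.0, 480.0, 510.0, 310.0],
})
print(hourly_reliability(runs, free_flow_tt_s=300.0))
```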

Author(s):  
Markus Steinmaßl
Stefan Kranzinger
Karl Rehrl

Travel time reliability (TTR) indices have gained considerable attention for evaluating the quality of traffic infrastructure. Whereas TTR measures have been widely explored using data from stationary sensors with high penetration rates, there is a lack of research on calculating TTR from mobile sensors such as probe vehicle data (PVD), which is characterized by low penetration rates. PVD is a relevant data source for analyzing non-highway routes, as these are often not sufficiently covered by stationary sensors. The paper presents a methodology for analyzing TTR on (sub-)urban and rural routes with sparse PVD as the only data source, which could be used by road authorities or traffic planners. Especially with sparse data, spatial and temporal aggregation can have a great impact; this is investigated on two levels: first, the width of time-of-day (TOD) intervals and, second, the length of road segments. The effects of spatial and temporal aggregation on the travel time index (TTI), a prominent TTR measure, are analyzed in an exemplary case study covering three different routes. TTI patterns are calculated from one year of data grouped by day-of-week (DOW) group and TOD. The case study shows that, with well-chosen temporal and spatial aggregations, an in-depth analysis of traffic patterns is possible even with sparse PVD.
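
A minimal sketch of the two aggregation levels (TOD interval width and segment grouping) applied to sparse PVD, assuming per-observation records in a pandas DataFrame; the column names, the Mon-Thu/Fri/Sat/Sun DOW grouping, and averaging per-observation TTIs within each cell are illustrative assumptions, not the paper's exact procedure.

```python
import pandas as pd

def tti_by_dow_tod(pvd: pd.DataFrame, tod_minutes: int, segments_per_unit: int) -> pd.DataFrame:
    """pvd: one row per probe observation, with columns 'timestamp',
    'segment_id' (integer index ordered along the route), 'travel_time_s',
    and 'free_flow_tt_s'."""
    df = pvd.copy()
    # Temporal aggregation: assign each observation to a TOD interval of the chosen width
    minutes = df["timestamp"].dt.hour * 60 + df["timestamp"].dt.minute
    df["tod_bin"] = (minutes // tod_minutes) * tod_minutes
    # Day-of-week groups (Mon-Thu / Fri / Sat / Sun as one possible grouping)
    dow = df["timestamp"].dt.dayofweek
    df["dow_group"] = pd.cut(dow, bins=[-1, 3, 4, 5, 6],
                             labels=["Mon-Thu", "Fri", "Sat", "Sun"])
    # Spatial aggregation: merge runs of consecutive segments into coarser units
    df["spatial_unit"] = df["segment_id"] // segments_per_unit
    # Travel time index per observation, then averaged within each aggregation cell
    df["tti"] = df["travel_time_s"] / df["free_flow_tt_s"]
    return (df.groupby(["spatial_unit", "dow_group", "tod_bin"], observed=True)["tti"]
              .mean()
              .reset_index())
```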


Author(s):  
Stefan Kranzinger
Markus Steinmaßl

Aggregation of sparse probe vehicle data (PVD) is a crucial issue in travel time reliability (TTR) analysis. This study therefore examines the effect of temporal and spatial aggregation of sparse PVD on the results of a linear regression analysis in which two different measures of TTR serve as the dependent variable. Our results show that, by aggregating the data to longer time intervals and coarser spatial units, the linear model can explain a higher proportion of the variance in TTR. Furthermore, we find that the effects of road design characteristics in particular depend on the variable used to represent TTR. We conclude that the temporal and spatial aggregation of sparse PVD affects the results of linear regression models explaining TTR.
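
As an illustration of the kind of model described here, a short sketch using statsmodels; the TTR measure ('tti') and the road design covariates (lanes, speed_limit, curvature, gradient) are hypothetical column names, not the study's actual variables. Refitting the same specification on data aggregated at different temporal and spatial levels and comparing the R-squared values mirrors the comparison described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_ttr_model(segments: pd.DataFrame):
    """segments: one row per aggregated spatial/temporal unit, with a TTR
    measure ('tti') and road design characteristics as columns."""
    model = smf.ols("tti ~ lanes + speed_limit + curvature + gradient", data=segments)
    result = model.fit()
    # R-squared: proportion of TTR variance explained at this aggregation level
    return result.rsquared, result.params
```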


Author(s):  
Xiaoxiao Zhang
Mo Zhao
Justice Appiah
Michael D. Fontaine

Travel time reliability quantifies the variability in travel times and has become a critical aspect of evaluating transportation network performance. The empirical travel time cumulative distribution function (CDF) has been used as a tool to preserve inherent information on the variability and distribution of travel times. With advances in data collection technology, probe vehicle data have frequently been used to measure highway system performance. One challenge with using CDFs when handling large amounts of probe vehicle data is deciding how many different CDFs are necessary to fully characterize experienced travel times. This paper explores statistical methods for clustering segment-level travel time CDFs into an optimal number of homogeneous clusters that retain all relevant distributional information. Two clustering methods were tested, one based on classic hierarchical clustering and the other on model-based functional data clustering, to assess their performance in clustering distributions using travel time data from Interstate 64 in Virginia. Freeway segments and segments within interchange areas were clustered separately. To find the proper data format for clustering input, both scaled and original travel times were considered. In addition, a non-data-driven method based on geometric features was included for comparison. The results showed that, for freeway segments, clustering using travel times with an Anderson–Darling dissimilarity matrix and Ward’s linkage had the best performance. For interchange segments, model-based clustering provided the best clusters. By clustering segments into homogeneous groups, the results of this study could improve the efficiency of further travel time reliability modeling.
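
A minimal sketch of the hierarchical variant named above, assuming each segment's travel times are available as a plain array: the pairwise k-sample Anderson–Darling statistic serves as the dissimilarity and is fed to Ward's linkage. The helper name and the synthetic data are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.stats import anderson_ksamp
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_segments(tt_samples, n_clusters):
    """tt_samples: list of 1-D arrays, travel times observed on each segment."""
    n = len(tt_samples)
    dissim = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # k-sample Anderson-Darling statistic as dissimilarity between the
            # two empirical travel time distributions (floored at zero)
            stat = anderson_ksamp([tt_samples[i], tt_samples[j]]).statistic
            dissim[i, j] = dissim[j, i] = max(stat, 0.0)
    Z = linkage(squareform(dissim, checks=False), method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Illustrative use with synthetic lognormal travel times for three segments
rng = np.random.default_rng(0)
samples = [rng.lognormal(3.0, 0.2, 200), rng.lognormal(3.0, 0.2, 200),
           rng.lognormal(3.4, 0.5, 200)]
print(cluster_segments(samples, n_clusters=2))   # e.g. [1 1 2]
```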


Author(s):  
Howell Li
Jamie Mackey
Matt Luker
Mark Taylor
Darcy M. Bullock

Second-by-second GPS trajectories, called trip traces, of vehicles moving along an arterial provide the highest-fidelity measure of corridor operations. However, large samples of such contiguous trajectories are not always available because of varying techniques for resetting probe vehicle IDs for data privacy, varying probe data penetration rates, and varying vehicle routing. This paper analyzes changes in segment travel time using the Mann–Whitney U test and proposes a method for creating a composite travel time metric from trip trace data. These techniques were applied to a four-corridor signal improvement and upgrade project in southeastern Salt Lake County. The study found that, on average, three of the four corridors decreased in composite median travel time, by 32 s, 16 s, and 14 s. The interquartile range (IQR) was used to assess travel time reliability, and the IQR of travel time decreased (improved) on average by 33 s, 23 s, 18 s, and 1 s. In addition, a rank-sums method for statistically comparing two composite travel time distributions was applied to the results. The four corridors had a total of 48 links and were evaluated during five time-of-day periods. Of the 240 link-periods, the rank-sums analysis found that, overall, 68 link-periods improved and 13 link-periods slowed, at a 95% confidence level. The annualized user benefit from the improvements was estimated at $2.2 million for the four corridors.
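
A sketch of the rank-sums (Mann–Whitney U) comparison for a single link-period, together with one possible composite metric (sum of per-link median travel times); the composite definition and the synthetic data are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def composite_median_travel_time(link_samples):
    """Illustrative composite metric: sum of per-link median travel times (s)."""
    return sum(np.median(s) for s in link_samples)

def compare_link_period(before, after, alpha=0.05):
    """Rank-sums (Mann-Whitney U) comparison of one link-period, before vs. after."""
    _, p = mannwhitneyu(before, after, alternative="two-sided")
    if p >= alpha:
        return "no significant change"
    return "improved" if np.median(after) < np.median(before) else "slowed"

# Example: one link-period with synthetic before/after travel times (s)
rng = np.random.default_rng(1)
before = rng.normal(95, 12, 60)
after = rng.normal(80, 10, 55)
print(compare_link_period(before, after))   # likely "improved"
```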


Author(s):  
Nabaruna Karmakar
Seyedbehzad Aghdashi
Nagui M. Rouphail
Billy M. Williams

Traffic congestion costs drivers an average of $1,200 a year in wasted fuel and time, and most travelers are becoming less tolerant of unexpected delays. Substantial efforts have been made to account for the impact of non-recurring sources of congestion on travel time reliability. The 6th edition of the Highway Capacity Manual (HCM) provides structured, step-by-step guidance for estimating reliability performance measures on freeway facilities. However, practical implementation of these methods poses its own challenges: performing the analyses requires assimilating data scattered across different platforms, and this is complicated further by the fact that data and data platforms differ from state to state. This paper focuses on practical calibration and validation methods for the core and reliability analyses described in the HCM. The main objective is to provide HCM users with guidance on collecting data for freeway reliability analysis and on validating the reliability performance measures predicted by the HCM methodology. A real-world case study of three routes on Interstate 40 in the Raleigh–Durham area of North Carolina is used to describe the steps required for conducting this analysis. The travel time index (TTI) distribution reported by the HCM models was found to match that from probe-based travel time data closely up to the 80th-percentile values. However, because of a spatial and temporal mismatch between the actual and HCM-estimated incident allocation patterns, and the fact that traffic demands in the HCM methods are by default insensitive to the occurrence of major incidents, the HCM approach tended to generate larger travel time values in the upper region of the travel time distribution.
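
For illustration, a sketch of the percentile-by-percentile comparison implied by this kind of validation; the inputs (arrays of TTI values from the HCM reliability scenarios and from probe data) and the helper name are hypothetical.

```python
import numpy as np

def compare_tti_percentiles(tti_hcm, tti_probe, percentiles=range(10, 100, 10)):
    """Validation sketch: modeled vs. observed TTI at selected percentiles."""
    rows = []
    for p in percentiles:
        modeled = float(np.percentile(tti_hcm, p))
        observed = float(np.percentile(tti_probe, p))
        rows.append({"percentile": p, "hcm": modeled, "probe": observed,
                     "difference": modeled - observed})
    return rows
```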


Author(s):  
Thomas M. Brennan
Stephen M. Remias
Lucas Manili

Anonymous probe vehicle data are being collected on roadways throughout the United States. These data are incorporated into local and statewide mobility reports to measure the performance of highways and arterial systems. Predefined, spatially located segments, known as traffic message channels (TMCs), are spatially and temporally joined with probe vehicle speed data. Through the analysis of these data, transportation agencies have been developing agency-wide travel time performance measures. One widely accepted performance measure is travel time reliability, which is calculated along a series of TMCs. When reliable travel times are not achieved because of incidents and recurring congestion, it is desirable to understand the time and location of these occurrences so that the corridor can be proactively managed. This research emphasizes a visually intuitive methodology that aggregates a series of TMC segments based on a cursory review of congestion hotspots within a corridor. Instead of a fixed congestion speed threshold, each TMC is assigned a congestion threshold based on the 70th percentile of its 15-min average speeds between 02:00 and 06:00. An analysis of approximately 90 million speed records collected in 2013 along I-80 in northern New Jersey was performed for this project. Travel time inflation, the time exceeding the expected travel time at 70% of measured free-flow speed, was used to evaluate each of the 166 directional TMC segments along 70 mi of I-80. This performance measure accounts for speed variability caused by roadway geometry and other Highway Capacity Manual speed-reducing friction factors associated with each TMC.
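
A sketch of the two quantities defined above, assuming 15-min TMC speed records in a pandas DataFrame; the column names and helper functions are hypothetical, but the 70th-percentile overnight threshold and the 70%-of-free-flow reference follow the description in the abstract.

```python
import pandas as pd

def tmc_congestion_thresholds(speeds: pd.DataFrame) -> pd.Series:
    """speeds: 15-min average speed records with columns 'tmc', 'timestamp',
    'speed_mph'. Returns the per-TMC reference (free-flow) speed as the 70th
    percentile of overnight (02:00-06:00) speeds."""
    overnight = speeds[speeds["timestamp"].dt.hour.between(2, 5)]
    return overnight.groupby("tmc")["speed_mph"].quantile(0.70)

def travel_time_inflation_s(speed_mph: float, ref_speed_mph: float, length_mi: float) -> float:
    """Extra travel time (s) beyond the expected travel time at 70% of the
    per-TMC reference speed; zero when traffic is at least that fast."""
    expected_tt = length_mi / (0.70 * ref_speed_mph) * 3600.0
    actual_tt = length_mi / speed_mph * 3600.0
    return max(actual_tt - expected_tt, 0.0)
```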


2020, Vol. 2020, pp. 1-11
Author(s):  
Yajie Zou
Ting Zhu
Yifan Xie
Linbo Li
Ying Chen

Travel time reliability (TTR) is widely used to evaluate transportation system performance. Adverse weather is an important factor affecting TTR, as it can cause traffic congestion and crashes. Given the differing traffic characteristics under different traffic conditions, it is necessary to explore the impact of adverse weather on TTR under each of them. This study conducted an empirical travel time analysis using traffic data and weather data collected on the Yanan corridor in Shanghai. The travel time distributions were analysed by roadway type, weather, and time of day. Four typical scenarios (i.e., peak hours and off-peak hours on the elevated expressway, and peak hours and off-peak hours on the arterial road) were considered in the TTR analysis. Four measures were calculated to evaluate the impact of adverse weather on TTR. The results indicated that the lognormal distribution is preferred for describing the travel time data. Compared with off-peak hours, the impact of adverse weather is more significant during peak hours. Travel time variability, the buffer time index, the misery index, and the frequency of congestion increased by an average of 29%, 19%, 22%, and 63%, respectively, under adverse weather conditions. The findings of this study are useful for transportation management agencies in designing traffic control strategies when adverse weather occurs.
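
A sketch of the four measures and the lognormal fit, using definitions common in the reliability literature (the abstract does not give the paper's exact formulas); the 1.3 TTI congestion threshold and the helper name are assumptions.

```python
import numpy as np
from scipy import stats

def ttr_measures(tt, free_flow_tt, congestion_tti=1.3):
    """Reliability measures from a sample of travel times (s). Definitions follow
    common usage in the literature, not necessarily the paper's exact ones."""
    tt = np.sort(np.asarray(tt, dtype=float))
    mean_tt = tt.mean()
    p95 = np.percentile(tt, 95)
    worst20 = tt[int(0.8 * len(tt)):]                      # slowest 20% of trips
    return {
        "travel_time_variability": tt.std() / mean_tt,     # coefficient of variation
        "buffer_time_index": (p95 - mean_tt) / mean_tt,
        "misery_index": (worst20.mean() - mean_tt) / mean_tt,
        "freq_of_congestion": float(np.mean(tt > congestion_tti * free_flow_tt)),
        "lognormal_fit": stats.lognorm.fit(tt, floc=0),    # (shape, loc, scale)
    }
```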


2021, Vol. 2021, pp. 1-18
Author(s):  
Mahmuda Akhtar
Sara Moridpour

In recent years, traffic congestion prediction has become a growing research area, especially within machine learning, a branch of artificial intelligence (AI). With the introduction of big data from stationary sensors and probe vehicles, and the development of new AI models over the last few decades, this research area has expanded extensively. Traffic congestion prediction, especially short-term prediction, is made by evaluating different traffic parameters. Most studies focus on historical data when forecasting traffic congestion; only a few articles address real-time traffic congestion prediction. This paper systematically summarises the existing research conducted by applying various AI methodologies, notably different machine learning models. The paper groups the models under the respective branches of AI, and the strengths and weaknesses of the models are summarised.

