Highway Safety Manual Calibration: Estimating the Minimum Required Sample Size

Author(s):  
Mahdi Rajabi
Patrick Gerard
Jennifer Ogle

Crash frequency has been identified by many experts as one of the most important safety measures, and the Highway Safety Manual (HSM) contains the most widely accepted models for predicting crash frequency on specific road segments and at intersections. The HSM recommends that the models be calibrated using data from the jurisdiction where they will be applied. One of the most common start-up issues with the calibration process is estimating the sample size required to achieve a specific level of precision, which can be expressed as a function of the variance of the calibration factor. Published research has reported widely varying sample size requirements, and some are so large that they may deter state departments of transportation (DOTs) from conducting calibration studies. In this study, an equation is derived to estimate the sample size from the coefficient of variation of the calibration factor and the coefficient of variation of the observed crashes. Using this equation, a framework is proposed for state and local agencies to estimate the required sample size for calibration based on their desired level of precision. Using two recent calibration studies, in South Carolina and North Carolina, it is shown that the proposed framework leads to more accurate sample size estimates than the current HSM recommendations. Whereas the minimum sample size requirement published in the HSM is based on the sum of the observed crashes, this paper demonstrates that samples meeting that criterion are not equally precise, and that the coefficient of variation of the observed crashes can be considered instead.
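The abstract does not reproduce the derived equation, but one delta-method reading is: if the calibration factor is C = Σy_i / Σŷ_i and the per-site observed counts y_i are treated as independent, then CV(C)² ≈ CV(y)² / n, giving n ≈ (CV(y) / CV(C))². A minimal Python sketch under that assumption; the function name and pilot data are hypothetical, not the paper's:

```python
import numpy as np

def required_sample_size(observed_crashes, target_cv_c):
    """Sites needed for the calibration factor C to reach a target coefficient
    of variation, assuming the relation n ~ (CV_y / CV_C)^2 sketched above."""
    y = np.asarray(observed_crashes, dtype=float)
    cv_y = y.std(ddof=1) / y.mean()          # CV of observed crashes per site
    return int(np.ceil((cv_y / target_cv_c) ** 2))

# Synthetic pilot sample of 40 segments, targeting 10% precision on C
pilot = np.random.default_rng(7).negative_binomial(2, 0.4, size=40)
print(required_sample_size(pilot, target_cv_c=0.10))
```

Under this relation, an agency could compute CV(y) from a small pilot sample and read off the number of sites needed for its desired precision on C.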

Author(s):  
Mahdi Rajabi
Jennifer Harper Ogle
Patrick Gerard

The publication of the Highway Safety Manual (HSM) in 2010 established crash frequency prediction as the essential safety measure for safety studies. However, because the models were developed using a single state’s data, the HSM recommends calibrating the prediction models with data from the jurisdiction where they will be applied. This calibration process has been conducted in several states and has raised many questions. This paper investigates different definitions of and criteria for the calibration factors, and provides recommendations for practitioners on which definition to use. In addition to the calibration factor in the HSM and previously published definitions, two other calibration factor equations are proposed and compared using multiple goodness-of-fit measures. Although each definition may outperform the others on certain measures, this study recommends the definition that maximizes the likelihood of the observed crashes given the predicted crashes. The rationale is to follow the same principle in both state-specific safety performance function development and the calibration process: maximizing the likelihood of the observed crashes.
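To make the comparison concrete, here is a hedged sketch of two candidate definitions: the first-edition HSM ratio, and one plausible reading of a likelihood-maximizing factor in which C is chosen to maximize a negative binomial likelihood of the observed counts around C times the predictions. The overdispersion parameterization and function names are assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom

def hsm_calibration_factor(obs, pred):
    """First-edition HSM definition: total observed over total predicted."""
    return obs.sum() / pred.sum()

def ml_calibration_factor(obs, pred, k):
    """One likelihood-based reading: choose C to maximize a negative binomial
    likelihood of the observed counts around C * predicted (k = overdispersion,
    assumed known from the SPF)."""
    def neg_loglik(c):
        mu = c * pred
        r = 1.0 / k                       # scipy's NB "size" parameter
        p = r / (r + mu)
        return -nbinom.logpmf(obs, r, p).sum()
    return minimize_scalar(neg_loglik, bounds=(1e-3, 10.0), method="bounded").x

obs = np.array([3, 0, 1, 4, 2, 0, 5, 1])
pred = np.array([2.1, 0.8, 1.5, 2.9, 1.7, 0.6, 3.2, 1.2])
print(hsm_calibration_factor(obs, pred), ml_calibration_factor(obs, pred, k=0.5))
```

The two definitions coincide under a pure Poisson likelihood but can diverge once overdispersion enters, which is one reason goodness-of-fit comparisons across definitions are informative.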


Author(s):  
Afshin Famili
Wayne Sarasua
Adika Mammadrahimli Iqbal
Devesh Kumar
Jennifer Harper Ogle

The AASHTO Highway Safety Manual (HSM) presents a variety of methods for quantitatively estimating crash frequency or severity at a variety of locations. The HSM predictive methods require the roadway network to be divided into homogeneous segments and intersections, or sites, populated with a series of attributes, and it recommends a minimum segment length of 0.1 mi. This research focuses on segment lengths of less than 0.1 mi for statewide screening of midblock crash locations to identify specific sites with high crash incidence. The paper argues that many midblock crashes can be concentrated along a very short segment because of an undesirable characteristic of a specific site, and that longer segments may “hide” the severity of a single location if the rest of the segment has few or no additional crashes. This research does not actually divide sections of road into short segments; instead, a short-window approach is used. The underlying road network is used to create a layer of segment polygons using GIS buffering, and crash data are then overlaid and aggregated to the segment polygons for further analysis. The paper makes a case for using short fixed segments in statewide screening and shows that accurately geocoded crash data are key to their use. A comparison is made with a sliding-window approach (network kernel density). The benefit of fixed segments is that they are much less complex than the sliding-window approach, and because the segmentation can remain the same from year to year, direct comparisons can be made over time while spatial integrity is maintained.
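A minimal geopandas sketch of the buffering-and-overlay workflow described above; the file names, the segment_id column, and the buffer width are all hypothetical:

```python
import geopandas as gpd

# Hypothetical inputs: roads already split into short fixed segments,
# plus accurately geocoded crash points
roads = gpd.read_file("road_segments.shp").to_crs(epsg=3857)    # projected CRS (meters)
crashes = gpd.read_file("crashes.shp").to_crs(epsg=3857)

# Buffer each fixed segment into a polygon that captures nearby midblock crashes
segments = roads.copy()
segments["geometry"] = roads.geometry.buffer(15)                # ~15 m band (assumed)

# Overlay crashes onto the segment polygons and count crashes per segment
joined = gpd.sjoin(crashes, segments[["segment_id", "geometry"]], predicate="within")
counts = joined.groupby("segment_id").size().rename("crash_count")
ranked = segments.merge(counts, on="segment_id", how="left").fillna({"crash_count": 0})
print(ranked.sort_values("crash_count", ascending=False).head(10))
```

Because the polygon layer is fixed, the same aggregation can be rerun against each year's crash file, which is what makes the year-over-year comparisons straightforward.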


1979
Vol 25 (4)
pp. 582-584
Author(s):
D. L. Peterson
G. L. Rolfe

Variation in throughfall collection data is a major concern in nutrient cycling studies. To determine the magnitude of this variability in throughfall volume data collected in an oak-hickory stand in southern Illinois, regression equations were developed that indicate the sample size necessary for a specific level of variability. In addition to having predictive value, these equations indicate differences in variability on a seasonal basis. Forest Sci. 25:582-584.
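The regression equations themselves are not given in the abstract; the generic computation behind such sample-size tables is the iterative formula n = (t · CV / E)², with E the allowable error as a percentage of the mean. A small sketch under that assumption:

```python
import numpy as np
from scipy import stats

def sample_size(cv_percent, allowable_error_percent, alpha=0.05):
    """Iterate n = (t * CV / E)^2 until stable, since t depends on n."""
    n, n_prev = 30, 0
    for _ in range(50):                   # cap iterations in case of cycling
        if n == n_prev:
            break
        n_prev = n
        t = stats.t.ppf(1 - alpha / 2, df=n - 1)
        n = max(2, int(np.ceil((t * cv_percent / allowable_error_percent) ** 2)))
    return n

# e.g., throughfall volume CV of 35%, +/-10% allowable error, 95% confidence
print(sample_size(35, 10))   # -> 50 collectors
```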


2021
Vol 13 (11)
pp. 6214
Author(s):  
Bumjoon Bae
Changju Lee
Tae-Young Pak
Sunghoon Lee

Aggregating spatiotemporal data can cause information loss or distort the attributes of individual observations, which can influence modeling results and lead to erroneous inference, known as the ecological fallacy. Deciding on spatial and temporal resolution is therefore a fundamental consideration in a spatiotemporal analysis. The modifiable temporal unit problem (MTUP) occurs when temporally aggregated data are used. While the spatial dimension has been studied increasingly, its temporal counterpart is rarely considered, particularly in the traffic safety modeling field. The purpose of this research is to identify the MTUP effect in crash-frequency modeling using data at various temporal scales. A sensitivity analysis framework is adopted with four negative binomial regression models and four random-effect negative binomial models having yearly, quarterly, monthly, and weekly temporal units. As the temporal unit changed, so did the model estimation results, in terms of both the means and the significance of the parameter estimates. The increased temporal correlation that arises from using smaller temporal units can be handled with the random-effect models.
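A hedged sketch of the sensitivity-analysis idea: aggregate the same panel to each temporal unit, refit the same negative binomial specification, and compare the estimates. The CSV file, column names, and the single-covariate specification are assumptions, not the paper's models:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical site-period panel: site_id, period_start, crashes, aadt
df = pd.read_csv("site_period_crashes.csv", parse_dates=["period_start"])

def fit_nb(df, freq):
    """Aggregate counts to one temporal unit, then fit an NB regression."""
    g = (df.set_index("period_start")
           .groupby(["site_id", pd.Grouper(freq=freq)])
           .agg(crashes=("crashes", "sum"), aadt=("aadt", "mean"))
           .reset_index())
    return smf.negativebinomial("crashes ~ np.log(aadt)", data=g).fit(disp=0)

# Same data at four temporal units; estimates and significance can shift (MTUP)
for freq in ["YS", "QS", "MS", "W"]:
    print(freq, fit_nb(df, freq).params.round(3).to_dict())
```

A random-effect (site-level) NB variant would add a grouping term to absorb the serial correlation that grows as the temporal unit shrinks.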


Author(s):  
Darren J. Torbic
Daniel Cook
Joseph Grotheer
Richard Porter
Jeffrey Gooch
...  

The objective of this research was to develop new intersection crash prediction models for consideration in the second edition of the Highway Safety Manual (HSM), consistent with existing methods in HSM Part C and comprehensive in their ability to address a wide range of intersection configurations and traffic control types in rural and urban areas. The focus of the research was on developing safety performance functions (SPFs) for intersection configurations and traffic control types not currently addressed in HSM Part C. SPFs were developed for the following general intersection configurations and traffic control types: rural and urban all-way stop-controlled intersections; rural three-leg intersections with signal control; intersections on high-speed urban and suburban arterials (i.e., arterials with speed limits greater than or equal to 50 mph); urban five-leg intersections with signal control; three-leg intersections where the through movements make turning maneuvers at the intersections; crossroad ramp terminals at single-point diamond interchanges; and crossroad ramp terminals at tight diamond interchanges. Development of severity distribution functions (SDFs) for use in combination with SPFs to estimate crash severity as a function of geometric design elements and traffic control features was explored. However, owing to challenges and inconsistencies in developing and interpreting the SDFs, it was recommended for the second edition of the HSM that crash severity for the new intersection configurations and traffic control types be addressed in a manner consistent with existing methods in Chapters 10, 11, and 12 of the first edition, without the use of SDFs.
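For context, the HSM Part C intersection SPFs referenced here typically take a log-linear form in the entering volumes, N = exp(a) · AADT_maj^b · AADT_min^c, estimated by negative binomial regression. A generic sketch of fitting that form, with the data file and column names assumed:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical intersection data: crashes per site plus entering volumes
data = pd.read_csv("intersections.csv")

# Typical Part C functional form: N = exp(a) * AADT_maj^b * AADT_min^c,
# i.e., linear in the logs of the two entering volumes
spf = smf.negativebinomial(
    "crashes ~ np.log(aadt_major) + np.log(aadt_minor)", data=data
).fit(disp=0)
print(spf.summary())
```

The estimated dispersion parameter from such a fit is what downstream methods (e.g., empirical Bayes weighting) rely on, which is one reason the NB form is standard for SPFs.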


2018
Vol 90 (2)
pp. 1705-1715
Author(s):  
MARCOS TOEBE
LETÍCIA N. MACHADO
FRANCIELI L. TARTAGLIA
JULIANA O. DE CARVALHO
CIRINEU T. BANDEIRA
...  
