Case Study using Probe Vehicle Speeds to Assess Roadway Safety in Georgia

Author(s):  
David J. Ederer ◽  
Michael O. Rodgers ◽  
Michael P. Hunter ◽  
Kari E. Watkins

Speed is a primary risk factor for road crashes and injuries. Previous research has attempted to ascertain the relationship between individual vehicle speeds, aggregated speeds, and crash frequency on roadways. Although there is a large body of research linking vehicle speeds to safety outcomes, there is no widely applied safety performance metric based on regularly reported speeds. With the increasingly widespread availability of probe vehicle speed data, there is an opportunity to develop network-level safety performance metrics. This analysis examined the relationship between percentile speeds and crashes on a principal arterial in Metropolitan Atlanta. The study used data from the National Performance Management Research Data Set (NPMRDS), the Georgia Electronic Accident Reporting System, and the Highway Performance Monitoring System. Negative binomial regression models were used to analyze the relationship of speed percentiles and speed differences to crash frequency on roadway sections. Results suggested that differences between speed percentiles, a measure of speed dispersion, are related to the frequency of crashes. Based on the models, the difference between the 85th percentile and median speeds is proposed as a performance metric. This difference is easily measured using NPMRDS probe vehicle speeds and provides a practical performance metric for assessing safety on roadways.

Author(s):  
Yichuan Peng ◽  
Srinivas Reddy Geedipally ◽  
Dominique Lord

One of the most important tasks in traffic safety is investigating the relationship between motor vehicle crashes and the geometric characteristics of roadways. A large body of previous work provides meaningful results on the impact of geometric design on crash frequency. However, little attention has been paid to the relationship between roadway departure crashes and relevant roadside features such as lateral clearance, side slope condition, and driveway density. The lack of roadside data for use in estimating rigorous statistical models has been a major obstacle to roadside safety research for many years. This study investigated the relationship between single-vehicle roadway departure crashes and roadside features. Two types of models were developed: a negative binomial model of crash frequency and a multinomial logit model of crash severity. The study used field data collected in four districts in Texas. The results showed that shoulder width, lateral clearance, and side slope condition had a significant effect on roadway departure crashes. Crash frequency and severity increased when lateral clearance or shoulder width decreased and when the side slope condition became worse. Driveway density was not found to have a significant influence on crash frequency or severity.


2021 ◽  
Vol 11 (3) ◽  
pp. 1225
Author(s):  
Woohyong Lee ◽  
Jiyoung Lee ◽  
Bo Kyung Park ◽  
R. Young Chul Kim

Geekbench is one of the most referenced cross-platform benchmarks in the mobile world. Most of its workloads are synthetic, but some of them aim to simulate real-world behavior. In the mobile world, its microarchitectural behavior has rarely been reported, since hardware profiling features are of limited availability to the public. As a result, although Geekbench is a popular mobile performance workload, its microarchitectural characteristics on mobile devices are hard to find. In this paper, a thorough experimental study of Geekbench performance characterization is reported with detailed performance metrics. This study also identifies mobile system-on-chip (SoC) microarchitecture impacts, such as the cache subsystem, instruction-level parallelism, and branch performance. From the study, we could identify the bottlenecks of the workloads, especially in the cache subsystem. This means that changing the data set size significantly impacts the performance score on some systems and will ruin the fairness of the CPU benchmark. In the experiment, Samsung's Exynos9820-based platform was used as the device under test, with binaries built using the Android Native Development Kit (NDK). The Exynos9820 is a superscalar processor capable of dual-issuing some instructions. To help performance analysis, we enabled the capability to collect performance events with performance monitoring unit (PMU) registers. The PMU is a set of hardware performance counters built into microprocessors to store the counts of hardware-related activities. Throughout the experiment, functional and microarchitectural performance profiles were fully studied. This paper describes the details of the mobile performance studies above. In our experiment, the ARM DS-5 tool was used for collecting runtime PMU profiles, including OS-level performance data. After the comparative study is completed, users will understand more about mobile architecture behavior, which will help to evaluate which benchmark is preferable for fair performance comparison.
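Derived microarchitectural metrics of the kind discussed, such as IPC and cache miss rates, are simple ratios of raw PMU event counts; a minimal sketch with purely illustrative counter values (not measurements from the Exynos9820):

```python
# Hypothetical raw PMU event counts for one benchmark run (illustrative only)
counters = {
    "cycles": 1_200_000_000,
    "instructions": 1_800_000_000,
    "l1d_cache_refs": 600_000_000,
    "l1d_cache_misses": 30_000_000,
    "branches": 250_000_000,
    "branch_misses": 5_000_000,
}

# Standard derived metrics: instructions per cycle, L1D miss rate, branch MPKI
ipc = counters["instructions"] / counters["cycles"]
l1d_miss_rate = counters["l1d_cache_misses"] / counters["l1d_cache_refs"]
branch_mpki = counters["branch_misses"] / counters["instructions"] * 1000

print(f"IPC: {ipc:.2f}")                      # 1.50
print(f"L1D miss rate: {l1d_miss_rate:.1%}")  # 5.0%
print(f"Branch MPKI: {branch_mpki:.2f}")      # 2.78
```

An IPC above 1 is consistent with a superscalar core dual-issuing some instructions, and a rising L1D miss rate as the data set grows is exactly the cache-subsystem sensitivity the study flags.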


2019 ◽  
Vol 11 (23) ◽  
pp. 6643 ◽  
Author(s):  
Lee ◽  
Guldmann ◽  
Choi

The low-mileage bias has been reported in previous studies as a characteristic of senior drivers aged 65+. While it is thought to be a well-known phenomenon caused by aging, the characteristics of urban environments also create more opportunities for crashes. This calls for investigating the low-mileage bias and scrutinizing whether it has the same impact on other age groups, such as young and middle-aged drivers. We use a crash database from the Ohio Department of Public Safety from 2006 to 2011 and adopt a macro approach using negative binomial models and conditional autoregressive (CAR) models to deal with spatial autocorrelation. Aside from the low-mileage bias issue, we examine the association between the number of crashes and the built environment and socio-economic and demographic factors. We confirm that the number of crashes is associated with vehicle miles traveled, which suggests that more accumulated driving miles result in a lower likelihood of being involved in a crash. This implies that drivers in the low-mileage group are involved in crashes more often, regardless of the driver's age. The results also confirm that more complex urban environments have a higher number of crashes than rural environments.


2019 ◽  
Vol 46 (7) ◽  
pp. 1319-1331 ◽  
Author(s):  
Simplice Asongu ◽  
Nicholas M. Odhiambo

Purpose: The purpose of this paper is to examine the relationship between tourism and social media, using a cross-section of 138 countries with data for the year 2012. Design/methodology/approach: The empirical evidence is based on ordinary least squares, negative binomial, and quantile regressions. Findings: Two main findings are established. First, there is a positive relationship between Facebook penetration and the number of tourist arrivals. Second, Facebook penetration is more relevant in promoting tourist arrivals in countries where initial levels of tourist arrivals are the highest and lowest. The established positive relationship can be elucidated from four principal angles: the transformation of travel research, the rise in social sharing, improvements in customer service, and the reshaping of travel agencies. Originality/value: This study explores a new data set on social media. There are very few empirical studies on the relevance of social media to development outcomes.


2018 ◽  
Vol 19 (3) ◽  
pp. 675-689 ◽  
Author(s):  
Akshita Arora ◽  
Shernaz Bodhanwala

Indian corporate governance norms have been evolving over time, but a limited number of studies have examined a corporate governance index (CGI) in the Indian context. This study aims to examine the relationship between the CGI and firm performance. We construct the CGI using important parameters of governance such as board structure, ownership structure, the market for corporate control, and market competition. Our panel data set comprises listed firms, and the estimation analysis has been carried out using the random effects method. The study reveals a significant positive relationship between the CGI and firm performance metrics. The CGI is an important and causal factor in explaining firm performance. Investors would also have a positive perception of business firms maintaining high governance standards, thus reducing possible funding costs.


2016 ◽  
Vol 5 (3) ◽  
pp. 325-342 ◽  
Author(s):  
Trey Malone ◽  
Jayson L. Lusk

Purpose: While previous studies have looked at the negative consequences of beer drinking, often as a prelude to discussing the benefits of laws that curtail consumption, the purpose of this paper is to understand the downside of such regulations insofar as they reduce entrepreneurial activity in the brewing industry. Design/methodology/approach: Using a unique data set from the Brewers' Association that contains information on the number and type of breweries in each county, this study explores the relationship between the number of breweries and regulations targeted at the brewing industry. Zero-inflated negative binomial regressions are used to determine the relationship between the number of microbreweries and brewpubs per county and state beer taxes, self-distribution legislation, and on-premises sales. Findings: The authors find that allowing breweries to sell beer on-premises, as well as allowing breweries to self-distribute, has statistically significant relationships with the number of microbreweries, brewpubs, and breweries. The authors do not find an economically significant relationship between state excise taxes and the number of breweries of any type. Originality/value: Results suggest that whatever public health benefits are brought about by alcohol laws, they are not a free lunch, as they may hinder entrepreneurial development.


2017 ◽  
Author(s):  
Phillip G. D. Ward ◽  
Nicholas J. Ferris ◽  
Parnesh Raniga ◽  
David L. Dowe ◽  
Amanda C. L. Ng ◽  
...  

Purpose: To improve the accuracy of automated vein segmentation by combining susceptibility-weighted images (SWI), quantitative susceptibility maps (QSM), and a vein atlas to produce a resultant image called a composite vein image (CV image). Method: An atlas was constructed in common space from 1072 manually traced 2D slices. The composite vein image was derived for each subject as a weighted sum of three inputs: an SWI image, a QSM image, and the vein atlas. The weights for each input and each anatomical location, called template priors, were derived by assessing the accuracy of each input over an independent data set. The accuracy of venograms derived automatically from each of the CV image, SWI, and QSM image sets was assessed by comparison with manual tracings. Three different automated vein segmentation techniques were used, and ten performance metrics were evaluated. Results: Vein segmentations using the CV image were comprehensively better than those derived from SWI or QSM images (mean Cohen's d = 1.1). Sixty permutations of performance metric and automated segmentation technique were evaluated. Vein identification improvements that were both large and significant (Cohen's d > 0.80, p < 0.05) were found in 77% of the permutations, compared with no improvement in 5%. Conclusion: The accuracy of automated venograms derived from the composite vein image was overwhelmingly superior to that of venograms derived from SWI or QSM alone.
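The per-voxel weighted combination can be sketched in a few lines of numpy (random arrays stand in for real co-registered SWI, QSM, and atlas volumes, and Dirichlet-sampled weights stand in for the learned template priors):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (64, 64)

# Hypothetical co-registered inputs, each rescaled to [0, 1]
swi = rng.random(shape)    # susceptibility-weighted image
qsm = rng.random(shape)    # quantitative susceptibility map
atlas = rng.random(shape)  # vein probability atlas in common space

# Template priors: per-voxel weights that sum to 1, reflecting how
# accurate each input is at that anatomical location (random stand-ins)
w_swi, w_qsm, w_atlas = rng.dirichlet([1, 1, 1], size=shape).transpose(2, 0, 1)

# Composite vein image: per-voxel weighted sum of the three inputs
cv_image = w_swi * swi + w_qsm * qsm + w_atlas * atlas
print(cv_image.shape, float(cv_image.min()), float(cv_image.max()))
```

Because the weights sum to one at every voxel, the composite stays in the same intensity range as its inputs, which keeps downstream segmentation thresholds comparable.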


Author(s):  
Jauwairia Nasir ◽  
Barbara Bruno ◽  
Mohamed Chetouani ◽  
Pierre Dillenbourg

In educational HRI, it is generally believed that a robot's behavior has a direct effect on the engagement of a user with the robot, with the task at hand, and, in the case of a collaborative activity, with their partner. Increasing this engagement is then held responsible for increased learning and productivity. The state of the art usually investigates the relationship between the behaviors of the robot and the engagement state of the user while assuming a linear relationship between engagement and the end goal: learning. However, is it correct to assume that to maximise learning, one needs to maximise engagement? Furthermore, conventional supervised models of engagement require human annotators to provide labels. This is not only laborious but also introduces further subjectivity into an already subjective construct of engagement. Can we have machine-learning models for engagement detection whose annotations do not rely on human annotators? Looking deeper at the behavioral patterns, the learning outcomes, and a performance metric in a multi-modal data set collected in an educational human–human–robot setup with 68 students, we observe a hidden link that we term Productive Engagement. We theorize that a robot incorporating this knowledge will (1) distinguish teams based on engagement that is conducive to learning; and (2) adopt behaviors that eventually lead users to increased learning by means of being productively engaged. Furthermore, this seminal link paves the way for machine-learning models in educational HRI with automatic labelling based on the data.


Author(s):  
Yuanchang Xie ◽  
Yunlong Zhang

Recent crash frequency studies have been based primarily on generalized linear models, in which a linear relationship is usually assumed between the logarithm of expected crash frequency and the explanatory variables. For some explanatory variables, such a linear assumption may be invalid. It is therefore worthwhile to investigate other forms of relationships. This paper introduces generalized additive models to model crash frequency. Generalized additive models use smooth functions of each explanatory variable and are very flexible in modeling nonlinear relationships. On the basis of an intersection crash frequency data set collected in Toronto, Canada, a negative binomial generalized additive model is compared with two negative binomial generalized linear models. The comparison results show that the negative binomial generalized additive model performs best in terms of both the Akaike information criterion and fitting and predictive performance.


Safety ◽  
2021 ◽  
Vol 7 (1) ◽  
pp. 3
Author(s):  
Peter Wagner ◽  
Ragna Hoffmann ◽  
Andreas Leich

This work analyzes the relationship between crash frequency N (crashes per hour) and exposure Q (cars per hour) at the macroscopic level of a whole city, using traffic flow as the measure of exposure. To this end, it analyzes a large crash database for the city of Berlin, Germany, together with a novel traffic flow database. Both data sets display a strong weekly pattern and, taken together, show that the relationship N(Q) is not a linear one. When Q is small, N grows like a second-order polynomial, while at large Q there is a tendency towards saturation, leading to an S-shaped relationship. Although visible in the data for all crashes, the data for severe crashes display a less prominent saturation. As a by-product, the analysis performed here also demonstrates that the crash frequencies follow a negative binomial distribution, where both parameters of the distribution depend on the hour of the week and, presumably, on the traffic state in that hour. The work presented in this paper aims at giving the reader a better understanding of how crash rates depend on exposure.
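The hour-by-hour negative binomial fit can be sketched with a method-of-moments estimator for a single hour-of-week cell (counts are simulated; the parameter values are illustrative, not Berlin data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hourly crash counts for one hour-of-week cell over 150 weeks.
# numpy's negative_binomial(r, p) has mean r(1-p)/p, so p = r/(r+mu) gives mean mu.
r_true, mu_true = 3.0, 4.0
counts = rng.negative_binomial(r_true, r_true / (r_true + mu_true), size=150)

# Method-of-moments NB fit: mean mu_hat; size r_hat = mu^2 / (var - mu),
# valid only when the sample is overdispersed (variance exceeds the mean)
mu_hat = counts.mean()
var_hat = counts.var(ddof=1)
r_hat = mu_hat**2 / (var_hat - mu_hat) if var_hat > mu_hat else np.inf
print(f"mu ~= {mu_hat:.2f}, r ~= {r_hat:.2f}")
```

Repeating this fit for each of the 168 hour-of-week cells yields the hour-dependent parameter pairs that the abstract describes.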

