Detection of Stop Sign Violations From Dashcam Data

Author(s):  
Luca Bravi ◽  
Luca Kubin ◽  
Stefano Caprasecca ◽  
Douglas Coimbra de Andrade ◽  
Matteo Simoncini ◽  
...  

Author(s):  
Brittany N. Campbell ◽  
John D. Smith ◽  
Wassim G. Najm

Author(s):  
Yingfeng (Eric) Li ◽  
Haiyan Hao ◽  
Ronald B. Gibbons ◽  
Alejandra Medina

Even though drivers' disregard of stop signs is widely considered a major contributing factor in crashes at unsignalized intersections, an equally important problem that leads to severe crashes at such locations is misjudgment of gaps. This paper presents the results of an effort to fully understand gap acceptance behavior at unsignalized intersections using SHRP2 Naturalistic Driving Study data. The paper focuses on the findings of two research activities: the identification of critical gaps for common traffic/roadway scenarios at unsignalized intersections, and the investigation of significant factors affecting driver gap acceptance behavior at such intersections. The study used multiple statistical and machine learning methods, allowing a comprehensive understanding of gap acceptance behavior while demonstrating the advantages of each method. Overall, the study showed an average critical gap of 5.25 s for right-turn and 6.19 s for left-turn movements. Although a variety of factors affected gap acceptance behavior, gap size, wait time, major-road traffic volume, and how frequently the driver drives annually were among the most significant.
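The abstract does not state how the critical gaps (5.25 s right-turn, 6.19 s left-turn) were estimated. One common approach in gap-acceptance work is to fit a logistic model to accepted/rejected gap observations and read off the gap duration at which the fitted acceptance probability crosses 50%. A minimal sketch on synthetic data (the data, learning rate, and threshold choice are all illustrative assumptions, not the paper's method):

```python
import math

def fit_logistic(gaps, accepted, lr=0.1, epochs=5000):
    """Fit P(accept) = sigmoid(b0 + b1 * gap) by gradient ascent on the
    log-likelihood. gaps: gap durations in seconds; accepted: 0/1 labels."""
    b0, b1 = 0.0, 0.0
    n = len(gaps)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(gaps, accepted):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)          # gradient w.r.t. intercept
            g1 += (y - p) * x      # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def critical_gap(b0, b1):
    # P(accept) = 0.5 exactly where b0 + b1 * gap = 0
    return -b0 / b1

# Hypothetical observations: drivers reject short gaps, accept long ones.
gaps     = [2, 3, 4, 5, 5.5, 6, 6.5, 7, 8, 9, 10, 11]
accepted = [0, 0, 0, 0, 0,   0, 1,   1, 1, 1, 1,  1]
b0, b1 = fit_logistic(gaps, accepted)
print(f"estimated critical gap: {critical_gap(b0, b1):.1f} s")
```

In practice a library routine (e.g. a standard logistic-regression fit) would replace the hand-rolled gradient loop; the sketch only shows why a single scalar "critical gap" falls out of a probabilistic acceptance model.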


2007 ◽  
Vol 27 (2) ◽  
pp. 27-35 ◽  
Author(s):  
Angela R. Lebbon ◽  
John Austin ◽  
Ron Van Houten ◽  
Louis E. Malenfant

Author(s):  
Katherine Garcia ◽  
Ian Robertson ◽  
Philip Kortum

The purpose of this study is to compare presentation methods for use in the validation of the Trust in Self-Driving Vehicle Scale (TSDV), a questionnaire designed to assess user trust in self-driving cars. Previous studies have validated trust instruments using traditional videos wherein participants watch a scenario involving an automated system, but there are strong concerns about external validity with this approach. We examined four presentation conditions: a flat screen monitor with a traditional video, a flat screen with a 2D 180° video, an Oculus Go VR headset with a 2D 180° video, and an Oculus Go with a 3D VR video. Participants watched eight video scenarios of a self-driving vehicle attempting a right-hand turn at a stop sign, rated their trust in the vehicle shown in the video after each scenario using the TSDV, and rated telepresence for the viewing condition. We found a significant interaction between the mean TSDV scores for pedestrian collision and presentation condition. The TSDV mean in the Headset 2D 180° condition was significantly higher than in the other three conditions. Additionally, when used to view the scenarios as 3D VR videos, the headset received significantly higher ratings of spatial presence compared to the condition using a flat screen with a 2D video; none of the remaining comparisons were statistically significant. Based on the results, it is not recommended that the headset be used for short scenarios, because the benefits do not outweigh the costs.


1999 ◽  
Vol 89 (3_suppl) ◽  
pp. 1193-1194 ◽  
Author(s):  
John Trinkaus

2019 ◽  
Vol 09 (01) ◽  
pp. 95-108
Author(s):  
Amrita Goswamy ◽  
Shauna Hallmark ◽  
Guillermo Basulto-Elias ◽  
Michael Pawlovich

2021 ◽  
Vol 26 (3) ◽  
pp. 283-295
Author(s):  
Nathaniel L. Foster ◽  
Gregory R. Bell

We examined incidental learning of road signs under divided attention in a simulated naturalistic environment. We tested whether word-based versus symbol-based road signs were differentially maintained in working memory by dividing attention during encoding and measuring the effect on long-term memory. Participants in a lab watched a video from the point of view of a car driving the streets of a small town. Participants were instructed to indicate whether passing road signs in the video were on the left or right side of the street while either singing the Star-Spangled Banner (phonological divided attention) or describing familiar locations (visuospatial divided attention). For purposes of analysis, road signs were categorized as word signs if they contained words (e.g., a STOP sign) or as symbol signs if they contained illustrations or symbols (e.g., a pedestrian crosswalk sign). A surprise free recall test of the road signs indicated greater recall for word signs than symbol signs, and greater recall of signs for the phonological divided attention group than the visuospatial divided attention group. Critically, the proportion of correct recall of symbol signs was significantly lower for the visuospatial divided attention group than the phonological divided attention group, p = .02, d = 0.63, but recall for word signs was not significantly different between phonological and visuospatial groups, p = .09, d = 0.44. Results supported the hypothesis that visuospatial information—but not phonological information—is stored in working memory in a simulated naturalistic environment that involved incidental learning.


2020 ◽  
Author(s):  
Judy Sein Kim ◽  
Brianna Aheimer ◽  
Veronica Montane Manrara ◽  
Marina Bedny

Empiricist philosophers such as Locke famously argued that people born blind could only acquire shallow, fragmented facts about color. Contrary to this intuition, we report that blind and sighted people share an in-depth understanding of color, despite disagreeing about arbitrary color facts. Relative to the sighted, blind individuals are less likely to generate ‘yellow’ for banana and ‘red’ for stop-sign. However, blind and sighted adults are equally likely to infer that two bananas (natural kinds) and two stop-signs (artifacts with functional colors) are more likely to have the same color than two cars (artifacts with non-functional colors), make similar inferences about novel objects’ colors, and provide similar causal explanations. We argue that people develop inferentially-rich and intuitive “theories” of color regardless of visual experience. Linguistic communication is more effective at aligning people’s theories than their knowledge of verbal facts.


Author(s):  
Andrew Gilbey ◽  
Kawtar Tani
