stop sign
Recently Published Documents

TOTAL DOCUMENTS: 152 (five years: 51)
H-INDEX: 13 (five years: 3)

Author(s): Shan Jiang, David Allison, Andrew T. Duchowski

Background: Navigating large hospitals can be very challenging due to the functional complexity and the evolving changes and expansions of such facilities. Hospital wayfinding issues can lead to stress, negative mood, and poor healthcare experiences among patients, staff, and family members. Objectives: A survey-embedded experiment was conducted using immersive virtual environment (IVE) techniques to explore people's wayfinding performance, mood, and spatial experience in hospital circulation spaces with or without visible greenspaces. Methods: Seventy-four participants were randomly assigned to either group to complete wayfinding tasks in a timed session. Participants' wayfinding performance was interpreted using several indicators, including task completion, duration, walking distance, stops, sign viewing, and route selection. Participants' mood states and perceived environmental attractiveness and atmosphere were surveyed; their perceived levels of presence in the IVE hospitals were also reported. Results: Participants performed better on high-complexity wayfinding tasks in the IVE hospital with visible greenspaces, as indicated by less time and shorter walking distance to find the correct destination, less frequent stops and sign viewing, and more efficient route selection. Participants also reported more favorable mood states, spatial experience, and perceived aesthetics in the IVE hospital with visible greenspaces than in the same environment without window views. IVE techniques could be an efficient tool to supplement environment-behavior studies, with certain conditions noted. Conclusions: Hospital greenspaces located at key decision points could serve as landmarks that positively attract people's attention, aid wayfinding, and improve navigational experience.


SLEEP, 2021
Author(s): David J Sandness, Stuart J McCarter, Lucas G Dueffert, Paul W Shepard, Ashley M Enke, ...

Abstract Study Objectives: To analyze cognitive deficits leading to unsafe driving in patients with REM Sleep Behavior Disorder (RBD), a condition strongly associated with cognitive impairment and synucleinopathy-related neurodegeneration. Methods: Twenty isolated RBD (iRBD) patients, 10 symptomatic RBD (sRBD) patients, and 20 age- and education-matched controls participated in a prospective case-control driving simulation study. Group mean differences were compared, along with correlations between cognitive and driving-safety measures. Results: iRBD and sRBD patients were more cognitively impaired than controls in global neurocognitive functioning, processing speed, visuospatial attention, and distractibility (p < 0.05). sRBD patients drove slower with more collisions than iRBD patients and controls (p < 0.05), required more warnings, and had greater difficulty following and matching the speed of a lead car during simulated car-following tasks (p < 0.05). Driving-safety measures were similar between iRBD patients and controls. Slower psychomotor speed correlated with more off-road accidents (r = 0.65), while processing speed (r = −0.88), executive function (r = −0.90), and visuospatial impairment (r = 0.74) correlated with safety warnings in sRBD patients. Slower stimulus recognition was associated with more signal-light (r = 0.64) and stop-sign (r = 0.56) infractions in iRBD patients. Conclusions: iRBD and sRBD patients have greater selective cognitive impairments than controls, particularly in visuospatial abilities and processing speed. sRBD patients exhibited unsafe driving behaviors associated with processing speed, visuospatial awareness, and attentional impairments. Our results suggest that iRBD patients have driving-simulator performance similar to healthy controls, but that driving capabilities regress as RBD progresses to symptomatic RBD with overt signs of cognitive, autonomic, and motor impairment. Longitudinal studies with serial driving-simulator evaluations and objective on-road driving performance are needed.
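The cognitive-driving associations reported above are Pearson product-moment correlations (e.g., r = 0.65 between psychomotor speed and off-road accidents). The following sketch shows how such a coefficient is computed; the participant values are hypothetical illustrations, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant measures (higher speed score = slower response)
psychomotor_speed = [41, 55, 38, 62, 47, 59, 44, 66, 50, 35]
off_road_accidents = [3, 5, 2, 6, 4, 5, 3, 7, 4, 2]

print(round(pearson_r(psychomotor_speed, off_road_accidents), 2))
```

A positive r here would indicate, as in the abstract, that slower psychomotor speed goes with more off-road accidents.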


Author(s): Katherine Garcia, Ian Robertson, Philip Kortum

The purpose of this study is to compare presentation methods for use in the validation of the Trust in Self-Driving Vehicle Scale (TSDV), a questionnaire designed to assess user trust in self-driving cars. Previous studies have validated trust instruments using traditional videos wherein participants watch a scenario involving an automated system, but there are strong concerns about external validity with this approach. We examined four presentation conditions: a flat-screen monitor with a traditional video, a flat screen with a 2D 180° video, an Oculus Go VR headset with a 2D 180° video, and an Oculus Go with a 3D VR video. Participants watched eight video scenarios of a self-driving vehicle attempting a right-hand turn at a stop sign, rated their trust in the vehicle shown in the video after each scenario using the TSDV, and rated telepresence for the viewing condition. We found a significant interaction between mean TSDV scores for pedestrian collision and presentation condition: the TSDV mean in the headset 2D 180° condition was significantly higher than in the other three conditions. Additionally, when used to view the scenarios as 3D VR videos, the headset received significantly higher ratings of spatial presence than the flat screen with a 2D video; none of the remaining comparisons were statistically significant. Based on the results, the headset is not recommended for short scenarios because the benefits do not outweigh the costs.


Author(s): Scott Mishler, Katherine Garcia, Erin Fuller-Jakaitis, Cong Wang, Bin Hu, ...

Author(s): Khondoker Billah, Hatim O. Sharif, Samer Dessouky

Bicycling is inexpensive, environmentally friendly, and healthful; however, bicyclist safety is a rising concern. This study investigates key bicycle-crash variables that might differ substantially in terms of the party at fault and the presence of bicycle facilities. Employing five years (2014–2018) of data from the Texas Crash Record and Information System database, the effect of these variables on bicyclist injury severity was assessed for San Antonio, Texas, using bivariate analysis and binary logistic regression. Severe injury risk based on the party at fault and bicycle facility presence varied significantly across crash-related variables. The strongest predictors of severe bicyclist injury include bicyclist age and ethnicity, lighting condition, road class, time of occurrence, and period of the week. Driver inattention and disregard of stop signs/lights were the primary contributing factors to bicycle-vehicle crashes. Crash-density heatmap and hotspot analyses were used to identify high-risk locations. The downtown area experienced the highest crash density, while severity hotspots were located at intersections outside the downtown area. This study recommends the introduction of more dedicated/protected bicycle lanes, separation of bicycle lanes from the roadway, a mandatory helmet-use ordinance, reductions in speed limits, prioritization of resources at high-risk locations, and implementation of bike-activated signal detection at signalized intersections.
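Binary logistic regression, as used above, models the log-odds of a severe-injury outcome as a linear function of crash variables; exponentiating a fitted coefficient gives the odds ratio for that variable. The sketch below fits such a model by gradient descent on synthetic, hypothetically coded data (dark lighting, stop-sign disregard); it is not the Texas CRIS dataset or the authors' exact model.

```python
import math

# Hypothetical crash records: [dark_lighting, disregarded_stop_sign]
# Label: 1 = severe bicyclist injury, 0 = non-severe
X = [[0, 0], [0, 1], [1, 0], [1, 1], [0, 0], [1, 1], [1, 0], [0, 1]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.0, 0.0]  # coefficients
b = 0.0         # intercept
lr = 0.5

for _ in range(2000):  # batch gradient descent on the log-loss
    grad_w, grad_b = [0.0, 0.0], 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
        err = p - yi
        for j in range(2):
            grad_w[j] += err * xi[j]
        grad_b += err
    w = [wj - lr * gw / len(X) for wj, gw in zip(w, grad_w)]
    b -= lr * grad_b / len(X)

# Odds ratio for dark lighting: > 1 means higher severe-injury odds
odds_ratio_dark = math.exp(w[0])
print(w, b, odds_ratio_dark)
```

In practice such a model would be fitted with a statistics package and many more predictors, but the coefficient-to-odds-ratio interpretation is the same.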


2021
Author(s): Kostas Alexandridis

We provide an integrated and systematic automation approach to spatial object recognition and positional detection using AI machine learning and computer vision algorithms for Orange County, California. We describe a comprehensive methodology for multi-sensor, high-resolution field data acquisition, along with post-field processing and pre-analysis processing tasks. We developed a series of algorithmic formulations and workflows that integrate convolutional deep neural network learning with detected-object position estimation in 360-degree equirectangular photosphere imagery. We provide examples of application, processing more than 800,000 cardinal directions in photosphere images across two areas in Orange County, and present detection results for stop-sign and fire-hydrant object recognition. We discuss the efficiency and effectiveness of our approach, along with broader inferences related to the performance and implications of this approach for future technological innovations, including automation of spatial data and public asset inventories and near-real-time AI field data systems.
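A key geometric step in this kind of pipeline is converting a detection's pixel position in an equirectangular photosphere into a compass bearing (and hence a cardinal direction). The sketch below shows that conversion only; the CNN detector itself, and the column-to-azimuth convention used, are assumptions for illustration, not the paper's implementation.

```python
def pixel_to_bearing(x_px, width_px, camera_heading_deg):
    """Map an image column in an equirectangular photosphere (spanning
    360 degrees of azimuth, image center at the camera heading) to an
    absolute compass bearing in [0, 360)."""
    offset_deg = (x_px / width_px) * 360.0 - 180.0  # relative to heading
    return (camera_heading_deg + offset_deg) % 360.0

def bearing_to_cardinal(bearing_deg):
    """Bucket a bearing into one of eight cardinal/intercardinal labels."""
    dirs = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return dirs[int((bearing_deg + 22.5) // 45) % 8]

# A detected stop-sign box centered at column 2880 of a 3840-px-wide
# photosphere, with the camera heading due north:
bearing = pixel_to_bearing(2880, 3840, 0.0)
print(bearing, bearing_to_cardinal(bearing))  # prints: 90.0 E
```

Combining such a bearing with the camera's GPS position and an estimated range to the object yields the detected object's map coordinates.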


2021, Vol 118 (33), pp. e2020192118
Author(s): Judy Sein Kim, Brianna Aheimer, Verónica Montané Manrara, Marina Bedny

Empiricist philosophers such as Locke famously argued that people born blind might learn arbitrary color facts (e.g., marigolds are yellow) but would lack color understanding. Contrary to this intuition, we find that blind and sighted adults share causal understanding of color, despite not always agreeing about arbitrary color facts. Relative to sighted people, blind individuals are less likely to generate “yellow” for banana and “red” for stop sign but make similar generative inferences about real and novel objects’ colors, and provide similar causal explanations. For example, people infer that two natural kinds (e.g., bananas) and two artifacts with functional colors (e.g., stop signs) are more likely to have the same color than two artifacts with nonfunctional colors (e.g., cars). People develop intuitive and inferentially rich “theories” of color regardless of visual experience. Linguistic communication is more effective at aligning intuitive theories than knowledge of arbitrary facts.

