AADS: Augmented autonomous driving simulation using data-driven algorithms

2019 ◽  
Vol 4 (28) ◽  
pp. eaaw0863 ◽  
Author(s):  
W. Li ◽  
C. W. Pan ◽  
R. Zhang ◽  
J. P. Ren ◽  
Y. X. Ma ◽  
...  

Simulation systems have become essential to the development and validation of autonomous driving (AD) technologies. The prevailing state-of-the-art approach to simulation uses game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (the assets for simulation) remains a manual task that can be costly and time-consuming. In addition, CG images still lack the richness and authenticity of real-world images, and using CG images for training leads to degraded performance. Here, we present our augmented autonomous driving simulation (AADS). Our formulation augments real-world images with simulated traffic flows to create photorealistic simulation images and renderings. More specifically, we used LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generated plausible traffic flows for cars and pedestrians and composed them into the background. The composite images can be resynthesized with different viewpoints and sensor models (camera or LiDAR). The resulting images are photorealistic, fully annotated, and ready for training and testing of AD systems from perception to planning. We explain our system design and validate our algorithms with a number of AD tasks, from detection to segmentation and prediction. Compared with traditional approaches, our method offers scalability and realism. Scalability is particularly important for AD simulation, and we believe that real-world complexity and diversity cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility of a virtual environment (e.g., vehicle movements) with the richness of the real world to enable effective simulation.
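As a rough, hypothetical illustration of the augmentation idea described in the abstract (a real scanned background composited with simulated agents, with annotations produced as a by-product), the following Python sketch pastes placeholder agent patches onto a background image. It is not the authors' AADS implementation; all names and data are made up.

```python
# Minimal, hypothetical sketch of the augmentation idea: paste simulated agents onto a
# real background image and emit the annotations "for free". Illustrative only; AADS
# performs full view-dependent rendering rather than this naive 2D overlay.
import numpy as np

def composite_agents(background: np.ndarray, agents):
    """background: HxWx3 image scanned from the real world.
    agents: list of dicts with a top-left (x, y), an hxwx3 patch, and a class label."""
    frame = background.copy()
    annotations = []
    for agent in agents:
        x, y = agent["xy"]
        patch = agent["patch"]
        h, w = patch.shape[:2]
        frame[y:y + h, x:x + w] = patch          # naive overlay of the simulated agent
        annotations.append({"bbox": (x, y, w, h), "label": agent["label"]})
    return frame, annotations

# Toy usage: one simulated "car" pasted onto a gray street background.
bg = np.full((480, 640, 3), 128, dtype=np.uint8)
car = {"xy": (300, 200), "patch": np.zeros((40, 80, 3), dtype=np.uint8), "label": "car"}
image, labels = composite_agents(bg, [car])
print(labels)   # [{'bbox': (300, 200, 80, 40), 'label': 'car'}]
```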

Author(s):  
Joseph K. Muguro ◽  
Pringgo Widyo Laksono ◽  
Yuta Sasatake ◽  
Kojiro Matsushita ◽  
Minoru Sasaki

As Automated Driving Systems (ADS) technology is assimilated into the market, the driver's role will shift to that of a supervisor. A key consideration is engaging the driver in a secondary task that keeps the driver/user in the control loop. The paper's objective is to monitor driver engagement with a game and to identify any impact the task has on hazard recognition. We designed a driving simulation in Unity3D incorporating three conditions: No-task, AR-Video, and AR-Game. The driver engaged in an AR object interception game while monitoring the road for threatening road scenarios. The results showed less than a 1-second difference between the mean reaction times of the gaming task (mean = 2.55 s, SD = 0.1002 s) and the no-task condition (mean = 2.55 s, SD = 0.1002 s). Game scoring followed three phases: learning, saturation, and decline. From these profiles, it is possible to quantify and infer drivers' engagement with the game task. The paper proposes an alternative monitoring task that also has utility, i.e., entertaining the user. Further AR-Game experiments in a real-world car environment will be performed to confirm these results, following the recommendations derived from the current test.
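A minimal sketch of how the reported comparison of hazard-recognition reaction times between conditions could be computed, assuming a Welch t-test; the arrays below are placeholder values, not the study's data.

```python
# Hypothetical comparison of hazard-recognition reaction times between the no-task and
# AR-Game conditions using a Welch t-test. Placeholder values only.
import numpy as np
from scipy import stats

no_task = np.array([2.41, 2.55, 2.62, 2.48, 2.58])   # reaction times in seconds
ar_game = np.array([2.50, 2.66, 2.61, 2.49, 2.70])

t, p = stats.ttest_ind(no_task, ar_game, equal_var=False)
print(f"no-task mean={no_task.mean():.2f}s, AR-Game mean={ar_game.mean():.2f}s, p={p:.3f}")
```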


Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 694
Author(s):  
Mingyun Wen ◽  
Jisun Park ◽  
Yunsick Sung ◽  
Yong Woon Park ◽  
Kyungeun Cho

Recently, virtual environment-based techniques for training sensor-based autonomous driving models have been widely employed due to their efficiency. However, a simulated virtual environment must be highly similar to its real-world counterpart to ensure that such models are applicable to actual autonomous vehicles. Although advances in hardware and three-dimensional graphics engine technology have enabled the creation of realistic virtual driving environments, the myriad scenarios occurring in the real world can be simulated only to a limited extent. To address this problem, this study proposes a scenario simulation and modeling framework that simulates the behavior of objects that may be encountered while driving. The framework maximizes the number and variety of scenarios, and thereby the driving experience, available in a virtual environment. Furthermore, a simulator was implemented and employed to evaluate the performance of the proposed framework.
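As a hedged sketch of what scenario randomization in such a framework might look like, the snippet below samples object types, behaviors, and spawn parameters; the object/behavior lists and parameters are illustrative assumptions, not the paper's actual framework or API.

```python
# Hypothetical scenario randomization in the spirit of the framework described:
# sample object types and behaviors to broaden the set of driving scenarios presented.
import random

OBJECT_TYPES = ["pedestrian", "cyclist", "car", "animal"]
BEHAVIORS = {
    "pedestrian": ["cross_road", "walk_sidewalk", "stand"],
    "cyclist": ["ride_lane", "swerve"],
    "car": ["follow_lane", "cut_in", "brake_hard"],
    "animal": ["dart_across"],
}

def sample_scenario(n_objects=5, seed=None):
    rng = random.Random(seed)
    scenario = []
    for _ in range(n_objects):
        obj = rng.choice(OBJECT_TYPES)
        scenario.append({
            "type": obj,
            "behavior": rng.choice(BEHAVIORS[obj]),
            "spawn_m_ahead": rng.uniform(10, 120),   # distance ahead of the ego vehicle
            "trigger_time_s": rng.uniform(0, 60),    # when the behavior starts
        })
    return scenario

print(sample_scenario(seed=42))
```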


Author(s):  
Roy C. Davies ◽  
Gerd Johansson ◽  
Anita Lindén ◽  
Kerstin Boschian ◽  
Bertil Sonesson ◽  
...  

Author(s):  
Michal Kafri ◽  
Patrice L. Weiss ◽  
Gabriel Zeilig ◽  
Moshe Bondi ◽  
Ilanit Baum-Cohen ◽  
...  

Background: Virtual reality (VR) enables objective and accurate measurement of behavior in ecologically valid and safe environments, while controlling the delivery of stimuli and maintaining standardized measurement protocols. Despite this potential, studies that compare virtual and real-world performance of complex daily activities are scarce. This study aimed to compare the cognitive strategies and gait characteristics of young and older healthy adults as they engaged in a complex task while navigating a real shopping mall and a high-fidelity virtual replica of the mall. Methods: Seventeen older adults (mean (SD) age = 71.2 (5.6) years, 64% males) and 17 young adults (26.7 (3.7) years, 35% males) participated. In two separate sessions they performed the Multiple Errands Test (MET) in a real-world mall or the Virtual MET (VMET) in the virtual environment. The real-world environment was a small shopping area, and the virtual environment was created within the CAREN™ (Computer Assisted Rehabilitation Environment) Integrated Reality System. Task performance was assessed using motor and physiological measures (gait parameters and heart rate), MET or VMET time and score, and navigation efficiency (cognitive performance and strategy). Between-group (age) and within-group (environment) differences were analyzed with repeated-measures ANOVA. Results: There were no significant age effects for any of the gait parameters, but there were significant environment effects such that both age groups walked faster (F(1,32) = 154.96, p < 0.0001) with longer step lengths (F(1,32) = 86.36, p < 0.0001), and had lower spatial and temporal gait variability (F(1,32) = 95.71–36.06, p < 0.0001) and lower heart rate (F(1,32) = 13.40, p < 0.01) in the real world. There were significant age effects for MET/VMET scores (F(1,32) = 19.77, p < 0.0001) and total time (F(1,32) = 11.74, p < 0.05), indicating better performance by the younger group, and a significant environment effect for navigation efficiency (F(1,32) = 7.6, p < 0.01), which was higher in the virtual environment. Conclusions: This comprehensive, ecological approach to measuring performance during tasks reminiscent of complex life situations showed the strengths of virtual environments for assessing cognitive aspects of performance and their limitations for assessing motor aspects. Difficulties among older adults were apparent mainly in the cognitive aspects, indicating a need to evaluate these during complex task performance.
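The analysis described (between-subject factor: age group; within-subject factor: environment) corresponds to a mixed-design ANOVA. Below is a minimal sketch assuming the pingouin package's mixed_anova function; the data frame is fabricated solely to show the call shape, not the study's data.

```python
# Hypothetical mixed-design ANOVA (between: age group; within: environment) with pingouin.
# The values are placeholders, not measurements from the study.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject":     [1, 1, 2, 2, 3, 3, 4, 4],
    "age_group":   ["young", "young", "young", "young", "older", "older", "older", "older"],
    "environment": ["real", "virtual"] * 4,
    "gait_speed":  [1.35, 1.10, 1.40, 1.15, 1.20, 0.95, 1.18, 0.98],   # m/s, made up
})

aov = pg.mixed_anova(data=df, dv="gait_speed", within="environment",
                     between="age_group", subject="subject")
print(aov[["Source", "F", "p-unc"]])
```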


Author(s):  
Walter Morales Alvarez ◽  
Francisco Miguel Moreno ◽  
Oscar Sipele ◽  
Nikita Smirnov ◽  
Cristina Olaverri-Monreal

2020 ◽  
Vol 13 ◽  
pp. 175628642092268 ◽  
Author(s):  
Francesco Patti ◽  
Andrea Visconti ◽  
Antonio Capacchione ◽  
Sanjeev Roy ◽  
Maria Trojano ◽  
...  

Background: The CLARINET-MS study assessed the long-term effectiveness of cladribine tablets by following patients with multiple sclerosis (MS) in Italy, using data from the Italian MS Registry. Methods: Real-world data (RWD) from Italian MS patients who participated in cladribine tablets randomised clinical trials (RCTs; CLARITY, CLARITY Extension, ONWARD or ORACLE-MS) across 17 MS centres were obtained from the Italian MS Registry. RWD were collected during a set observation period, spanning from the last dose of cladribine tablets during the RCT (defined as baseline) to the last visit date in the registry, treatment switch to other disease-modifying drugs, date of last Expanded Disability Status Scale recording or date of the last relapse (whichever occurred last). Time-to-event analysis was completed using the Kaplan–Meier (KM) method. Median duration and associated 95% confidence intervals (CI) were estimated from the model. Results: The time span under observation in the Italian MS Registry was 1–137 (median 80.3) months. In the total Italian patient population (n = 80), the KM estimates for the probability of being relapse-free at 12, 36 and 60 months after the last dose of cladribine tablets were 84.8%, 66.2% and 57.2%, respectively. The corresponding probability of being progression-free at 60 months after the last dose was 63.7%. The KM estimate for the probability of not initiating another disease-modifying treatment at 60 months after the last dose of cladribine tablets was 28.1%, and the median time-to-treatment change was 32.1 (95% CI 15.5–39.5) months. Conclusion: CLARINET-MS provides an indirect measure of the long-term effectiveness of cladribine tablets. Over half of the MS patients analysed did not relapse or experience disability progression during 60 months of follow-up from the last dose, suggesting that cladribine tablets remain effective in years 3 and 4 after short courses at the beginning of years 1 and 2.
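A minimal sketch of the Kaplan–Meier time-to-event estimation described in the Methods, assuming the lifelines package; the durations and event indicators are fabricated placeholders, not CLARINET-MS registry data.

```python
# Hypothetical Kaplan-Meier estimate of time to relapse after the last cladribine dose.
# Fabricated durations/events purely to illustrate the method.
import numpy as np
from lifelines import KaplanMeierFitter

# months from last dose to relapse (event=1) or censoring (event=0)
durations = np.array([12, 30, 45, 60, 80, 24, 55, 90, 15, 70])
events    = np.array([1,  0,  1,  0,  0,  1,  0,  0,  1,  0])

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="relapse-free")

print(kmf.survival_function_)      # KM estimate at each observed time
print(kmf.median_survival_time_)   # median time to relapse (inf if >50% remain event-free)
```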


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 875
Author(s):  
Kerri Beckmann ◽  
Hans Garmo ◽  
Ingela Franck Lissbrant ◽  
Pär Stattin

Real-world data (RWD), that is, data from sources other than controlled clinical trials, play an increasingly important role in medical research. The development of quality clinical registers, increasing access to administrative data sources, growing computing power and data linkage capacities have contributed to greater availability of RWD. Evidence derived from RWD increases our understanding of prostate cancer (PCa) aetiology, natural history and effective management. While randomised controlled trials offer the best level of evidence for establishing the efficacy of medical interventions and making causal inferences, studies using RWD offer complementary evidence about the effectiveness, long-term outcomes and safety of interventions in real-world settings. RWD provide the only means of addressing questions about risk factors and exposures that cannot be “controlled”, or when assessing rare outcomes. This review provides examples of the value of RWD for generating evidence about PCa, focusing on studies using data from a quality clinical register, namely the National Prostate Cancer Register (NPCR) Sweden, with longitudinal data on advanced PCa in Patient-overview Prostate Cancer (PPC) and data linkages to other sources in Prostate Cancer data Base Sweden (PCBaSe).


2016 ◽  
Vol 3 (7) ◽  
pp. 160131 ◽  
Author(s):  
Daniel Smith ◽  
Mark Dyble ◽  
James Thompson ◽  
Katie Major ◽  
Abigail E. Page ◽  
...  

Humans regularly cooperate with non-kin, which has been theorized to require reciprocity between repeatedly interacting and trusting individuals. However, the role of repeated interactions has not previously been demonstrated in explaining real-world patterns of hunter–gatherer cooperation. Here we explore cooperation among the Agta, a population of Filipino hunter–gatherers, using data from both actual resource transfers and two experimental games across multiple camps. Patterns of cooperation vary greatly between camps and depend on socio-ecological context. Stable camps (with fewer changes in membership over time) were associated with greater reciprocal sharing, indicating that an increased likelihood of future interactions facilitates reciprocity. This is the first study reporting an association between reciprocal cooperation and hunter–gatherer band stability. Under conditions of low camp stability individuals still acquire resources from others, but do so via demand sharing (taking from others), rather than based on reciprocal considerations. Hunter–gatherer cooperation may either be characterized as reciprocity or demand sharing depending on socio-ecological conditions.


2021 ◽  
Vol 39 (28_suppl) ◽  
pp. 253-253
Author(s):  
Maureen Canavan ◽  
Xiaoliang Wang ◽  
Mustafa Ascha ◽  
Rebecca A. Miksad ◽  
Timothy N Showalter ◽  
...  

Background: Among patients with cancer, receipt of systemic oncolytic therapy near the end of life (EOL) does not improve outcomes and worsens the patient and caregiver experience. Accordingly, the ASCO/NQF measure, Proportion Receiving Chemotherapy in the Last 14 Days of Life, was published in 2012. Over the last decade there has been exponential growth in high-cost targeted and immune therapies, which may be perceived as less toxic than traditional chemotherapy. In this study, we identified rates and types of EOL systemic therapy in today's real-world practice; these can serve as benchmarks for cancer care organizations to drive improvement efforts. Methods: Using data from the nationwide Flatiron Health electronic health record (EHR)-derived de-identified database, we included patients who died during 2015 through 2019, were diagnosed after 2011, and had documented cancer treatment. We identified the use of aggressive EOL systemic treatment (including chemotherapy, immunotherapy, and combinations thereof) within both 30 days and 14 days prior to death. We estimated standardized EOL rates using mixed-level logistic regression models adjusting for patient- and practice-level factors. Year-specific adjusted rates were estimated in a stratified annualized analysis. Results: We included 57,127 patients, 38% of whom had documentation of having received any type of systemic cancer treatment within 30 days of death (SD: 5%; range: 25%–56%), and 17% within 14 days of death (SD: 3%; range: 10%–30%). Chemotherapy alone was the most common EOL treatment received (18% at 30 days, 8% at 14 days), followed by immunotherapy (± other treatment) (11% at 30 days, 4% at 14 days). Overall rates of EOL treatment did not change over the study period: treatment within 30 days (39% in 2015 to 37% in 2019) and within 14 days (17% in 2015 to 17% in 2019) of death. However, rates of chemotherapy alone decreased over the study period from 24% to 14% within 30 days of death, and from 10% to 6% within 14 days. In comparison, rates of immunotherapy with chemotherapy (0%–6% at 30 days, 0%–2% at 14 days) and of immunotherapy alone or with other treatment types (4%–13% at 30 days, 1%–4% at 14 days) increased over time in both windows. Conclusions: End-of-life systemic cancer treatment rates have not substantively changed over time despite national efforts and expert guidance. While rates of traditional chemotherapy have decreased, rates of costly immunotherapy and targeted therapy have increased, which has been associated with higher total cost of care and overall healthcare utilization. Future work should examine the drivers of end-of-life care in the era of immuno-oncology.
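A minimal sketch of the descriptive end-of-life metric (the share of decedents whose last systemic therapy fell within 30 or 14 days of death), using pandas on fabricated records; the study additionally standardized these rates with mixed-level logistic regression, which is not reproduced here.

```python
# Hypothetical computation of end-of-life treatment rates from fabricated records.
import pandas as pd

df = pd.DataFrame({
    "patient_id":   [1, 2, 3, 4, 5],
    "death_date":   pd.to_datetime(["2019-03-01", "2019-05-10", "2019-07-20",
                                    "2019-09-05", "2019-11-30"]),
    "last_tx_date": pd.to_datetime(["2019-02-10", "2019-01-15", "2019-07-10",
                                    "2019-06-01", "2019-11-25"]),
})

days_before_death = (df["death_date"] - df["last_tx_date"]).dt.days
for window in (30, 14):
    rate = (days_before_death <= window).mean()
    print(f"treated within {window} days of death: {rate:.0%}")
```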

