Deep Learning-Based Decision Making for Autonomous Vehicle at Roundabout

Author(s):  
Weichao Wang ◽  
Lei Jiang ◽  
Shiran Lin ◽  
Hui Fang ◽  
Qinggang Meng


2019 ◽  
Vol 33 (3) ◽  
pp. 89-109 ◽  
Author(s):  
Ting (Sophia) Sun

SYNOPSIS: This paper aims to promote the application of deep learning to audit procedures by illustrating how the capabilities of deep learning for text understanding, speech recognition, visual recognition, and structured data analysis fit into the audit environment. Based on these four capabilities, deep learning serves two major functions in supporting audit decision-making: information identification and judgment support. The paper proposes a framework for applying these two deep learning functions to a variety of audit procedures in different audit phases. An audit data warehouse of historical data can be used to construct prediction models, providing suggested actions for various audit procedures. The data warehouse will be updated and enriched with new data instances through the application of deep learning and a human auditor's corrections. Finally, the paper discusses the challenges faced by the accounting profession, regulators, and educators when it comes to applying deep learning.
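As a minimal illustration of the judgment-support function described above, the sketch below trains a small prediction model on a hypothetical audit data warehouse and returns a suggested action for a new engagement. The feature names, action labels, and values are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: a small prediction model trained on a hypothetical audit
# data warehouse suggests an action for a new case. All column names,
# action labels, and values are illustrative assumptions.
import pandas as pd
from sklearn.neural_network import MLPClassifier

history = pd.DataFrame({
    "misstatement_risk": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],
    "account_balance":   [1e5, 5e6, 2e5, 8e6, 5e4, 3e6],
    "prior_exceptions":  [0,   3,   1,   4,   0,   2],
    "action": ["accept", "expand_testing", "accept",
               "expand_testing", "accept", "expand_testing"],
})

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(history.drop(columns="action"), history["action"])

# Suggested action for a new engagement; a human auditor's correction would
# be written back to the warehouse to enrich future training data.
new_case = pd.DataFrame({"misstatement_risk": [0.6],
                         "account_balance":   [4e6],
                         "prior_exceptions":  [2]})
print(model.predict(new_case)[0])
```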


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Steven A. Hicks ◽  
Jonas L. Isaksen ◽  
Vajira Thambawita ◽  
Jonas Ghouse ◽  
Gustav Ahlberg ◽  
...  

Abstract: Deep learning-based tools may annotate and interpret medical data more quickly, consistently, and accurately than medical doctors. However, as medical doctors are ultimately responsible for clinical decision-making, any deep learning-based prediction should be accompanied by an explanation that a human can understand. We present an approach called electrocardiogram gradient class activation map (ECGradCAM), which is used to generate attention maps and explain the reasoning behind deep learning-based decision-making in ECG analysis. Attention maps may be used in the clinic to aid diagnosis, discover new medical knowledge, and identify novel features and characteristics of medical tests. In this paper, we showcase how ECGradCAM attention maps can unmask how a novel deep learning model measures both amplitudes and intervals in 12-lead electrocardiograms, and we show an example of how attention maps may be used to develop novel ECG features.
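The ECGradCAM method itself is not reproduced here, but the sketch below shows the generic Grad-CAM computation it builds on, applied to an assumed 1-D CNN over 12-lead ECG signals. The tiny network, layer sizes, and input shape are placeholders rather than the authors' model.

```python
# Hedged sketch of a Grad-CAM-style attention map for a 1-D ECG CNN.
# The tiny network and the 12-lead, 5000-sample input are assumptions.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    def __init__(self, leads=12, classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(32, classes)

    def forward(self, x):
        fmaps = self.features(x)                      # (batch, 32, time)
        logits = self.fc(self.pool(fmaps).squeeze(-1))
        return logits, fmaps

model = TinyECGNet()
ecg = torch.randn(1, 12, 5000)                        # one 12-lead recording

logits, fmaps = model(ecg)
score = logits[0, logits[0].argmax()]                 # score of predicted class
grads = torch.autograd.grad(score, fmaps)[0]          # d(score)/d(feature maps)

# Grad-CAM: weight each channel by its mean gradient, sum, apply ReLU,
# then normalise to [0, 1] to get a per-sample attention map over time.
weights = grads.mean(dim=2, keepdim=True)
cam = torch.relu((weights * fmaps).sum(dim=1))
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                                      # torch.Size([1, 5000])
```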


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose origins are sometimes traced to Kurt Gödel's unprovable computational statements of 1931, is now commonly referred to as deep learning or machine learning. AI is defined as a computing machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historical and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision-making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
Vol 10 (3) ◽  
pp. 42
Author(s):  
Mohammed Al-Nuaimi ◽  
Sapto Wibowo ◽  
Hongyang Qu ◽  
Jonathan Aitken ◽  
Sandor Veres

The evolution of driving technology has recently progressed from active safety features and ADAS systems to fully sensor-guided autonomous driving. Bringing such a vehicle to market requires not only simulation and testing but also formal verification to account for all possible traffic scenarios. A new verification approach, which combines two well-known model checkers, the model checker for multi-agent systems (MCMAS) and the probabilistic model checker PRISM, is presented for this purpose. The overall structure of our autonomous vehicle (AV) system consists of: (1) a perception system of sensors that feeds data into (2) a rational agent (RA) based on a belief–desire–intention (BDI) architecture, which uses a model of the environment for verification of its decision-making, and (3) a feedback control system for following a self-planned path. MCMAS is used to check the consistency and stability of the BDI agent logic at design time. PRISM is used to provide the RA with the probability of success while it decides to take action during run-time operation; this allows the RA to select, from several generated alternatives, the movement with the highest probability of success. The framework has been tested on a new AV software platform built using the robot operating system (ROS) and the virtual reality (VR) Gazebo Simulator, including a parking lot scenario to test the feasibility of the approach in a realistic environment. A practical implementation of the AV system was also carried out on the experimental testbed.
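A minimal sketch of the run-time selection step described above: the rational agent asks a probabilistic model checker for each candidate maneuver's probability of success and picks the best one. The maneuver names, probability values, and the prism_query placeholder are assumptions; this is not the actual PRISM interface.

```python
# Hedged sketch of run-time action selection by the rational agent (RA).
# prism_query() stands in for a real PRISM property check and returns
# assumed probabilities; maneuver names and values are hypothetical.
from typing import Dict, List, Tuple

def prism_query(maneuver: str) -> float:
    """Placeholder for a PRISM query such as Pmax=? [ F success ]."""
    assumed_results: Dict[str, float] = {
        "overtake_left": 0.62,
        "follow_lead_vehicle": 0.97,
        "pull_into_parking_bay": 0.88,
    }
    return assumed_results[maneuver]

def select_maneuver(candidates: List[str]) -> Tuple[str, float]:
    # Rank the RA's generated alternatives by verified probability of success.
    scored = {m: prism_query(m) for m in candidates}
    best = max(scored, key=scored.get)
    return best, scored[best]

if __name__ == "__main__":
    choice, p = select_maneuver(["overtake_left", "follow_lead_vehicle",
                                 "pull_into_parking_bay"])
    print(f"selected {choice} with verified success probability {p:.2f}")
```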


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1052
Author(s):  
Leang Sim Nguon ◽  
Kangwon Seo ◽  
Jung-Hyun Lim ◽  
Tae-Jun Song ◽  
Sung-Hyun Cho ◽  
...  

Mucinous cystic neoplasms (MCN) and serous cystic neoplasms (SCN) account for a large portion of solitary pancreatic cystic neoplasms (PCN). In this study we implemented a convolutional neural network (CNN) model based on ResNet50 to differentiate between MCN and SCN. The training data were collected retrospectively from 59 MCN and 49 SCN patients at two different hospitals. Data augmentation was used to enhance the size and quality of the training datasets. A fine-tuning approach was used, adopting a pre-trained model via transfer learning while training only selected layers. Testing was conducted by varying the endoscopic ultrasonography (EUS) image sizes and positions to evaluate the network's differentiation performance. The proposed network model achieved up to 82.75% accuracy and a 0.88 (95% CI: 0.817–0.930) area under the curve (AUC) score. The performance of the implemented deep learning network in decision-making using only EUS images is comparable to that of traditional manual decision-making using EUS images along with supporting clinical information. Gradient-weighted class activation mapping (Grad-CAM) confirmed that the network model accurately learned the features of the cyst region. This study demonstrates the feasibility of diagnosing MCN and SCN using a deep learning network model; further improvement using more datasets is needed.
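For readers unfamiliar with this setup, the sketch below shows one common way to fine-tune a pre-trained ResNet50 for a two-class task such as MCN versus SCN using PyTorch/torchvision. The choice of layers to unfreeze and the hyperparameters are assumptions, not the study's exact configuration.

```python
# Hedged sketch of transfer learning with a pre-trained ResNet50 for a
# two-class task (MCN vs. SCN). Layer choices and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze everything, then unfreeze only the last residual block.
for param in model.parameters():
    param.requires_grad = False
for param in model.layer4.parameters():
    param.requires_grad = True

# Replace the ImageNet head with a two-class head (trainable by default).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for EUS images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```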


2021 ◽  
Vol 54 (4) ◽  
pp. 1-37
Author(s):  
Azzedine Boukerche ◽  
Xiren Ma

Vision-based Automated Vehicle Recognition (VAVR) has attracted considerable attention recently. In particular, vehicle recognition has made significant progress by relying on emerging deep learning methods, which have powerful feature extraction and pattern-learning abilities. VAVR is an essential part of Intelligent Transportation Systems: a VAVR system can quickly and accurately locate a target vehicle, which significantly helps improve regional security. A comprehensive VAVR system contains three components: Vehicle Detection (VD), Vehicle Make and Model Recognition (VMMR), and Vehicle Re-identification (VRe-ID). These components perform coarse-to-fine recognition tasks in three steps. In this article, we conduct a thorough review and comparison of the state-of-the-art deep learning-based models proposed for VAVR. We present a detailed introduction to the different vehicle recognition datasets used for a comprehensive evaluation of the proposed models. We also critically discuss the major challenges and future research trends in each task. Finally, we summarize the characteristics of the methods for each task. Our comprehensive model analysis will help researchers who are interested in VD, VMMR, and VRe-ID, providing them with possible directions for solving current challenges and further improving the performance and robustness of models.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1523
Author(s):  
Nikita Smirnov ◽  
Yuzhou Liu ◽  
Aso Validi ◽  
Walter Morales-Alvarez ◽  
Cristina Olaverri-Monreal

Autonomous vehicles are expected to display human-like behavior, at least to the extent that their decisions can be intuitively understood by other road users. If this is not the case, the coexistence of manual and autonomous vehicles in a mixed environment might affect road user interactions negatively and might jeopardize road safety. It is therefore highly important to design algorithms that are capable of analyzing human decision-making processes and of reproducing them. In this context, lane-change maneuvers have been studied extensively, but not all potential scenarios have been considered, since most works have focused on highway rather than urban scenarios. We contribute to this field of research by investigating a particular urban traffic scenario in which an autonomous vehicle needs to determine the level of cooperation of the vehicles in the adjacent lane in order to proceed with a lane change. To this end, we present a game theory-based decision-making model for lane changing in congested urban intersections. The model takes as input driving-related parameters of the vehicles in the intersection before they come to a complete stop. We validated the model using the Co-AutoSim simulator, comparing the prediction model outcomes with actual participant decisions, i.e., whether they allowed the autonomous vehicle to drive in front of them. The results are promising: prediction accuracy was 100% in the cases in which participants allowed the lane change and 83.3% in the other cases. The false predictions were due to delays in resuming driving after the traffic light turned green.
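As a toy illustration of the kind of game-theoretic decision described above, the sketch below picks the lane-change action with the highest expected payoff given an estimated probability that the adjacent driver will yield. The payoff values and probabilities are invented and do not come from the paper's calibrated model.

```python
# Hedged sketch of a game-theoretic lane-change decision. The payoff matrix
# and cooperation probabilities are hypothetical illustrations.
import numpy as np

# Rows: AV action (0 = stay in lane, 1 = change lane)
# Cols: adjacent driver (0 = does not yield, 1 = yields)
payoff_av = np.array([
    [0.0,  0.0],   # staying is neutral either way
    [-5.0, 2.0],   # changing is costly if the driver does not yield
])

def decide_lane_change(p_yield: float) -> str:
    """Pick the AV action with the highest expected payoff."""
    expected = payoff_av @ np.array([1.0 - p_yield, p_yield])
    return "change lane" if expected[1] > expected[0] else "stay in lane"

# Cooperation probability, e.g. predicted from pre-stop driving parameters.
print(decide_lane_change(p_yield=0.8))   # -> change lane
print(decide_lane_change(p_yield=0.3))   # -> stay in lane
```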

