Development of a Novel Convolutional Neural Network Architecture Named RoadweatherNet for Trajectory-Level Weather Detection using SHRP2 Naturalistic Driving Data

Author(s):  
Md Nasim Khan ◽  
Mohamed M. Ahmed

Driver performance can be significantly impaired in adverse weather because of poor visibility and slippery roadways. Therefore, providing drivers with accurate weather information in real time is vital for safe driving. The current practice of collecting roadway weather information relies on weather stations, which are expensive and cannot provide trajectory-level weather information. The primary objective of this study was therefore to develop an affordable detection system capable of providing trajectory-level weather information at the road surface level in real time. This study utilized Strategic Highway Research Program 2 Naturalistic Driving Study video data combined with a promising machine learning technique, the convolutional neural network (CNN), to develop a weather detection model with seven weather categories: clear, light rain, heavy rain, light snow, heavy snow, distant fog, and near fog. A novel CNN architecture, named RoadweatherNet, was carefully crafted for the weather detection task. Evaluation on a test dataset revealed that RoadweatherNet detects weather conditions with excellent performance, achieving an overall accuracy of 93%. RoadweatherNet was also compared with six pre-trained CNN models, namely AlexNet, ResNet18, ResNet50, GoogLeNet, ShuffleNet, and SqueezeNet, and provided nearly identical performance with a significant reduction in training time. The proposed weather detection model is cost-efficient and requires less computational power; it can therefore be made widely available, owing largely to the recent proliferation of smartphone cameras, and can be used to expand and update current weather-based variable speed limit systems.
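The abstract does not give RoadweatherNet's layer configuration, so as a rough illustration only, here is a minimal PyTorch sketch of a compact seven-class weather classifier; the channel widths, depth, and class ordering are our assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

# Hypothetical compact CNN for 7-way weather classification.
# The real RoadweatherNet layout is not specified in the abstract.
WEATHER_CLASSES = ["clear", "light rain", "heavy rain",
                   "light snow", "heavy snow", "distant fog", "near fog"]

class WeatherCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling keeps the head small
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```

A model this small is consistent with the abstract's emphasis on low computational cost (e.g. suitability for smartphone deployment), since global average pooling avoids large fully connected layers.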

Author(s):  
Saurabh Takle ◽  
Shubham Desai ◽  
Sahil Mirgal ◽  
Ichhanshu Jaiswal

The main causes of accidents are manual, visual, and cognitive distraction. Of these three, manual distraction covers activities in which the driver's hands are off the wheel, such as talking or texting on a mobile phone, eating and drinking, talking to passengers in the vehicle, adjusting the radio, or applying makeup. To address manual distraction, a Convolutional Neural Network (CNN) model based on ResNet-50 with transfer learning, comprising 23,587,712 parameters, was used. The dataset used is the State Farm Distracted Driver Detection dataset. The training accuracy is 97.27% and the validation accuracy is 55%. The model further detects distractions in real time on a video feed; for this purpose the system uses OpenCV, and the model is integrated with the frontend using Flask.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11 ◽  
Author(s):  
Zhijian Huang ◽  
Bowen Sui ◽  
Jiayi Wen ◽  
Guohe Jiang

The shipping industry is rapidly moving toward intelligent operation. An accurate and fast method for ship image/video detection and classification is of great significance not only for port management but also for the safe operation of Unmanned Surface Vehicles (USVs). This paper therefore builds a dedicated dataset for ship image/video detection and classification and presents a method based on an improved regressive deep convolutional neural network. The method improves the regressive convolutional neural network in four respects. First, the feature extraction layer is made lightweight by referring to YOLOv2. Second, a new feature pyramid network layer is designed by improving the corresponding structure in YOLOv3. Third, anchor frames and scales suitable for ships are designed with a clustering algorithm, reducing the number of anchors by 60%. Last, the activation function is verified and optimized. Detection experiments on 7 types of ships show that the proposed method has advantages over the YOLO series networks and other intelligent methods; it addresses the low recognition rate and poor real-time performance of ship image/video detection and classification with a small dataset. On the test set, the final mAP is 0.9209, the recall is 0.9818, the AIOU is 0.7991, and the frame rate is 78–80 FPS in video detection. The method thus provides a highly accurate, real-time ship detection approach for intelligent port management and the visual processing of USVs. In addition, the proposed regressive deep convolutional network has better comprehensive performance than YOLOv2/v3.
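The anchor-design step, clustering ground-truth box sizes so that fewer, better-fitting anchors are needed, is commonly done with k-means under a 1−IoU distance, as introduced for YOLOv2; the paper does not give its exact algorithm, so this NumPy sketch (function names our own) shows only the standard technique:

```python
import numpy as np

def iou_wh(boxes, anchors):
    # IoU between (N,2) box width/heights and (K,2) anchors,
    # with both assumed aligned at a common top-left corner.
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0:1] * boxes[:, 1:2] +
             (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    # k-means where "closest" means highest IoU (distance = 1 - IoU).
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors
```

Because ship silhouettes tend to be wide and low, clustering real ship boxes this way yields elongated anchors, which is how a 60% reduction in anchor count can be achieved without losing coverage.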


2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic image deblurring problem, which assumes that the PSF is known and spatially invariant. Recently, Convolutional Neural Networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is that our network architecture preserves both large and small features in the image. The second is that the training dataset is created so as to preserve details. The third is that we extend the images to minimize the effect of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.
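The border-extension idea, padding the blurred image before inversion so that ringing caused by the boundary discontinuity falls outside the cropped result, can be illustrated with a classical FFT-based Wiener filter. This is a stand-in for the paper's CNN, used here only to show the padding step, and all names are our own:

```python
import numpy as np

def pad_psf(psf, shape):
    # Embed the PSF in a full-size array and roll its center to (0, 0)
    # so FFT-based convolution is not spatially shifted.
    out = np.zeros(shape)
    out[:psf.shape[0], :psf.shape[1]] = psf
    return np.roll(out, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

def wiener_deconv(blurred, psf, pad=32, nsr=1e-3):
    # Extend the borders by reflection before inverting, then crop,
    # so wrap-around ringing stays outside the returned image.
    ext = np.pad(blurred, pad, mode="reflect")
    H = np.fft.fft2(pad_psf(psf, ext.shape))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter, nsr = noise/signal
    rec = np.real(np.fft.ifft2(np.fft.fft2(ext) * G))
    return rec[pad:-pad, pad:-pad]
```

The `pad` margin plays the same role as the image extension described above: the larger the PSF, the wider the extension needed before the boundary artifacts stop contaminating the cropped interior.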

