COMPUTER VISION BASED TRAFFIC SIGN SENSING FOR SMART TRANSPORT

2019 ◽  
Vol 1 (01) ◽  
pp. 11-19 ◽  
Author(s):  
James Deva Koresh H

The paper puts forward a real-time traffic sign sensing (detection and recognition) framework for enhancing a vehicle's capability for safe driving and path planning. The proposed method utilizes a capsule neural network, which outperforms the convolutional neural network by eliminating the need for manual feature engineering. The capsule network provides better resistance to spatial variance and higher reliability in sensing traffic signs compared with the convolutional network. Evaluation of the capsule network on an Indian traffic sign data set shows 15% higher accuracy than the CNN and the RNN.
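As a rough illustration of the capsule idea summarized above, the PyTorch sketch below shows the squash non-linearity and a primary-capsule layer feeding a simple classification head. The layer sizes, the 43-class output, the 32x32 input size, and the replacement of routing-by-agreement with a learned linear map are assumptions made for brevity, not the authors' published architecture.

```python
# Minimal capsule-style classifier sketch (illustrative assumptions, not the
# paper's model): 32x32 RGB traffic-sign images, 43 output classes.
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity: keeps vector orientation,
    scales vector length into [0, 1)."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    scale = norm_sq / (1.0 + norm_sq)
    return scale * s / torch.sqrt(norm_sq + eps)

class PrimaryCapsules(nn.Module):
    def __init__(self, in_ch=256, caps_dim=8, n_maps=32):
        super().__init__()
        self.caps_dim = caps_dim
        self.conv = nn.Conv2d(in_ch, n_maps * caps_dim, kernel_size=9, stride=2)

    def forward(self, x):
        u = self.conv(x)                          # (B, n_maps*caps_dim, H, W)
        u = u.view(x.size(0), -1, self.caps_dim)  # (B, num_capsules, caps_dim)
        return squash(u)

class TrafficSignCapsNet(nn.Module):
    def __init__(self, n_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 256, kernel_size=9), nn.ReLU(inplace=True))
        self.primary = PrimaryCapsules()
        # For brevity the routing-by-agreement step is replaced here by a
        # learned linear map over each capsule vector.
        self.classifier = nn.Linear(8, n_classes)

    def forward(self, x):
        caps = self.primary(self.features(x))     # (B, N, 8)
        return self.classifier(caps).mean(dim=1)  # (B, n_classes)

model = TrafficSignCapsNet()
logits = model(torch.randn(2, 3, 32, 32))          # -> (2, 43)
```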

2020 ◽  
Author(s):  
Sriram Srinivasan ◽  
Shashank A ◽  
vinayakumar R ◽  
Soman KP

In the present era, cyberspace is growing tremendously, and the intrusion detection system (IDS) plays a key role in ensuring information security. The IDS, which works at the network and host levels, should be capable of identifying various malicious attacks. The job of a network-based IDS is to differentiate between normal and malicious traffic and raise an alert in case of an attack. Apart from the traditional signature- and anomaly-based approaches, many researchers have employed various deep learning (DL) techniques for intrusion detection, as DL models are capable of extracting salient features automatically from the input data. The deep convolutional neural network (DCNN), which is used quite often for solving research problems in image processing and vision, has not been explored much for IDS. In this paper, a DCNN architecture for IDS trained on the KDDCUP 99 data set is proposed. This work also shows that the DCNN-IDS model performs better than other existing works.
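A minimal sketch of the kind of DCNN-based IDS described above, assuming the 41 numeric/encoded KDDCUP 99 features are treated as a one-dimensional signal and the task is framed as binary normal-vs-attack classification; the layer sizes are illustrative assumptions, not taken from the paper.

```python
# Illustrative 1D DCNN for intrusion detection on KDDCUP 99-style features
# (assumed architecture, not the authors' exact model).
import torch
import torch.nn as nn

class DCNNIDS(nn.Module):
    def __init__(self, n_features=41, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool1d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (n_features // 4), 128), nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):              # x: (batch, 41) numeric/encoded features
        x = x.unsqueeze(1)             # -> (batch, 1, 41) for 1D convolution
        return self.fc(self.conv(x))

model = DCNNIDS()
logits = model(torch.randn(8, 41))     # -> (8, 2) normal-vs-attack scores
```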


2020 ◽  
Vol 12 (6) ◽  
pp. 1015 ◽  
Author(s):  
Kan Zeng ◽  
Yixiao Wang

Classification algorithms for automatically detecting sea surface oil spills from spaceborne Synthetic Aperture Radars (SARs) can usually be regarded as part of a three-step processing framework, which briefly includes image segmentation, feature extraction, and target classification. A Deep Convolutional Neural Network (DCNN), named the Oil Spill Convolutional Network (OSCNet), is proposed in this paper for SAR oil spill detection; it performs the latter two steps of the three-step framework. Based on VGG-16, the OSCNet is obtained by designing the architecture and adjusting hyperparameters with a data set of SAR dark patches. With the help of this large data set containing more than 20,000 SAR dark patches and data augmentation, the OSCNet can have as many as 12 weight layers, making it a relatively deep Deep Learning (DL) network for SAR oil spill detection. Experiments on the same data set show that the classification performance of OSCNet is significantly improved compared with that of traditional machine learning (ML): accuracy, recall, and precision are improved from 92.50%, 81.40%, and 80.95% to 94.01%, 83.51%, and 85.70%, respectively. An important reason for this improvement is that the distinguishability of the features OSCNet learns from the data set is significantly higher than that of the hand-crafted features needed by traditional ML algorithms. In addition, experiments show that data augmentation plays an important role in avoiding over-fitting and hence improves classification performance. OSCNet has also been compared with other DL classifiers for SAR oil spill detection; due to the large differences in the data sets, only their similarities and differences are discussed at the principle level.
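The sketch below shows a VGG-style classifier with 12 weight layers (10 convolutional plus 2 fully connected) and typical augmentation transforms, assuming 64x64 single-channel SAR dark patches and a two-class (oil spill vs look-alike) output. The exact OSCNet layout, patch size, and hyperparameters are not reproduced here; everything below is an illustrative assumption.

```python
# Hedged sketch of a VGG-style SAR dark-patch classifier with augmentation
# (assumed layout; not the published OSCNet).
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentation of the kind credited with reducing over-fitting.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ToTensor(),
])

def vgg_block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class OSCNetSketch(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Four VGG-style stages (2+2+3+3 = 10 conv layers) plus 2 FC layers,
        # i.e. 12 weight layers in total.
        self.features = nn.Sequential(
            vgg_block(1, 64, 2), vgg_block(64, 128, 2),
            vgg_block(128, 256, 3), vgg_block(256, 512, 3))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(512 * 4 * 4, 256), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(256, n_classes))

    def forward(self, x):               # x: (batch, 1, 64, 64) SAR patches
        return self.classifier(self.features(x))

model = OSCNetSketch()
out = model(torch.randn(4, 1, 64, 64))   # -> (4, 2)
```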


2016 ◽  
Vol 214 ◽  
pp. 758-766 ◽  
Author(s):  
Yingying Zhu ◽  
Chengquan Zhang ◽  
Duoyou Zhou ◽  
Xinggang Wang ◽  
Xiang Bai ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xingyu Xie ◽  
Bin Lv

Convolutional Neural Network- (CNN-) based GAN models mainly suffer from problems such as data set limitations and rendering efficiency in the segmentation and rendering of painting art. To solve these problems, this paper uses an improved cycle generative adversarial network (CycleGAN) to render the image style. The method replaces the deep residual network (ResNet) in the original network's generator with a densely connected convolutional network (DenseNet) and uses a perceptual loss function for adversarial training. The painting art style rendering system built in this paper is based on a perceptual adversarial network (PAN) for the improved CycleGAN, which removes the network model's dependence on paired samples. The proposed method also improves the quality of the images generated in the painting's artistic style, further improves stability, and speeds up network convergence. Experiments were conducted on the painting art style rendering system based on the proposed model. The results show that the image style rendering method based on the perceptual adversarial error, used to improve the CycleGAN + PAN model, achieves better results: the PSNR of the generated images increases by 6.27% on average, and the SSIM values all increase by about 10%. Therefore, the improved CycleGAN + PAN image painting art style rendering method produces better painting art style images and has strong application value.
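The sketch below illustrates, under stated assumptions, the two ingredients the abstract emphasizes: a densely connected generator block (DenseNet-style feature reuse) and a VGG-based perceptual loss. The growth rate, layer count, and choice of VGG-16 feature layer are illustrative assumptions rather than the paper's settings.

```python
# Illustrative DenseNet-style generator block and VGG perceptual loss
# (assumed hyperparameters; not the paper's exact configuration).
import torch
import torch.nn as nn
from torchvision import models

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps."""
    def __init__(self, in_ch, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.InstanceNorm2d(growth), nn.ReLU(inplace=True)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class PerceptualLoss(nn.Module):
    """L1 distance between VGG-16 feature maps of generated and target images."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg.eval()

    def forward(self, generated, target):
        return nn.functional.l1_loss(self.vgg(generated), self.vgg(target))

block = DenseBlock(in_ch=64)
y = block(torch.randn(1, 64, 64, 64))   # -> (1, 64 + 4*32, 64, 64)
loss = PerceptualLoss()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
```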


Author(s):  
Pranav Kale ◽  
Mayuresh Panchpor ◽  
Saloni Dingore ◽  
Saloni Gaikwad ◽  
Prof. Dr. Laxmi Bewoor

In today's world, the field of deep learning is advancing at an increasing speed, and many innovations and different algorithms are being developed. In computer vision for the autonomous driving sector, traffic signs play an important role in providing real-time data about the environment. Different algorithms have been developed to classify these signs, but performance still needs to improve for real-time environments, and the computational power required to train such models is high. In this paper, a Convolutional Neural Network model is used to classify traffic signs. The experiments are conducted on a real-world data set with images and videos captured from ordinary car driving, as well as on the GTSRB dataset [15] available on Kaggle. The proposed model outperforms previous models, achieving an accuracy of 99.6% on the validation set. This idea has been granted an Innovation Patent by Australian IP to the authors of this research paper. [24]
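A minimal sketch of a compact CNN classifier for the 43-class GTSRB benchmark, assuming inputs are resized to 32x32 RGB; this is an assumed architecture for illustration, not the authors' published model.

```python
# Compact CNN for 43-class traffic-sign classification (assumed architecture).
import torch
import torch.nn as nn

class TrafficSignCNN(nn.Module):
    def __init__(self, n_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                          # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                          # 16x16 -> 8x8
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2))                          # 8x8 -> 4x4
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(inplace=True),
            nn.Linear(256, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

model = TrafficSignCNN()
logits = model(torch.randn(1, 3, 32, 32))             # -> (1, 43)
```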

