Attention-mechanism-based tracking method for intelligent Internet of vehicles

2018 ◽  
Vol 14 (10) ◽  
pp. 155014771880594 ◽  
Author(s):  
Xu Kang ◽  
Bin Song ◽  
Jie Guo ◽  
Xiaojiang Du ◽  
Mohsen Guizani

The vehicle tracking task plays an important role in the Internet of Vehicles and intelligent transportation systems. Beyond the traditional Global Positioning System sensor, image sensors can capture different kinds of vehicles, analyze their driving situation, and interact with them. To address the problem that traditional convolutional neural networks are vulnerable to background interference, this article proposes a vehicle tracking method based on a human attention mechanism for self-selection of deep features with an inter-channel fully connected layer. It mainly includes the following: (1) a fully convolutional neural network fused with an attention mechanism for selecting the deep features used in convolution; (2) a separation method for the template and semantic background region that adaptively separates target vehicles from the background in the initial frame; (3) a two-stage model training method using our traffic dataset. The experimental results show that the proposed method improves tracking accuracy without increasing tracking time, and strengthens the robustness of the algorithm under complex background conditions. On the overall traffic datasets, the success rate of the proposed method is about 10% higher than that of the Siamese network, and the overall precision is 8% higher.
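The abstract's "inter-channel fully connected layer" for self-selection of deep features resembles squeeze-and-excitation-style channel attention. The paper does not give its exact layer structure, so the following is a minimal numpy sketch of that general idea, with hypothetical weight shapes and random values standing in for learned parameters:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Reweight feature channels, squeeze-and-excitation style (illustrative).

    features: (C, H, W) convolutional feature map
    w1: (C//r, C) and w2: (C, C//r) weights of the two
    inter-channel fully connected layers (r = reduction ratio).
    """
    # Squeeze: global average pooling over the spatial dimensions
    squeezed = features.mean(axis=(1, 2))            # (C,)
    # Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel weights
    hidden = np.maximum(0.0, w1 @ squeezed)          # (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # (C,), each in (0, 1)
    # Scale each channel by its learned importance
    return features * weights[:, None, None]

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8, 8))
w1 = rng.standard_normal((4, 16)) * 0.1
w2 = rng.standard_normal((16, 4)) * 0.1
out = channel_attention(feats, w1, w2)
print(out.shape)  # (16, 8, 8)
```

Channels the gate scores near zero are suppressed, which is one way such a layer can damp background-dominated feature channels.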

2020 ◽  
Vol 2 (95) ◽  
pp. 32-42
Author(s):  
V.V. Bilotserkovskyy ◽  
S.G. Udovenko ◽  
L.E. Chala

Methods for generating images falsified with Deepfake technologies, and methods for detecting them, are considered. A method for detecting falsified images is proposed, based on the joint use of an ensemble of convolutional neural models, an attention mechanism, and a Siamese network training strategy. The ensembles were formed in different ways (using two, three, or more component models). The result was calculated as the average of the AUC and LogLoss metrics across all models included in the ensemble. This approach improves the accuracy of convolutional neural network classifiers in detecting static and dynamic images created with Deepfake technologies.
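The ensemble result is described as an average over member models. A minimal sketch of that fusion step, assuming each member outputs a per-image "fake" probability (the numbers and model count here are purely illustrative):

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average per-image 'fake' probabilities across ensemble members.

    member_probs: (n_models, n_images) array of sigmoid outputs.
    """
    return member_probs.mean(axis=0)

def log_loss(y_true, p, eps=1e-7):
    """Binary cross-entropy, one of the metrics used to score ensembles."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

# Three hypothetical detector outputs on four images (label 1 = Deepfake)
probs = np.array([[0.9, 0.2, 0.8, 0.4],
                  [0.7, 0.1, 0.9, 0.6],
                  [0.8, 0.3, 0.7, 0.5]])
labels = np.array([1, 0, 1, 0])
fused = ensemble_predict(probs)
print(fused)  # [0.8 0.2 0.8 0.5]
print(round(log_loss(labels, fused), 3))  # 0.341
```

Averaging probabilities tends to cancel the uncorrelated errors of individual detectors, which is the usual motivation for forming such ensembles.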


2021 ◽  
Vol 11 (11) ◽  
pp. 5235
Author(s):  
Nikita Andriyanov

The article is devoted to the study of convolutional neural network inference in image processing tasks under visual attacks. Attacks of four different types were considered: simple attacks, the addition of white Gaussian noise, impulse action on one pixel of an image, and attacks that change brightness values within a rectangular area. The MNIST and Kaggle dogs vs. cats datasets were chosen. Recognition accuracy was measured depending on the number of attacked images and on the types of attacks used during training. The study was based on well-known convolutional neural network architectures used in pattern recognition tasks, such as VGG-16 and Inception_v3. The dependencies of recognition accuracy on the parameters of the visual attacks were obtained. Original methods were proposed to counter visual attacks, based on detecting classes that are “incomprehensible” to the recognizer and then correcting the results using neural network inference on reduced image sizes. Applying these methods yielded a 1.3-fold gain in the accuracy metric after an iteration that discards incomprehensible images, and a 4–5% reduction in uncertainty after an iteration that integrates the results of image analyses at reduced dimensions.
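Three of the attack types (Gaussian noise, single-pixel impulse, rectangular brightness change) and the reduced-size re-inference step can be sketched directly on pixel arrays. The noise level, pixel coordinates, patch bounds, and block-average downscaling below are all illustrative assumptions, not the article's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.uniform(0, 1, size=(28, 28))  # e.g. an MNIST-sized grayscale image

# Attack 1: additive white Gaussian noise
noisy = np.clip(image + rng.normal(0, 0.1, image.shape), 0, 1)

# Attack 2: impulse action on a single pixel (push it to an extreme value)
impulse = image.copy()
impulse[13, 13] = 1.0

# Attack 3: change brightness values within a rectangular area
patched = image.copy()
patched[5:15, 5:15] = np.clip(patched[5:15, 5:15] + 0.5, 0, 1)

# Defense sketch: re-run inference on a reduced-size copy
# (here a simple 2x2 block average halves each dimension)
reduced = image.reshape(14, 2, 14, 2).mean(axis=(1, 3))
print(noisy.shape, reduced.shape)  # (28, 28) (14, 14)
```

Downscaling averages out localized perturbations such as the single-pixel impulse, which is one plausible reason inference at reduced dimensions helps correct "incomprehensible" classifications.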


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Cheng-Jian Lin ◽  
Chun-Hui Lin ◽  
Shyh-Hau Wang

Deep learning has achieved huge success in computer vision applications such as self-driving vehicles, facial recognition, and robot control. A growing need to deploy systems in resource-constrained environments such as smart cameras, autonomous vehicles, robots, smartphones, and smart wearable devices drives one of the current mainstream directions in convolutional neural networks: reducing model complexity while maintaining good accuracy. In this study, the proposed efficient light convolutional neural network (ELNet) comprises three convolutional modules that require fewer computations, allowing the network to be implemented on resource-constrained hardware. Classification on the CIFAR-10 and CIFAR-100 datasets was used to verify model performance. According to the experimental results, ELNet reached accuracies of 92.3% and 69% on CIFAR-10 and CIFAR-100, respectively; moreover, ELNet effectively lowered the computational complexity and number of parameters required in comparison with other CNN architectures.
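The abstract does not specify how ELNet's modules reduce computation, but a common technique behind such "light" designs is replacing standard convolutions with depthwise-separable ones. The arithmetic below illustrates that general parameter saving under assumed layer sizes; it is not ELNet's actual configuration:

```python
def standard_conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 64 input channels, 128 output channels, 3x3 kernels
c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)        # 73728
sep = depthwise_separable_params(c_in, c_out, k)  # 8768
print(std, sep, round(std / sep, 1))  # 73728 8768 8.4
```

For 3x3 kernels the saving approaches a factor of k*k = 9 as the channel counts grow, which is why such factorizations dominate efficient CNN design.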


2019 ◽  
Author(s):  
Nguyen P. Nguyen ◽  
Jacob Gotberg ◽  
Ilker Ersoy ◽  
Filiz Bunyak ◽  
Tommi White

Selection of individual protein particles in cryo-electron micrographs is an important step in single particle analysis. In this study, we developed a deep learning-based method to automatically detect particle centers in cryoEM micrographs. This is a challenging task because of the low signal-to-noise ratio of cryoEM micrographs and the variations in particle size, shape, and grayscale level. We propose a double convolutional neural network (CNN) cascade for automated detection of particles in cryo-electron micrographs. Particles are detected by the first network, a fully convolutional regression network (FCRN), which maps the particle image to a continuous distance map that acts like a probability density function of particle centers. Detections from the FCRN are then refined by the second CNN, a classification network, to reduce false particle detections. This approach, entitled Deep Regression Picker Network or “DRPnet”, is simple but very effective in recognizing the different grayscale patterns corresponding to 2D views of 3D particles. Our experiments showed that DRPnet’s first CNN, pretrained with one dataset, can be used to detect particles in different datasets without retraining; its performance can be further improved by retraining the network on specific particle datasets. The fully automated “deep regression” system, DRPnet, pretrained with TRPV1 (EMPIAR-10005) [1] and tested on β-galactosidase (EMPIAR-10017) [2] and β-galactosidase (EMPIAR-10061) [3], was then compared to RELION’s interactive particle picking. Preliminary experiments showed comparable or better particle picking performance with drastically reduced user interaction and improved processing time.
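Once the FCRN has produced a continuous map peaking at particle centers, picking reduces to finding local maxima above a threshold. The paper does not spell out its exact post-processing, so this is a minimal sketch of that peak-picking step on a toy map, with an assumed 3x3 neighborhood and threshold:

```python
import numpy as np

def pick_particle_centers(density, threshold=0.5):
    """Return (row, col) peaks of a predicted center-density map.

    density: (H, W) map such as an FCRN output; a pixel is kept if it
    exceeds `threshold` and is the maximum of its 3x3 neighborhood.
    """
    h, w = density.shape
    centers = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = density[r - 1:r + 2, c - 1:c + 2]
            if density[r, c] >= threshold and density[r, c] == patch.max():
                centers.append((r, c))
    return centers

# Toy map: two Gaussian blobs standing in for blurred particle centers
density = np.zeros((12, 12))
rr, cc = np.mgrid[0:12, 0:12]
for (pr, pc) in [(3, 3), (8, 9)]:
    density += np.exp(-((rr - pr) ** 2 + (cc - pc) ** 2) / 2.0)

print(pick_particle_centers(density))  # [(3, 3), (8, 9)]
```

In practice such peak picking would also enforce a minimum inter-particle distance to avoid duplicate detections of the same particle, before handing candidates to the second (classification) CNN.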

