Convolutional neural networks based image resampling with noisy training set

Author(s):  
Andrey Nasonov ◽  
Konstantin Chesnakov ◽  
Andrey Krylov


Author(s):  
Mateus Eloi da Silva Bastos ◽  
Vitor Yeso Fidelis Freitas ◽  
Richardson Santiago Teles De Menezes ◽  
Helton Maia

In this study, a computational tool was developed based on Convolutional Neural Networks (CNNs) and the You Only Look Once (YOLO) algorithm to detect vehicles in aerial images and calculate the safe distance between them. We analyzed a dataset of 896 images extracted from videos recorded by a DJI Spark drone. The training set used 60% of the images, with 20% for validation and 20% for testing. Tests were performed to detect vehicles in different configurations, and the best result was achieved with YOLO Full-608, reaching a mean Average Precision (mAP) of 95.6%. The accuracy of these results encourages the development of systems capable of estimating the safe distance between vehicles in motion, primarily to minimize the risk of accidents.
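
As an illustration of the data preparation described above, the following is a minimal Python sketch of a 60/20/20 train/validation/test split. The directory name, file pattern, and list-file layout are assumptions for illustration, not details taken from the paper.

    # Hypothetical 60/20/20 split of the 896 drone images; paths are placeholders.
    import random
    from pathlib import Path

    random.seed(42)
    images = sorted(Path("aerial_frames").glob("*.jpg"))
    random.shuffle(images)

    n = len(images)
    train = images[: int(0.6 * n)]
    val = images[int(0.6 * n): int(0.8 * n)]
    test = images[int(0.8 * n):]

    for name, subset in [("train", train), ("valid", val), ("test", test)]:
        # Darknet-style YOLO training expects plain-text lists of image paths.
        Path(f"{name}.txt").write_text("\n".join(str(p) for p in subset))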


2020 ◽  
Author(s):  
Richardson Santiago Teles De Menezes ◽  
John Victor Alves Luiz ◽  
Aron Miranda Henrique-Alves ◽  
Rossana Moreno Santa Cruz ◽  
Helton Maia

The computational tool developed in this study is based on convolutional neural networks and the You Only Look Once (YOLO) algorithm for detecting and tracking mice in videos recorded during behavioral neuroscience experiments. We analyzed a dataset of 13,622 images drawn from behavioral videos of three important studies in this area. The training set used 50% of the images, with 25% for validation and 25% for testing. The results show that the mean Average Precision (mAP) reached by the developed system was 90.79% for the Full version of YOLO and 90.75% for the Tiny version. Given the high accuracy of these results, the developed tool allows experimentalists to track mice in a reliable and non-invasive way.
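
To make the detection-and-tracking pipeline concrete, here is a hedged per-frame sketch using OpenCV's DNN module with a Darknet-format YOLO model. The configuration, weight, and video file names are placeholders, and the single-class mouse model is an assumption.

    # Per-frame mouse detection with a trained YOLO model (names are placeholders).
    import cv2

    net = cv2.dnn.readNetFromDarknet("yolo-mouse.cfg", "yolo-mouse.weights")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    cap = cv2.VideoCapture("behavior_session.mp4")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        class_ids, confidences, boxes = model.detect(
            frame, confThreshold=0.5, nmsThreshold=0.4)
        for (x, y, w, h), conf in zip(boxes, confidences):
            # Track the animal by its bounding-box centroid.
            cx, cy = x + w // 2, y + h // 2
            print(f"mouse at ({cx}, {cy}) with confidence {conf:.2f}")
    cap.release()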


Healthcare ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 1050
Author(s):  
Jesús Tomás ◽  
Albert Rego ◽  
Sandra Viciano-Tudela ◽  
Jaime Lloret

The COVID-19 pandemic has been a worldwide catastrophe. Its impact, not only economic but also social and in terms of human lives, was unexpected. Each of the many mechanisms to fight the contagiousness of the illness has proven to be extremely important. One of the most important is the use of facemasks. However, wearing a facemask incorrectly renders this prevention method useless. Artificial Intelligence (AI), and especially facial recognition techniques, can be used to detect misuse and reduce virus transmission, especially indoors. In this paper, we present an intelligent method to automatically detect, in real-time scenarios, when facemasks are being worn incorrectly. Our proposal uses Convolutional Neural Networks (CNNs) with transfer learning to detect not only whether a mask is used, but also other errors that are usually not taken into account yet may contribute to virus spread. The main problem we identified is that no training set currently exists for this task. For this reason, we asked citizens to participate by taking selfies through an app with the mask placed in different positions, which allowed us to solve this problem. The results show that the accuracy achieved with transfer learning slightly improves on that achieved with convolutional neural networks alone. Finally, we have also developed an Android app demo that validates the proposal in real scenarios.
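
A minimal sketch of the kind of transfer-learning setup the paper describes is shown below, assuming a MobileNetV2 backbone and four example wearing-error classes; the paper's actual base model, class labels, and hyperparameters are not specified here.

    # Transfer learning for facemask-wearing classification (illustrative only).
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pretrained feature extractor

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        # Assumed classes: correct, no mask, nose exposed, chin only.
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)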


Author(s):  
Liang Yang ◽  
Zhiyang Chen ◽  
Junhua Gu ◽  
Yuanfang Guo

The success of semi-supervised node classification based on graph convolutional neural networks (GCNNs) is credited to attribute smoothing (propagation) over the topology. However, the attributes may be interfered with by the use of topology information. This distortion induces a certain number of misclassifications of nodes that could have been correctly predicted from the attributes alone. By analyzing the impact of edges on attribute propagation, and following curriculum learning, simple edges, which connect two nodes with similar attributes, should be given priority during training over complex ones. To reduce the distortions induced by the topology while exploiting more of the potential of the attribute information, a Dual Self-Paced Graph Convolutional Network (DSP-GCN) is proposed in this paper. Specifically, in node-level self-paced learning, unlabelled nodes with confidently predicted labels are gradually added to the training set, while in edge-level self-paced learning, edges are gradually added to the graph during training, from the simple edges to the complex ones. These two strategies are designed to mutually reinforce each other by coupling the selection of edges with the selection of unlabelled nodes. Experimental results for transductive semi-supervised node classification on many real networks indicate that the proposed DSP-GCN successfully reduces the attribute distortions induced by the topology while giving superior performance with only one graph convolutional layer.
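
The following NumPy sketch illustrates one plausible reading of the two self-paced schedules (it is not the authors' implementation): edges are ranked as "simple" by attribute cosine similarity, and unlabelled nodes are admitted once their predicted label is sufficiently confident.

    # Illustrative dual self-paced selection step; thresholds are assumptions.
    import numpy as np

    def edge_simplicity(X, edges):
        # "Simple" edges connect nodes with similar attributes.
        Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
        return np.array([Xn[i] @ Xn[j] for i, j in edges])

    def self_paced_step(X, edges, probs, labelled, edge_frac, conf_thresh):
        # Edge level: keep only the simplest fraction of edges this round.
        sim = edge_simplicity(X, edges)
        keep = np.argsort(-sim)[: int(edge_frac * len(edges))]
        active_edges = [edges[k] for k in keep]
        # Node level: pseudo-label unlabelled nodes predicted with high confidence.
        conf = probs.max(axis=1)
        new_nodes = [i for i in range(len(X))
                     if i not in labelled and conf[i] >= conf_thresh]
        return active_edges, new_nodes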


2021 ◽  
Author(s):  
Gregory Rutkowski ◽  
Ilgar Azizov ◽  
Evan Unmann ◽  
Marcin Dudek ◽  
Brian Arthur Grimes

As the complexity of microfluidic experiments and the associated image data volumes scale, traditional feature extraction approaches begin to struggle with both detection and analysis pipeline throughput. Deep neural networks trained to detect certain objects are rapidly emerging as data-gathering tools that can match or outperform the analysis capabilities of the conventional methods used in microfluidic emulsion science. We demonstrate that various convolutional neural networks can be trained and used as droplet detectors in a wide variety of microfluidic systems. A generalized microfluidic droplet training and validation dataset was developed and used to tune two versions of the You Only Look Once model (YOLOv3 and YOLOv5) as well as Faster R-CNN. Each model was used to detect droplets in mono- and polydisperse flow cell systems. The detection accuracy of each model shows excellent statistical agreement with an implementation of the Hough transform as well as relevant ImageJ plugins. The models were also successfully used as droplet detectors on non-microfluidic micrographs that were not included in the training set. The models outperformed the traditional methods in more complex, porous-media-simulating chip architectures, with a significant speedup in per-frame analysis times. Implementing these neural networks as the primary detectors in these microfluidic systems not only makes the data pipelining more efficient, but also opens the door to live detection and the development of autonomous microfluidic experimental platforms.
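
For context, a Hough-transform droplet detector of the kind the neural models are benchmarked against can be sketched in a few lines of OpenCV; the parameter values and file name below are illustrative, not taken from the study.

    # Classical Hough-circle baseline for droplet detection (illustrative).
    import cv2
    import numpy as np

    frame = cv2.imread("flow_cell_frame.png", cv2.IMREAD_GRAYSCALE)
    blurred = cv2.medianBlur(frame, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
        param1=100, param2=30, minRadius=5, maxRadius=60)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            print(f"droplet at ({x}, {y}) with radius {r}px")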


2020 ◽  
Vol 497 (1) ◽  
pp. 556-571
Author(s):  
Zizhao He ◽  
Xinzhong Er ◽  
Qian Long ◽  
Dezi Liu ◽  
Xiangkun Liu ◽  
...  

Convolutional neural networks have been successfully applied in searching for strong lensing systems, leading to discoveries of new candidates from large surveys. On the other hand, systematic investigations of their robustness are still lacking. In this paper, we first construct a neural network and apply it to r-band images of luminous red galaxies (LRGs) from the Kilo Degree Survey (KiDS) Data Release 3 to search for strong lensing systems. We build two sets of training samples, one fully from simulations, and the other using LRG stamps from KiDS observations as the foreground lens images. With the former training sample, we find 48 high-probability candidates after human inspection, of which 27 are newly identified. Using the latter training set, about 67 per cent of the aforementioned 48 candidates are also found, and 11 more new strong lensing candidates are identified. We then test the robustness of the network performance with respect to variation of the PSF. With testing samples constructed using PSFs in the range of 0.4–2 times the median PSF of the training sample, we find that our network performs rather stably, and the degradation is small. We also investigate how the size of the training set affects our network performance by varying it from 0.1 to 0.8 million samples. The results are rather stable, showing that within the considered range, our network performance is not very sensitive to the training set size.
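
The PSF robustness test can be pictured with a short sketch: test stamps are convolved with Gaussian PSFs scaled from 0.4x to 2x a reference width, and the trained classifier is scored on each set. The function names and the Gaussian-PSF assumption below are illustrative, not the paper's exact procedure.

    # Hedged sketch of a PSF-robustness sweep; `classify` is a trained network.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def psf_sweep(stamps, classify, median_sigma):
        # stamps: (N, H, W) array of r-band cutouts.
        for scale in np.linspace(0.4, 2.0, 9):
            blurred = np.stack(
                [gaussian_filter(s, sigma=scale * median_sigma) for s in stamps])
            scores = classify(blurred)
            print(f"PSF scale {scale:.1f}x -> mean lens score {scores.mean():.3f}")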


2021 ◽  
Author(s):  
Gentian Gashi

Handwriting recognition is the process of automatically converting handwritten text into electronic text (letter codes) usable by a computer. The increased reliance on technology during the COVID-19 pandemic has showcased the importance of ensuring that stored and digitised information is handled accurately and efficiently. Interpreting handwriting remains complex for both humans and computers due to varied styles and skewed characters. In this study, we conducted a correlational analysis of the association between filter size and the classification accuracy of convolutional neural networks (CNNs). Testing was conducted on the publicly available MNIST database of handwritten digits (LeCun and Cortes, 2010), which consists of a training set (N = 60,000) and a test set (N = 10,000). Using ANOVA, our results indicate a significant association (p < .001 at the 0.05 level) between filter size and classification accuracy. However, this significance is only present when increasing the filter size from 1x1 to 2x2; larger filter sizes yielded no significant improvement. Therefore, a filter size above 2x2 cannot be recommended.
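
A compact version of such a filter-size experiment can be sketched in Keras; the single-convolution architecture below is an assumption for illustration, not necessarily the network used in the study.

    # Varying the convolutional filter size on MNIST (illustrative architecture).
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0
    x_test = x_test[..., None] / 255.0

    def build(filter_size):
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, filter_size, activation="relu",
                                   input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])

    for k in [(1, 1), (2, 2), (3, 3), (5, 5)]:
        model = build(k)
        model.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
        model.fit(x_train, y_train, epochs=1, verbose=0)
        _, acc = model.evaluate(x_test, y_test, verbose=0)
        print(f"filter {k}: test accuracy {acc:.4f}")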

