Impact of Training Set Batch Size on the Performance of Convolutional Neural Networks for Diverse Datasets

Author(s):  
Pavlo M. Radiuk


Author(s):  
Mateus Eloi da Silva Bastos ◽  
Vitor Yeso Fidelis Freitas ◽  
Richardson Santiago Teles De Menezes ◽  
Helton Maia

In this study, the computational tool developed was based on Convolutional Neural Networks (CNNs) and the You Only Look Once (YOLO) algorithm to detect vehicles in aerial images and calculate the safe distance between them. We analyzed a dataset composed of 896 images, recorded in videos by a DJI Spark drone. The training set used 60% of the images, with 20% for validation and 20% for testing. Tests were performed to detect vehicles in different configurations, and the best result was achieved using YOLO Full-608, with a mean Average Precision (mAP) of 95.6%. The accuracy of the results encourages the development of systems capable of estimating the safe distance between vehicles in motion, mainly helping to minimize the risk of accidents.
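The 60/20/20 split described above can be sketched in plain Python; the file names and the fixed seed below are assumptions for illustration, not details from the paper:

```python
import random

def split_dataset(items, train=0.6, val=0.2, seed=42):
    """Shuffle and split a list of items into train/validation/test subsets."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# 896 aerial images, as in the study (hypothetical file names)
images = [f"frame_{i:04d}.jpg" for i in range(896)]
train_set, val_set, test_set = split_dataset(images)
```

With 896 images this yields 537 training, 179 validation, and 180 test images; the remainder after truncation is assigned to the test split.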


2020 ◽  
Author(s):  
Richardson Santiago Teles De Menezes ◽  
John Victor Alves Luiz ◽  
Aron Miranda Henrique-Alves ◽  
Rossana Moreno Santa Cruz ◽  
Helton Maia

The computational tool developed in this study is based on convolutional neural networks and the You Only Look Once (YOLO) algorithm for detecting and tracking mice in videos recorded during behavioral neuroscience experiments. We analyzed a dataset composed of 13,622 images drawn from behavioral videos of three important research studies in this area. The training set used 50% of the images, with 25% for validation and 25% for testing. The results show that the mean Average Precision (mAP) reached by the developed system was 90.79% and 90.75% for the Full and Tiny versions of YOLO, respectively. Considering the high accuracy of the results, the developed work allows experimentalists to perform mice tracking in a reliable and non-invasive way.
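Mean Average Precision, the metric reported above, averages per-class Average Precision over the classes; a minimal sketch of AP as the area under the precision–recall curve follows (the toy detection scores and ground-truth count are assumptions for illustration):

```python
def average_precision(scored_detections, num_gt):
    """AP as the area under the precision-recall curve.

    scored_detections: list of (confidence, is_true_positive) pairs.
    num_gt: total number of ground-truth objects.
    """
    ranked = sorted(scored_detections, key=lambda d: -d[0])  # highest confidence first
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_gt
        ap += precision * (recall - prev_recall)  # rectangle rule over the PR curve
        prev_recall = recall
    return ap

# toy example: 4 ranked detections against 3 ground-truth mice
ap = average_precision([(0.9, True), (0.8, True), (0.7, False), (0.6, True)], num_gt=3)
```

Real evaluators differ in interpolation details (e.g., 11-point vs. all-point), but the ranking-and-integration idea is the same.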


Healthcare ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 1050
Author(s):  
Jesús Tomás ◽  
Albert Rego ◽  
Sandra Viciano-Tudela ◽  
Jaime Lloret

The COVID-19 pandemic has been a worldwide catastrophe. Its impact, not only economic but also social and in terms of human lives, was unexpected. Each of the many mechanisms to fight the contagiousness of the illness has proven to be extremely important. One of the most important mechanisms is the use of facemasks. However, wearing a facemask incorrectly renders this prevention method useless. Artificial Intelligence (AI), and especially facial recognition techniques, can be used to detect misuse and reduce virus transmission, especially indoors. In this paper, we present an intelligent method to automatically detect when facemasks are being worn incorrectly in real-time scenarios. Our proposal uses Convolutional Neural Networks (CNN) with transfer learning to detect not only whether a mask is worn, but also other errors that are usually not taken into account and that may contribute to the spread of the virus. The main problem we detected is that there is currently no training set for this task. For this reason, we requested the participation of citizens, who took selfies through an app with the mask placed in different positions, which allowed us to build such a training set. The results show that the accuracy achieved with transfer learning slightly improves on that achieved with convolutional neural networks alone. Finally, we have also developed an Android demo app that validates the proposal in real scenarios.
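The final step of such a multi-class detector, mapping raw network scores to one of several wearing states, can be illustrated with a plain softmax-then-argmax pass; the class names and logit values below are assumptions for illustration, not the paper's actual labels:

```python
import math

# hypothetical wearing-state labels, not the paper's actual class set
CLASSES = ["correct", "nose_uncovered", "chin_only", "no_mask"]

def softmax(logits):
    """Convert raw network scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numeric stability
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the most likely wearing state and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return CLASSES[best], probs[best]

label, confidence = classify([2.1, 0.3, -1.0, 0.4])
```

In a deployed app, a confidence threshold on the winning probability would decide whether to alert the user or ask for a clearer view of the face.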


Author(s):  
Liang Yang ◽  
Zhiyang Chen ◽  
Junhua Gu ◽  
Yuanfang Guo

The success of semi-supervised node classification based on graph convolutional neural networks (GCNNs) is credited to attribute smoothing (propagation) over the topology. However, the attributes may be distorted by the utilization of the topology information. This distortion induces a certain number of misclassifications of nodes that could be correctly predicted from the attributes alone. Analysis of the impact of edges on attribute propagation suggests that, following curriculum learning, simple edges, which connect two nodes with similar attributes, should be given priority during the training process over complex ones. To reduce the distortions induced by the topology while exploiting more of the potential of the attribute information, Dual Self-Paced Graph Convolutional Network (DSP-GCN) is proposed in this paper. Specifically, unlabelled nodes with confidently predicted labels are gradually added to the training set in node-level self-paced learning, while edges are gradually added to the graph, from the simple ones to the complex ones, during training in edge-level self-paced learning. These two learning strategies are designed to mutually reinforce each other by coupling the selections of edges and unlabelled nodes. Experimental results for transductive semi-supervised node classification on many real networks indicate that the proposed DSP-GCN successfully reduces the attribute distortions induced by the topology while delivering superior performance with only one graph convolutional layer.
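The edge-level schedule described above (simple edges first, ranked by endpoint attribute similarity) can be sketched as follows; the cosine-similarity criterion and the toy graph are assumptions for illustration, not the paper's exact pacing function:

```python
import math

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def self_paced_edge_schedule(edges, attrs, num_stages):
    """Order edges from 'simple' (similar endpoints) to 'complex',
    then release them to the training graph in num_stages batches."""
    ranked = sorted(edges, key=lambda e: -cosine(attrs[e[0]], attrs[e[1]]))
    stage_size = math.ceil(len(ranked) / num_stages)
    return [ranked[i:i + stage_size] for i in range(0, len(ranked), stage_size)]

# toy graph: nodes 0/1 and 2/3 have similar attributes; (0,2) and (1,3) cross groups
attrs = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.0, 1.0], 3: [0.1, 0.9]}
stages = self_paced_edge_schedule([(0, 1), (0, 2), (2, 3), (1, 3)], attrs, num_stages=2)
```

Early training stages then propagate attributes only across homophilous edges, deferring the potentially distorting cross-group edges to later stages.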


2021 ◽  
Author(s):  
Gregory Rutkowski ◽  
Ilgar Azizov ◽  
Evan Unmann ◽  
Marcin Dudek ◽  
Brian Arthur Grimes

As the complexity of microfluidic experiments and the associated image data volumes scale, traditional feature extraction approaches begin to struggle at both detection and analysis pipeline throughput. Deep neural networks trained to detect certain objects are rapidly emerging as data gathering tools that can match or outperform the analysis capabilities of the conventional methods used in microfluidic emulsion science. We demonstrate that various convolutional neural networks can be trained and used as droplet detectors in a wide variety of microfluidic systems. A generalized microfluidic droplet training and validation dataset was developed and used to tune two versions of the You Only Look Once model (YOLOv3/YOLOv5) as well as Faster R-CNN. Each model was used to detect droplets in mono- and polydisperse flow cell systems. The detection accuracy of each model shows excellent statistical agreement with an implementation of the Hough transform as well as relevant ImageJ plugins. The models were also successfully used as droplet detectors in non-microfluidic micrograph observations, even though such data were not included in the training set. The models outperformed the traditional methods in more complex, porous-media-simulating chip architectures, with a significant speedup in per-frame analysis times. Implementing these neural networks as the primary detectors in these microfluidic systems not only makes the data pipelining more efficient, but also opens the door to live detection and the development of autonomous microfluidic experimental platforms.
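Once a detector returns droplet bounding boxes, per-frame size statistics follow directly; a minimal sketch in plain Python (the box format, pixel scale, and toy detections are assumptions for illustration):

```python
import math

def droplet_diameters(boxes, um_per_px=1.0):
    """Estimate droplet diameters from (x, y, w, h) bounding boxes,
    averaging width and height to soften per-frame detection jitter."""
    return [0.5 * (w + h) * um_per_px for (_, _, w, h) in boxes]

def polydispersity(diameters):
    """Coefficient of variation of the diameters: near 0 for a
    monodisperse emulsion, larger for polydisperse systems."""
    mean = sum(diameters) / len(diameters)
    var = sum((d - mean) ** 2 for d in diameters) / len(diameters)
    return math.sqrt(var) / mean

# toy detections from one frame, with a hypothetical 2 um/pixel scale
boxes = [(10, 12, 20, 22), (40, 8, 19, 21), (75, 30, 21, 19)]
diams = droplet_diameters(boxes, um_per_px=2.0)
```

Running such a reduction per frame is what enables the live size monitoring mentioned above, since only a handful of numbers per frame need to be stored.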


2020 ◽  
Vol 7 (6) ◽  
pp. 1089
Author(s):  
Iwan Muhammad Erwin ◽  
Risnandar Risnandar ◽  
Esa Prakarsa ◽  
Bambang Sugiarto

<p class="Abstrak"><em>Wood identification is one of the requirements for supporting the government and the wood business community in a legal wood trading system. Special expertise and considerable time are needed to carry out wood identification in the laboratory. In previous research, the wood identification process combined a manual system based on wood DNA anatomy, while computer-based identification was obtained from microscopic and macroscopic cross-section images of the wood. Recently, computer vision and machine learning technologies have been developed to identify many kinds of objects, one of which is wood imagery. This research contributes to classifying several traded wood species using Deep Convolutional Neural Networks (DCNN). The novelty of this research lies in the DCNN architecture, named Kayu7Net. The proposed Kayu7Net architecture has three convolutional layers and was applied to a dataset of wood images from seven species. Testing was performed by resizing the input images to 600×600, 300×300, and 128×128 pixels, each repeated for 50 and 100 epochs. The proposed DCNN uses the ReLU activation function and a batch size of 32; ReLU converges more quickly during the iteration process, while the four Fully-Connected (FC) layers produce a more efficient training process. The experimental results show that the proposed Kayu7Net achieves an accuracy of 95.54%, a precision of 95.99%, a recall of 95.54%, a specificity of 99.26%, and an F-measure of 95.46%. These results indicate that Kayu7Net is superior by 1.49% in accuracy, 2.49% in precision, and 5.26% in specificity compared to the previous work.</em></p>
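The metrics reported above (accuracy, precision, recall, specificity, F-measure) all follow from a per-class confusion matrix; a minimal one-vs-rest sketch in plain Python, where the toy counts are assumptions for illustration rather than the paper's results:

```python
def one_vs_rest_metrics(tp, fp, fn, tn):
    """Standard classification metrics for one class against the rest."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_measure

# toy counts for one wood species against the other six
acc, prec, rec, spec, f1 = one_vs_rest_metrics(tp=90, fp=5, fn=10, tn=595)
```

For a seven-class problem these per-class values are typically macro-averaged to give the single numbers quoted in the abstract.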


2020 ◽  
Vol 497 (1) ◽  
pp. 556-571
Author(s):  
Zizhao He ◽  
Xinzhong Er ◽  
Qian Long ◽  
Dezi Liu ◽  
Xiangkun Liu ◽  
...  

ABSTRACT Convolutional neural networks have been successfully applied in searching for strong lensing systems, leading to discoveries of new candidates from large surveys. On the other hand, systematic investigations of their robustness are still lacking. In this paper, we first construct a neural network and apply it to r-band images of luminous red galaxies (LRGs) from the Kilo Degree Survey (KiDS) Data Release 3 to search for strong lensing systems. We build two sets of training samples, one fully from simulations, and the other using LRG stamps from KiDS observations as the foreground lens images. With the former training sample, we find 48 high-probability candidates after human inspection, and among them, 27 are newly identified. Using the latter training set, about 67 per cent of the aforementioned 48 candidates are also found, and 11 more new strong lensing candidates are identified. We then carry out tests of the robustness of the network performance with respect to variation of the PSF. With testing samples constructed using PSFs in the range of 0.4–2 times the median PSF of the training sample, we find that our network performs rather stably, and the degradation is small. We also investigate how the volume of the training set affects our network performance by varying it from 0.1 to 0.8 million. The output results are rather stable, showing that within the considered range, our network performance is not very sensitive to the training-set size.
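The PSF-robustness test above varies the test-image PSF between 0.4 and 2 times the training median; constructing a normalized Gaussian kernel per scale factor can be sketched as follows (pure-Python illustration; the kernel size, Gaussian form, and median FWHM value are assumptions, not the paper's actual PSF model):

```python
import math

def gaussian_psf(fwhm_px, size=15):
    """A normalized 2D Gaussian PSF kernel with the given FWHM in pixels."""
    sigma = fwhm_px / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    c = size // 2
    kernel = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
               for x in range(size)] for y in range(size)]
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]  # flux-preserving

median_fwhm = 3.0  # hypothetical median training-sample PSF FWHM, in pixels
test_psfs = {s: gaussian_psf(median_fwhm * s) for s in (0.4, 0.7, 1.0, 1.5, 2.0)}
```

Convolving simulated test stamps with each kernel, then measuring the drop in classifier accuracy per scale factor, reproduces the kind of degradation curve the authors report.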

