Plant Stress Detection Accuracy Using Deep Convolution Neural Networks

Author(s):  
Chege Kirongo ◽  
Kelvin Omieno ◽  
Makau Mutua ◽  
Vitalis Ogemah

Plant stress detection is a vital farming activity for enhancing crop productivity and food security. Convolutional Neural Networks (CNNs) model the complex relationships between the input and output layers of a neural network for prediction. This capability helps in detecting how crops respond to biotic and abiotic stressors, thereby reducing food losses. Accurate stress detection underpins the enhancement of crop productivity for food security. This paper proposes and investigates the application of a deep neural network to tomato pest and disease stress detection. Images captured over a period of six months are treated as a historical dataset used to train the network and detect plant stresses. The network is implemented on Google's TensorFlow machine learning platform. A number of activation functions were tested to achieve better accuracy, including the rectified linear unit (ReLU). Preliminary results show increased accuracy with ReLU over the other activation functions.
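As a rough illustration of the kind of network described in this abstract, the sketch below builds a small image classifier in TensorFlow/Keras with ReLU activations. The layer sizes, input resolution, and number of stress classes are assumptions for illustration; the paper does not state its exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal CNN sketch for plant-stress image classification
# (assumed 128x128 RGB inputs and 5 stress classes; the paper's
# actual configuration is not given).
def build_model(num_classes=5):
    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```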

Author(s):  
Lili Pei ◽  
Li Shi ◽  
Zhaoyun Sun ◽  
Wei Li ◽  
Yao Gao ◽  
...  

Pavement potholes are detected with low accuracy when only small samples are available. To address this issue, we propose a method for efficient and accurate pothole detection under small-sample conditions, based on an improved Faster R-CNN (Region-based Convolutional Neural Network). First, images of potholes of different shapes and sizes are acquired from different sources and then augmented and denoised to obtain the image set. Second, two representative object detection models, Faster R-CNN and YOLOv3, are tested; the results indicate that Faster R-CNN achieves better detection performance. Furthermore, to overcome missed detections and inaccurate position estimates, the feature extraction layers of the VGG16, ZFNet, and ResNet50 networks are combined with Faster R-CNN. The results show that the VGG16 + Faster R-CNN fusion model yields superior accuracy. Finally, the detection accuracy improved to 0.8997 after adjusting the size of the candidate boxes, which also enabled the successful detection of previously missed targets.
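The abstract does not state which framework was used; as one hedged sketch of the VGG16 + Faster R-CNN pairing it describes, the snippet below wires VGG16 convolutional features into torchvision's Faster R-CNN. The anchor sizes, number of classes, and image size are illustrative assumptions, not the paper's settings.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Use VGG16's convolutional layers as the Faster R-CNN backbone.
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1")
backbone = vgg.features
backbone.out_channels = 512  # VGG16's last conv block outputs 512 feature maps

# Smaller anchor sizes are an assumption aimed at small pothole regions.
anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)

model = FasterRCNN(
    backbone,
    num_classes=2,  # background + pothole
    rpn_anchor_generator=anchor_generator,
)

model.eval()
with torch.no_grad():
    # Dummy 3-channel image; real use would feed annotated pothole images.
    predictions = model([torch.rand(3, 600, 800)])
print(predictions[0].keys())  # boxes, labels, scores
```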


Metrologiya ◽  
2020 ◽  
pp. 15-37
Author(s):  
L. P. Bass ◽  
Yu. A. Plastinin ◽  
I. Yu. Skryabysheva

The use of technical (computer) vision systems for Earth remote sensing is considered. An overview of the software and hardware used in computer vision systems for processing satellite images is provided. Algorithmic methods for processing the data with a trained neural network are described, and examples of the algorithmic processing of satellite images by artificial convolutional neural networks are given. Ways of increasing the accuracy of satellite image recognition are identified. Practical applications of convolutional neural networks on board microsatellites for Earth remote sensing are presented.


2017 ◽  
Vol 6 (4) ◽  
pp. 15
Author(s):  
JANARDHAN CHIDADALA ◽  
RAMANAIAH K.V. ◽  
BABULU K ◽  
...  

2019 ◽  
Author(s):  
Rajashekar A ◽  
Shruti Hegdekar ◽  
Dikpal Shrestha ◽  
Prabin Nepal ◽  
Sujanb Neupane

2019 ◽  
Vol 12 (3) ◽  
pp. 156-161 ◽  
Author(s):  
Aman Dureja ◽  
Payal Pahwa

Background: Activation functions play an important role in building deep neural networks, and the choice of activation function affects both optimization and the quality of the results. Several activation functions have been introduced in machine learning for practical applications, but which one should be used at the hidden layers of a deep neural network has not been established. Objective: The primary objective of this analysis was to determine which activation function should be used at the hidden layers of a deep neural network to solve complex non-linear problems. Methods: The comparative model was configured on a two-class (Cat/Dog) dataset. The network used three convolutional layers, each followed by a pooling layer. The dataset was divided into two parts: the first 8000 images were used to train the network and the next 2000 images were used to test it. Results: The experimental comparison was carried out by analysing the network with different activation functions (ReLU, Tanh, SELU, PReLU, ELU) at the hidden layers, and the validation error and accuracy on the Cat/Dog dataset were recorded. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU at its hidden layers (three hidden layers here) gives the best results and improves overall performance in terms of both accuracy and speed. These advantages of ReLU in CNNs with several hidden layers support effective and fast retrieval of images from databases.
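A minimal sketch of the kind of comparison described above is given below, assuming a Keras setup with three convolution + pooling blocks whose hidden-layer activation is swapped between runs. The image size, optimizer, and dense-layer width are assumptions; PReLU is omitted because it is a parameterized layer rather than a string activation in Keras.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Three conv+pooling blocks; the hidden-layer activation is a parameter
# so the same architecture can be retrained with each candidate function.
def build_cat_dog_cnn(activation="relu"):
    model = models.Sequential([
        layers.Input(shape=(64, 64, 3)),
        layers.Conv2D(32, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation=activation),
        layers.Dense(1, activation="sigmoid"),  # binary Cat/Dog output
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Swap the hidden-layer activation to reproduce the comparison.
for act in ["relu", "tanh", "selu", "elu"]:
    model = build_cat_dog_cnn(act)
```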


Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

Abstract: This article introduces a method for realizing the Gaussian activation function of radial-basis-function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). Results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented by this algorithm is an RBF neural network with four neurons in the hidden layer and one output neuron with a sigmoid activation function, realized on an FPGA using 16-bit fixed-point numbers and occupying 1193 lookup tables (LUTs). Each hidden-layer neuron of the RBF network is designed on the FPGA as a separate computing unit. The speed, measured as the total delay of the combinational circuit of the RBF network block, was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer occupies 106 LUTs, and their delay is 29.33 ns. The absolute error is ±0.005. The Spartan-3 family of chips was used to obtain these results; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
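As a software-only illustration of the topology the abstract describes (four Gaussian hidden neurons feeding one sigmoid output, with values quantized to 16-bit fixed point), the sketch below models the forward pass in Python. The Q8.8 split, centers, widths, and weights are made-up assumptions and do not reproduce the article's FPGA design.

```python
import numpy as np

FRAC_BITS = 8  # assumed Q8.8 split of a 16-bit fixed-point format

def to_fixed(x):
    return np.round(np.asarray(x) * (1 << FRAC_BITS)).astype(np.int32)

def from_fixed(q):
    return q.astype(np.float64) / (1 << FRAC_BITS)

def gaussian(x, center, width):
    # phi(x) = exp(-||x - c||^2 / (2 * sigma^2)), on quantized inputs
    d = from_fixed(to_fixed(x)) - from_fixed(to_fixed(center))
    return np.exp(-np.dot(d, d) / (2.0 * width ** 2))

def rbf_forward(x, centers, widths, weights, bias):
    # Four hidden Gaussian neurons feeding one sigmoid output neuron.
    hidden = np.array([gaussian(x, c, s) for c, s in zip(centers, widths)])
    z = float(np.dot(weights, hidden) + bias)
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid output

# Example with made-up parameters:
centers = [np.array([0.0, 0.0]), np.array([0.5, 0.5]),
           np.array([1.0, 0.0]), np.array([0.0, 1.0])]
widths = [0.5, 0.5, 0.5, 0.5]
weights = np.array([0.3, -0.2, 0.5, 0.1])
print(rbf_forward(np.array([0.4, 0.6]), centers, widths, weights, bias=0.05))
```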

