A Review of Plant Phenotypic Image Recognition Technology Based on Deep Learning

Electronics, 2021, Vol 10 (1), pp. 81
Author(s): Jianbin Xiong, Dezheng Yu, Shuangyin Liu, Lei Shu, Xiaochan Wang, ...

Plant phenotypic image recognition (PPIR) is an important branch of smart agriculture. In recent years, deep learning has achieved significant breakthroughs in image recognition. Consequently, PPIR technology that is based on deep learning is becoming increasingly popular. First, this paper introduces the development and application of PPIR technology, followed by its classification and analysis. Second, it presents the theory of four types of deep learning methods and their applications in PPIR. These methods include the convolutional neural network, deep belief network, recurrent neural network, and stacked autoencoder, and they are applied to identify plant species, diagnose plant diseases, etc. Finally, the difficulties and challenges of deep learning in PPIR are discussed.
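The review discusses these architectures at a conceptual level; as a point of reference, the sketch below shows a minimal CNN image classifier of the kind such plant-phenotyping work builds on. The class count, input size, and layer widths are illustrative assumptions, not the configuration of any paper covered by the review.

```python
# Minimal sketch of a CNN plant-species classifier of the kind the review surveys.
# Class count, image size, and layer sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class PlantCNN(nn.Module):
    def __init__(self, num_species: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_species)

    def forward(self, x):                      # x: (batch, 3, H, W)
        h = self.features(x).flatten(1)        # (batch, 32) pooled feature vector
        return self.classifier(h)              # unnormalized class scores

if __name__ == "__main__":
    model = PlantCNN(num_species=10)
    dummy = torch.randn(4, 3, 128, 128)        # 4 fake RGB leaf images
    print(model(dummy).shape)                  # torch.Size([4, 10])
```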

Sensors, 2019, Vol 19 (1), pp. 210
Author(s): Zied Tayeb, Juri Fedjaev, Nejla Ghaboosi, Christoph Richter, Lukas Everding, ...

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is difficult due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on an existing dataset, the 2b EEG dataset from “BCI Competition IV”. Overall, the deep learning models achieved better classification performance than state-of-the-art machine learning techniques, which could chart a route toward new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
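As a rough illustration of model (1), the sketch below shows an LSTM that classifies raw EEG windows into motor-imagery classes. The channel count, window length, hidden size, and two-class output are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch of an LSTM classifying raw EEG windows into motor-imagery classes.
# Channel count, window length, and hidden size are illustrative assumptions.
import torch
import torch.nn as nn

class EEGLSTM(nn.Module):
    def __init__(self, n_channels: int = 3, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, channels), raw EEG samples
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden), last hidden state
        return self.head(h_n[-1])      # class scores, e.g. left vs. right hand

if __name__ == "__main__":
    model = EEGLSTM()
    window = torch.randn(8, 500, 3)    # 8 windows of 500 samples x 3 electrodes
    print(model(window).shape)         # torch.Size([8, 2])
```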


2020, Vol 37 (9), pp. 1661-1668
Author(s): Min Wang, Shudao Zhou, Zhong Yang, Zhanhua Liu

Conventional classification methods rely on manually engineered features, and each processing step is independent, a form of “shallow learning.” As a result, the range of cloud categories to which such methods can be applied is limited. In this paper, we propose a new convolutional neural network (CNN) with deep learning ability, called CloudA, for ground-based cloud image recognition. We use the Singapore Whole-Sky Imaging Categories (SWIMCAT) sample library and a total-sky sample library to train and test CloudA. In particular, we visualize the cloud features captured by CloudA using the TensorBoard visualization method, and these features help us to understand the process of ground-based cloud classification. We compare this method with other commonly used methods to explore the feasibility of using CloudA to classify ground-based cloud images, and evaluation over a large number of experiments shows that the average accuracy of this method is approximately 98.63% for ground-based cloud classification.
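A minimal sketch in the spirit of CloudA is shown below: a small CNN whose intermediate feature maps are logged to TensorBoard for inspection. The layer sizes and the five-class output (matching the SWIMCAT categories) are assumptions, the paper's exact architecture may differ, and the logging call assumes the tensorboard package is installed.

```python
# Small CNN for ground-based cloud classification with feature maps logged to
# TensorBoard. Layer sizes and the five-class output are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter  # requires the tensorboard package

class SmallCloudNet(nn.Module):
    def __init__(self, n_classes: int = 5):        # e.g. the SWIMCAT categories
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        f1 = self.conv1(x)                          # first-stage feature maps
        f2 = self.conv2(f1)
        return self.fc(f2.flatten(1)), f1

if __name__ == "__main__":
    model = SmallCloudNet()
    images = torch.randn(2, 3, 125, 125)            # fake whole-sky patches
    logits, feats = model(images)
    writer = SummaryWriter("runs/cloud_demo")
    # Log one sample's 16 feature maps as single-channel images for inspection.
    writer.add_images("conv1_features", feats[0].detach().unsqueeze(1), dataformats="NCHW")
    writer.close()
    print(logits.shape)                             # torch.Size([2, 5])
```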


Forecasting, 2021, Vol 4 (1), pp. 1-25
Author(s): Thabang Mathonsi, Terence L. van Zyl

Hybrid methods have been shown to outperform pure statistical and pure deep learning methods on forecasting tasks and at quantifying the uncertainty associated with those forecasts (prediction intervals). One example is the Exponential Smoothing Recurrent Neural Network (ES-RNN), a hybrid between a statistical forecasting model and a recurrent neural network variant. ES-RNN achieves a 9.4% improvement in absolute error in the Makridakis-4 (M4) Forecasting Competition. This improvement, and similar outperformance from other hybrid models, has primarily been demonstrated only on univariate datasets. Difficulties with applying hybrid forecast methods to multivariate data include (i) the high computational cost involved in hyperparameter tuning for models that are not parsimonious, (ii) challenges associated with the auto-correlation inherent in the data, and (iii) complex dependency (cross-correlation) between the covariates that may be hard to capture. This paper presents Multivariate Exponential Smoothing Long Short-Term Memory (MES-LSTM), a generalized multivariate extension of ES-RNN that overcomes these challenges. MES-LSTM utilizes a vectorized implementation. We test MES-LSTM on several aggregated coronavirus disease 2019 (COVID-19) morbidity datasets and find that our hybrid approach shows consistent, significant improvement over pure statistical and pure deep learning methods in forecast accuracy and prediction interval construction.
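The hybrid pattern can be sketched under simplifying assumptions: each covariate series is levelled with plain simple exponential smoothing, the level-normalized series are fed jointly to an LSTM, and the one-step forecast is re-scaled by the last smoothed level. This illustrates the exponential-smoothing-plus-RNN idea only; it is not the actual MES-LSTM formulation, and the alpha value, hidden size, and toy data are assumptions.

```python
# Simplified ES + LSTM hybrid: smooth each series, model the normalized residual
# structure with an LSTM, then re-apply the smoothed level to the forecast.
import numpy as np
import torch
import torch.nn as nn

def simple_exp_smoothing(series: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Return the smoothed level l_t = alpha*y_t + (1-alpha)*l_{t-1}."""
    level = np.empty_like(series, dtype=float)
    level[0] = series[0]
    for t in range(1, len(series)):
        level[t] = alpha * series[t] + (1 - alpha) * level[t - 1]
    return level

class MultivariateLSTMForecaster(nn.Module):
    def __init__(self, n_series: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_series, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_series)     # one-step-ahead forecast per series

    def forward(self, x):                          # x: (batch, time, n_series)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])                  # forecast for the next step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.poisson(lam=50, size=(200, 3)).astype(float)       # 3 toy morbidity series
    levels = np.stack([simple_exp_smoothing(raw[:, i]) for i in range(3)], axis=1)
    normalized = raw / levels                                     # remove the local level
    window = torch.tensor(normalized[-30:], dtype=torch.float32).unsqueeze(0)
    model = MultivariateLSTMForecaster(n_series=3)
    next_normalized = model(window)                               # (1, 3)
    forecast = next_normalized.detach().numpy() * levels[-1]      # re-apply the level
    print(forecast)
```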


2021, Vol 290, pp. 02020
Author(s): Boyu Zhang, Xiao Wang, Shudong Li, Jinghua Yang

Underwater shipwreck side-scan sonar samples are currently scarce and difficult to label, and with such small sample sizes the image recognition accuracy of a convolutional neural network model is low. In this study, we propose an image recognition method for shipwreck side-scan sonar that combines transfer learning with deep learning. In the non-transfer-learning baseline, shipwreck sonar samples were used to train the network, and the results were kept as the control group. For transfer learning, weakly correlated data were first used to train the network, the network parameters were then transferred to a new network, and the shipwreck sonar data were used for fine-tuning; these steps were repeated using strongly correlated data. Experiments were carried out on the LeNet-5, AlexNet, GoogLeNet, ResNet, and VGG networks. Without transfer learning, the highest accuracy was obtained on the ResNet network (86.27%). Using weakly correlated data for transfer training, the highest accuracy was obtained on the VGG network (92.16%). Using strongly correlated data for transfer training, the highest accuracy was also obtained on the VGG network (98.04%). In all network architectures, transfer learning improved the correct recognition rate of the convolutional neural network models. The experiments show that transfer learning combined with deep learning improves the accuracy and generalization of convolutional neural networks in the case of small sample sizes.
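The transfer step can be illustrated as follows: copy the parameters of a network pretrained on a larger, correlated dataset and fine-tune only the classifier head on the small shipwreck set. An ImageNet-pretrained VGG-16 is used here purely as a stand-in for the correlated source networks in the paper, and the two-class output is an assumption.

```python
# Transfer-learning sketch: reuse pretrained convolutional filters and fine-tune
# only a new classifier head on the small side-scan sonar dataset.
import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(n_classes: int = 2, freeze_features: bool = True):
    # Downloads ImageNet weights on first use; stands in for the source network.
    model = models.vgg16(weights="IMAGENET1K_V1")
    if freeze_features:
        for p in model.features.parameters():       # keep convolutional filters fixed
            p.requires_grad = False
    # Replace the final fully connected layer, e.g. shipwreck vs. background.
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, n_classes)
    return model

if __name__ == "__main__":
    model = build_transfer_model()
    sonar_batch = torch.randn(4, 3, 224, 224)       # fake side-scan sonar patches
    print(model(sonar_batch).shape)                 # torch.Size([4, 2])
    trainable = [n for n, p in model.named_parameters() if p.requires_grad]
    print(len(trainable), "parameter tensors will be fine-tuned")
```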


2015, Vol 2015 (3), pp. 117-126
Author(s): Dmitriy Budylskiy, Aleksandr Podvesovskiy

This paper addresses the problem of aspect-based sentiment analysis and describes four deep learning models: a convolutional neural network, a recurrent neural network, and GRU and LSTM networks. We evaluated these models on a Russian text dataset from SentiRuEval-2015. The results show good efficiency and high potential for further natural language processing applications.
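As an illustration of one of the four model families compared, the sketch below shows a GRU sentiment classifier over token indices. The vocabulary size, embedding size, and three-way polarity output are assumptions, not the configuration evaluated on SentiRuEval-2015.

```python
# Minimal GRU sentiment classifier over tokenized text; sizes are illustrative.
import torch
import torch.nn as nn

class GRUSentiment(nn.Module):
    def __init__(self, vocab_size: int = 20000, emb: int = 100,
                 hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        _, h_n = self.gru(self.embed(token_ids))   # h_n: (1, batch, hidden)
        return self.head(h_n[-1])                  # negative / neutral / positive scores

if __name__ == "__main__":
    model = GRUSentiment()
    batch = torch.randint(1, 20000, (8, 40))       # 8 fake tokenized sentences
    print(model(batch).shape)                      # torch.Size([8, 3])
```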


2018, Vol 10 (12), pp. 116
Author(s): Yonghua Zhu, Xun Gao, Weilin Zhang, Shenkai Liu, Yuanyuan Zhang

The widespread sharing of opinions about products and services on the Internet has generated a large quantity of comment data, which contains great business value. Comment sentences often cover several aspects, and the sentiment toward these aspects may differ, which makes it meaningless to assign a single overall sentiment polarity to the sentence. In this paper, we introduce the Attention-based Aspect-level Recurrent Convolutional Neural Network (AARCNN) to analyze remarks at the aspect level. The model integrates an attention mechanism and target-information analysis, which enables it to concentrate on the important parts of the sentence and to make full use of the target information. The model uses a bidirectional LSTM (Bi-LSTM) to build the memory of the sentence, and a CNN is then applied to extract attention from that memory to obtain an attentive sentence representation. The model uses aspect embedding to analyze the target information of the representation and finally outputs the sentiment polarity through a softmax layer. The model was tested on multi-language datasets and demonstrated better performance than conventional deep learning methods.
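A hedged sketch of the pipeline described above: a Bi-LSTM builds the sentence memory, a 1-D CNN conditioned on an aspect embedding produces attention weights over that memory, and the attended representation is classified through a softmax layer. The dimensions and the exact way the aspect embedding is injected are assumptions, not the paper's specification.

```python
# Sketch of a Bi-LSTM memory + CNN attention + aspect embedding classifier.
import torch
import torch.nn as nn

class AARCNNSketch(nn.Module):
    def __init__(self, vocab=10000, n_aspects=5, emb=100, hidden=64, n_classes=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.aspect_emb = nn.Embedding(n_aspects, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn_conv = nn.Conv1d(2 * hidden + emb, 1, kernel_size=3, padding=1)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, tokens, aspect):                      # tokens: (B, T), aspect: (B,)
        memory, _ = self.bilstm(self.word_emb(tokens))      # (B, T, 2H) sentence memory
        a = self.aspect_emb(aspect).unsqueeze(1).expand(-1, memory.size(1), -1)
        scores = self.attn_conv(torch.cat([memory, a], dim=-1).transpose(1, 2))  # (B, 1, T)
        weights = torch.softmax(scores, dim=-1)             # attention over tokens
        sentence = torch.bmm(weights, memory).squeeze(1)    # (B, 2H) attentive representation
        return self.out(sentence)                           # polarity scores (pre-softmax)

if __name__ == "__main__":
    model = AARCNNSketch()
    tokens = torch.randint(1, 10000, (4, 25))
    aspect = torch.randint(0, 5, (4,))
    print(model(tokens, aspect).shape)                      # torch.Size([4, 3])
```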


2021, Vol 5 (3), pp. 584-593
Author(s): Naufal Hilmiaji, Kemas Muslim Lhaksmana, Mahendra Dwifebri Purbolaksono

Identifying emotion in text has become more tractable, especially with the advancement of deep learning methods for text classification. Despite some effort to identify emotion in Indonesian tweets, the reported performance has not yet reached acceptable levels. To address this problem, this paper implements a classification model using a convolutional neural network (CNN), which has demonstrated strong performance in text classification. For easy comparison with previous research, classification is performed on the same dataset, which consists of 4,403 Indonesian tweets labeled with five emotion classes: anger, fear, joy, love, and sadness. The evaluation achieves a precision, recall, and F1-score of 90.1%, 90.3%, and 90.2%, respectively, while the highest accuracy reaches 89.8%. These results outperform previous research on the same classification task and dataset.
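A minimal sketch of a CNN text classifier for the five emotion classes named above (anger, fear, joy, love, sadness) is shown below. The vocabulary size, filter widths, and embedding size are assumptions; the paper's configuration may differ.

```python
# CNN text classifier: parallel 1-D convolutions over word embeddings, max-pooled
# and concatenated, then a linear layer over the five emotion classes.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, vocab_size=15000, emb=100, n_filters=64, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, n_filters, kernel_size=k) for k in (3, 4, 5)]
        )
        self.head = nn.Linear(3 * n_filters, n_classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)       # (batch, emb, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.head(torch.cat(pooled, dim=1))      # scores for the 5 emotions

if __name__ == "__main__":
    model = EmotionCNN()
    tweets = torch.randint(1, 15000, (8, 30))           # 8 fake tokenized tweets
    print(model(tweets).shape)                          # torch.Size([8, 5])
```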

