Chlorophyll content for millet leaf using hyperspectral imaging and an attention-convolutional neural network

2020 ◽  
Vol 50 (3) ◽  
Author(s):  
Wang Xiaoyan ◽  
Li Zhiwei ◽  
Wang Wenjun ◽  
Wang Jiawei

ABSTRACT: Chlorophyll is a major factor affecting photosynthesis and, consequently, crop growth and yield. In this study, we devised a chlorophyll-content detection model for millet leaves at different stages of growth based on hyperspectral data. Hyperspectral images of millet leaves were obtained over a wavelength range of 380–1000 nm using a hyperspectral imager. Threshold segmentation was performed with near-infrared (NIR) reflectance and the normalized difference vegetation index (NDVI) to automatically acquire the regions of interest (ROI). Furthermore, the raw spectral data were preprocessed using multivariate scatter correction (MSC). A correlation coefficient-successive projections algorithm (CC-SPA) was used to extract the characteristic wavelengths, and characteristic parameters were extracted from the spectral and image information. A partial least squares regression (PLSR) prediction model was established based on single characteristic parameters and on multi-characteristic parameter fusion. The determination coefficient (Rv²) and root-mean-square error (RMSEv) of the validation set for the multi-characteristic parameter fusion model were 0.813 and 1.766, respectively, outperforming the single-characteristic-parameter models. Based on the multi-characteristic parameter fusion, an attention-convolutional neural network (attention-CNN) (Rv² = 0.839, RMSEv = 1.451, RPD = 2.355) was established, which is more effective than the PLSR (Rv² = 0.813, RMSEv = 1.766, RPD = 2.167) and least squares support vector machine (LS-SVM) (Rv² = 0.806, RMSEv = 1.576, RPD = 2.061) models. These results indicate that combining hyperspectral imaging with an attention-CNN is beneficial for monitoring nutrient elements in crops.
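The NDVI threshold segmentation used above to acquire the ROI can be sketched as follows; this is a minimal illustration, and the reflectance values and the 0.4 threshold are assumptions, not values from the study:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

def roi_mask(nir_band, red_band, threshold=0.4):
    """Threshold NDVI to keep leaf pixels (True) and drop background (False)."""
    return [ndvi(n, r) > threshold for n, r in zip(nir_band, red_band)]

# Toy reflectance values: leaf pixels have high NIR and low red reflectance.
nir_band = [0.80, 0.78, 0.20]
red_band = [0.10, 0.12, 0.18]
print(roi_mask(nir_band, red_band))  # [True, True, False]
```

In practice the mask would be applied per pixel over the full hyperspectral cube before MSC preprocessing.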

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 742
Author(s):  
Canh Nguyen ◽  
Vasit Sagan ◽  
Matthew Maimaitiyiming ◽  
Maitiniyazi Maimaitijiang ◽  
Sourav Bhadra ◽  
...  

Early detection of grapevine viral diseases is critical for early interventions in order to prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92° N, 92.28° W), with two grapevine groups, namely healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400–1000 nm hyperspectral sensor (Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate two reflectance spectra patterns between healthy and GVCV vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their importance to the classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial–spectral features) classification within a framework involving deep learning architectures and traditional machine learning.
The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visible (VIS) region of 400–700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
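Two of the discriminative vegetation indices named above can be computed directly from band reflectances. The formulas below are the commonly published definitions (NPQI from the 415/435 nm bands, PSRI from the 678/500/750 nm bands), not formulas quoted from this study, and the reflectance values are illustrative only:

```python
def npqi(r415, r435):
    """Normalized pheophytization index, commonly (R415 - R435) / (R415 + R435)."""
    return (r415 - r435) / (r415 + r435)

def psri(r678, r500, r750):
    """Plant senescence reflectance index, commonly (R678 - R500) / R750."""
    return (r678 - r500) / r750

# Toy reflectances at the named wavelengths (illustrative values).
print(round(npqi(0.05, 0.04), 3))        # 0.111
print(round(psri(0.08, 0.05, 0.45), 3))  # 0.067
```

Each index collapses a pixel's spectrum to one scalar, which is what makes the VI-wise feature space small enough for the SVM to work well.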


2018 ◽  
Vol 7 (11) ◽  
pp. 418 ◽  
Author(s):  
Tian Jiang ◽  
Xiangnan Liu ◽  
Ling Wu

Accurate and timely information about rice planting areas is essential for crop yield estimation, global climate change research and agricultural resource management. In this study, we present a novel pixel-level classification approach that uses a convolutional neural network (CNN) model to extract the features of the enhanced vegetation index (EVI) time series curve for classification. The goal is to explore the practicability of deep learning techniques for rice recognition in complex landscape regions, where rice is easily confused with its surroundings, using mid-resolution remote sensing images. A transfer learning strategy is utilized to fine-tune a pre-trained CNN model and obtain the temporal features of the EVI curve. Support vector machine (SVM), a traditional machine learning approach, is also implemented in the experiment. Finally, we evaluate the accuracy of the two models. Results show that our model performs better than the SVM, with overall accuracies of 93.60% and 91.05%, respectively. Therefore, this technique is appropriate for estimating rice planting areas in southern China on the basis of a pre-trained CNN model using time series data. More opportunities and potential can be found for crop classification by combining remote sensing with deep learning techniques in future studies.
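The EVI time series fed to the CNN is built per pixel from the standard MODIS-style formula. The sketch below uses the standard coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1); the per-date reflectance triples are invented for illustration:

```python
def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    """MODIS-style enhanced vegetation index with standard coefficients."""
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

# A toy (NIR, red, blue) observation per acquisition date for one pixel;
# the resulting curve is the kind of 1D input the CNN classifies.
dates = [(0.30, 0.08, 0.04), (0.45, 0.06, 0.03), (0.50, 0.05, 0.03)]
series = [round(evi(n, r, b), 3) for n, r, b in dates]
print(series)  # [0.372, 0.615, 0.714]
```

A rising-then-plateauing EVI curve over the season is exactly the temporal signature the fine-tuned CNN learns to separate rice from its surroundings.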


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6666
Author(s):  
Kamil Książek ◽  
Michał Romaszewski ◽  
Przemysław Głomb ◽  
Bartosz Grabowski ◽  
Michał Cholewa

In recent years, growing interest in deep learning neural networks has raised a question on how they can be used for effective processing of high-dimensional datasets produced by hyperspectral imaging (HSI). HSI, traditionally viewed as being within the scope of remote sensing, is used in non-invasive substance classification. One of the areas of potential application is forensic science, where substance classification on the scenes is important. An example problem from that area—blood stain classification—is a case study for the evaluation of methods that process hyperspectral data. To investigate the deep learning classification performance for this problem we have performed experiments on a dataset which has not been previously tested using this kind of model. This dataset consists of several images with blood and blood-like substances like ketchup, tomato concentrate, artificial blood, etc. To test both the classic approach to hyperspectral classification and a more realistic application-oriented scenario, we have prepared two different sets of experiments. In the first one, Hyperspectral Transductive Classification (HTC), both a training and a test set come from the same image. In the second one, Hyperspectral Inductive Classification (HIC), a test set is derived from a different image, which is more challenging for classifiers but more useful from the point of view of forensic investigators. We conducted the study using several architectures like 1D, 2D and 3D convolutional neural networks (CNN), a recurrent neural network (RNN) and a multilayer perceptron (MLP). The performance of the models was compared with baseline results of Support Vector Machine (SVM). We have also presented a model evaluation method based on t-SNE and confusion matrix analysis that allows us to detect and eliminate some cases of model undertraining. 
Our results show that in the transductive case, all models, including the MLP and the SVM, have comparable performance, with no clear advantage of deep learning models. The Overall Accuracy range across all models is 98–100% for the easier image set, and 74–94% for the more difficult one. However, in the more challenging inductive case, selected deep learning architectures offer a significant advantage; their best Overall Accuracy is in the range of 57–71%, improving on the baseline set by the non-deep models by up to 9 percentage points. We have presented a detailed analysis of the results and a discussion, including a summary of conclusions for each tested architecture. An analysis of per-class errors shows that the score for each class is highly model-dependent. Considering this, and the fact that the best-performing models come from two different architecture families (3D CNN and RNN), our results suggest that tailoring the deep neural network architecture to hyperspectral data is still an open problem.
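The HTC/HIC distinction above comes down to how pixels are split into train and test sets. A minimal sketch (image IDs and pixel lists are hypothetical placeholders for labeled hyperspectral pixels):

```python
def htc_split(pixels, train_ratio=0.5):
    """Transductive (HTC): train and test pixels come from the same image."""
    cut = int(len(pixels) * train_ratio)
    return pixels[:cut], pixels[cut:]

def hic_split(labeled_pixels, test_image_id):
    """Inductive (HIC): the test set is every pixel of a held-out image."""
    train = [p for img_id, p in labeled_pixels if img_id != test_image_id]
    test = [p for img_id, p in labeled_pixels if img_id == test_image_id]
    return train, test

pixels = [("A", 0), ("A", 1), ("A", 2), ("A", 3), ("B", 0), ("B", 1)]
train, test = hic_split(pixels, "B")
print(len(train), len(test))  # 4 2
```

HIC is harder because acquisition conditions differ between images, so the test distribution genuinely differs from training, mirroring a forensic investigator analyzing a new scene.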


Molecules ◽  
2018 ◽  
Vol 23 (11) ◽  
pp. 2831 ◽  
Author(s):  
Na Wu ◽  
Chu Zhang ◽  
Xiulin Bai ◽  
Xiaoyue Du ◽  
Yong He

Rapid and accurate discrimination of Chrysanthemum varieties is very important for producers, consumers and market regulators. The feasibility of using hyperspectral imaging combined with a deep convolutional neural network (DCNN) algorithm to identify Chrysanthemum varieties was studied in this paper. Hyperspectral images in the spectral range of 874–1734 nm were collected for 11,038 samples of seven varieties. Principal component analysis (PCA) was introduced for qualitative analysis. Score images of the first five PCs were used to explore the differences between varieties. The second-derivative (2nd derivative) method was employed to select optimal wavelengths. Support vector machine (SVM), logistic regression (LR), and DCNN models were used for discrimination using full wavelengths and optimal wavelengths. The results showed that all models based on full wavelengths achieved better performance than those based on optimal wavelengths. The DCNN based on full wavelengths obtained the best results, with an accuracy close to 100% on both the training and testing sets. This optimal model was utilized to visualize the classification results. The overall results indicated that hyperspectral imaging combined with a DCNN is a very powerful tool for rapid and accurate discrimination of Chrysanthemum varieties. The proposed method exhibits important potential for developing an online Chrysanthemum evaluation system.
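Second-derivative wavelength selection, as used above, looks for bands where the spectrum's curvature is largest. A minimal finite-difference sketch (the wavelengths, spectrum values, and k = 2 are invented for illustration):

```python
def second_derivative(spectrum):
    """Central finite-difference second derivative along the wavelength axis."""
    return [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
            for i in range(1, len(spectrum) - 1)]

def top_wavelengths(wavelengths, spectrum, k=2):
    """Pick the k interior wavelengths with the largest |2nd derivative|."""
    d2 = second_derivative(spectrum)
    ranked = sorted(zip(wavelengths[1:-1], d2),
                    key=lambda t: abs(t[1]), reverse=True)
    return [w for w, _ in ranked[:k]]

wl = [900, 950, 1000, 1050, 1100]          # nm
spec = [0.20, 0.22, 0.40, 0.24, 0.23]      # reflectance, toy values
print(top_wavelengths(wl, spec))  # [1000, 950]
```

The sharp peak at 1000 nm produces the strongest curvature, so that band (and its shoulder) is selected; a real pipeline would apply smoothing (e.g. Savitzky–Golay) before differentiating.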


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4065 ◽  
Author(s):  
Zhu ◽  
Zhou ◽  
Zhang ◽  
Bao ◽  
Wu ◽  
...  

Soybean variety is connected to stress resistance ability, as well as nutritional and commercial value. Near-infrared hyperspectral imaging was applied to classify three varieties of soybeans (Zhonghuang37, Zhonghuang41, and Zhonghuang55). Pixel-wise spectra were extracted and preprocessed, and average spectra were also obtained. Convolutional neural networks (CNN) using the average spectra and pixel-wise spectra of different numbers of soybeans were built. Pixel-wise CNN models obtained good performance predicting pixel-wise spectra and average spectra. With the increase of soybean numbers, performances were improved, with the classification accuracy of each variety over 90%. Traditionally, the number of samples used for modeling is large. It is time-consuming and requires labor to obtain hyperspectral data from large batches of samples. To explore the possibility of achieving decent identification results with few samples, a majority vote was also applied to the pixel-wise CNN models to identify a single soybean variety. Prediction maps were obtained to present the classification results intuitively. Models using pixel-wise spectra of 60 soybeans showed equivalent performance to those using the average spectra of 810 soybeans, illustrating the possibility of discriminating soybean varieties using few samples by acquiring pixel-wise spectra.
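The majority vote over pixel-wise CNN predictions described above can be sketched in a few lines; the pixel-level labels below are invented for illustration:

```python
from collections import Counter

def majority_vote(pixel_predictions):
    """Assign a soybean the variety predicted for the most of its pixels."""
    return Counter(pixel_predictions).most_common(1)[0][0]

# Pixel-wise CNN predictions for the pixels of a single soybean.
pixels = ["Zhonghuang37", "Zhonghuang37", "Zhonghuang41", "Zhonghuang37"]
print(majority_vote(pixels))  # Zhonghuang37
```

Voting over many pixels per seed is what lets a model trained on few soybeans match the performance of average-spectrum models trained on hundreds.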


2019 ◽  
Vol 8 (2) ◽  
pp. 3960-3963

In this paper, we present exploratory experiments using a deep learning convolutional neural network framework to classify crops into cotton, sugarcane and mulberry. In this contribution we used Earth Observing-1 Hyperion hyperspectral remote sensing data as the input. Structured data were extracted from the hyperspectral data using a remote sensing tool. An analytical assessment shows that the convolutional neural network (CNN) gives higher accuracy than classical support vector machine (SVM) and random forest methods. It was observed that the accuracy of the SVM is 75%, the accuracy of random forest classification is 78%, and the accuracy of the CNN using the Adam optimizer is 99.3% with a loss of 2.74%. The CNN using RMSProp gives the same accuracy of 99.3% with a loss of 4.43%. This identified crop information will be used for estimating crop production and informing market strategies.


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3459
Author(s):  
Véronique Gomes ◽  
Ana Mendes-Ferreira ◽  
Pedro Melo-Pinto

Remote sensing technology, such as hyperspectral imaging, in combination with machine learning algorithms, has emerged as a viable tool for rapid and nondestructive assessment of wine grape ripeness. However, the differences in terroir, together with the climatic variations and the variability exhibited by different grape varieties, have a considerable impact on the grape ripening stages within a vintage and between vintages and, consequently, on the robustness of the predictive models. To address this challenge, we present a novel one-dimensional convolutional neural network architecture-based model for the prediction of sugar content and pH, using reflectance hyperspectral data from different vintages. We aimed to evaluate the model’s generalization capacity for different varieties and for a different vintage not employed in the training process, using independent test sets. A transfer learning mechanism, based on the proposed convolutional neural network, was also used to evaluate improvements in the model’s generalization. Overall, the results for generalization ability showed a very good performance with RMSEP values of 1.118 °Brix and 1.085 °Brix for sugar content and 0.199 and 0.183 for pH, for test sets using different varieties and a different vintage, respectively, improving and updating the current state of the art.
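The RMSEP values reported above are root-mean-square errors of prediction on an independent test set. A minimal sketch of the metric (the °Brix values are invented for illustration):

```python
def rmsep(predicted, observed):
    """Root-mean-square error of prediction over an independent test set."""
    n = len(predicted)
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5

# Toy sugar-content predictions vs. refractometer readings, in degrees Brix.
pred = [20.1, 18.9, 22.4]
obs = [21.0, 19.5, 21.8]
print(round(rmsep(pred, obs), 3))  # 0.714
```

Because RMSEP is in the units of the target, the reported ~1.1 °Brix and ~0.19 pH errors can be read directly against the practical tolerances of harvest decisions.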


Author(s):  
Niha Kamal Basha ◽  
Aisha Banu Wahab

Absence seizure is a type of brain disorder in which the subject experiences sudden lapses in attention, i.e., sudden changes in brain stimulation. This disorder is most widely found in children (5–18 years). The associated electroencephalogram (EEG) signals are captured with a long-term monitoring system and analyzed individually. In this paper, a convolutional neural network is proposed to empower the monitoring system with automatic detection of absence seizure: after preprocessing, single-channel EEG seizure features, such as power, the log sum of the wavelet transform, cross-correlation, and the mean phase variance of each frame in a window, are extracted and classified into a normal or absence-seizure class. The training data are collected from normal and absence-seizure subjects in the form of electroencephalograms. The objective is to perform automatic detection of absence seizure using a single-channel electroencephalogram signal as input. These data are used to train the proposed convolutional neural network to extract features and classify absence seizure. The network consists of three layers: (1) a convolutional layer, which extracts the features in the form of a vector; (2) a pooling layer, which reduces the dimensionality of the convolutional layer's output; and (3) a fully connected layer, where the softmax activation function is used to find the probability distribution over the output classes. This paper describes the automatic detection of absence seizure in detail and provides a comparative analysis of classification between a support vector machine and the convolutional neural network. The proposed approach outperforms the support vector machine by 80% in automatic detection of absence seizure, as validated using a confusion matrix.
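The framing and power-feature step described above can be sketched as follows; the frame length and signal values are illustrative assumptions, and power is taken here as mean squared amplitude, one common definition:

```python
def frame_signal(signal, frame_len):
    """Split a single-channel EEG signal into non-overlapping frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def frame_power(frame):
    """Mean squared amplitude of one frame (a simple 'power' feature)."""
    return sum(x * x for x in frame) / len(frame)

# Toy single-channel samples; real EEG would be sampled at e.g. 256 Hz.
sig = [0.0, 1.0, -1.0, 2.0, 0.0, 1.0]
frames = frame_signal(sig, 3)
print([round(frame_power(f), 3) for f in frames])  # [0.667, 1.667]
```

Each frame's feature vector (power plus the wavelet, cross-correlation, and phase features) then becomes one input row for the classifier.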


Author(s):  
Wanli Wang ◽  
Botao Zhang ◽  
Kaiqi Wu ◽  
Sergey A Chepinskiy ◽  
Anton A Zhilenkov ◽  
...  

In this paper, a hybrid method based on deep learning is proposed to visually classify terrains encountered by mobile robots. Considering the limited computing resources on mobile robots and the requirement for high classification accuracy, the proposed hybrid method combines a convolutional neural network with a support vector machine to keep a high classification accuracy while improving efficiency. The key idea is that the convolutional neural network performs a multi-class classification while the support vector machine simultaneously performs a two-class classification. The two-class classification performed by the support vector machine is aimed at the one kind of terrain that users are most concerned with. The results of the two classifications are consolidated to obtain the final classification result. The convolutional neural network used in this method is modified for on-board use on mobile robots. To enhance efficiency, the convolutional neural network has a simple architecture. The convolutional neural network and the support vector machine are trained and tested using RGB images of six kinds of common terrain. Experimental results demonstrate that this method can help robots classify terrains accurately and efficiently. Therefore, the proposed method has significant potential for on-board use on mobile robots.
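One way the two classifiers' outputs could be consolidated is sketched below. The fusion rule (binary SVM overrides the CNN for the terrain of interest) is an assumption for illustration, not the paper's exact consolidation scheme, and the class names and probabilities are invented:

```python
def fuse(cnn_probs, svm_is_target, target_class):
    """Consolidate a multi-class CNN prediction with a binary SVM verdict.

    If the binary SVM flags the terrain users care about most, that class
    wins; otherwise fall back to the CNN's most probable class.
    """
    if svm_is_target:
        return target_class
    return max(cnn_probs, key=cnn_probs.get)

probs = {"grass": 0.5, "gravel": 0.3, "sand": 0.2}
print(fuse(probs, svm_is_target=True, target_class="sand"))   # sand
print(fuse(probs, svm_is_target=False, target_class="sand"))  # grass
```

Running the cheap binary SVM alongside the CNN gives a dedicated, higher-recall detector for the safety-critical terrain without enlarging the CNN itself.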

