Yield Estimation of High-Density Cotton Fields Using Low-Altitude UAV Imaging and Deep Learning

Author(s):  
Fei Li ◽  
Jingya Bai ◽  
Mengyun Zhang ◽  
Ruoyu Zhang

Abstract Background: Different from other parts of the world, China has its own cotton planting pattern. In Xinjiang, China, cotton is densely planted in wide-narrow rows to increase yield, which makes accurate evaluation of cotton yield by remote sensing difficult because branches are occluded and overlapping. Results: In this study, low-altitude unmanned aerial vehicle (UAV) imaging and deep convolutional neural networks (DCNN) were used to estimate the yields of densely planted cotton. Images of the cotton field were acquired by a UAV at a height of 5 m. Cotton bolls were manually harvested and weighed afterwards. Then, a modified DCNN model, termed CD-SegNet, was developed for pixel-wise cotton boll segmentation by applying encoder-decoder recombination and dilated convolution. Linear regression analysis was employed to relate the cotton boll pixel ratio to cotton yield. Yield estimates for four cotton fields were verified after machine harvesting and weighing. The results showed that CD-SegNet outperformed the other tested models, including SegNet, support vector machine (SVM), and random forest (RF). The average error of the estimated yield of the cotton fields was 6.2%. Conclusions: Overall, yield estimation of densely planted cotton based on low-altitude UAV imaging is feasible. This study provides a methodological reference for cotton yield estimation in China.
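For illustration, a minimal sketch of the regression step described above — relating the cotton boll pixel ratio from a segmentation mask to measured yield — is given below, assuming the CD-SegNet masks and plot-level yields are already available; the file names and array shapes are hypothetical, not from the original study.

import numpy as np
from sklearn.linear_model import LinearRegression

def boll_pixel_ratio(mask: np.ndarray) -> float:
    # Fraction of pixels labelled as cotton boll in a binary segmentation mask.
    return float(mask.sum()) / mask.size

# Hypothetical training data: one segmented UAV image and one measured yield per plot.
masks = [np.load(f"plot_{i}_mask.npy") for i in range(10)]   # 0/1 arrays
yields_kg = np.load("plot_yields.npy")                       # shape (10,)

X = np.array([[boll_pixel_ratio(m)] for m in masks])
reg = LinearRegression().fit(X, yields_kg)

# Estimate the yield of a new field from the boll-pixel ratio of its mask.
new_ratio = boll_pixel_ratio(np.load("new_field_mask.npy"))
print("estimated yield (kg):", reg.predict([[new_ratio]])[0])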

2019 ◽  
Vol 62 (2) ◽  
pp. 393-404 ◽  
Author(s):  
Aijing Feng ◽  
Meina Zhang ◽  
Kenneth A. Sudduth ◽  
Earl D. Vories ◽  
Jianfeng Zhou

Abstract. Accurate estimation of crop yield before harvest, especially in early growth stages, is important for farmers and researchers to optimize field management and evaluate crop performance. However, existing in-field methods for estimating crop yield are not efficient. The goal of this research was to evaluate the performance of a UAV-based remote sensing system with a low-cost RGB camera to estimate cotton yield based on plant height. The UAV system acquired images at 50 m above ground level over a cotton field at the first flower growth stage. Waypoints and flight speed were selected to allow >70% image overlap in both forward and side directions. Images were processed to develop a geo-referenced orthomosaic image and a digital elevation model (DEM) of the field, which was used to extract plant height by calculating the difference in elevation between the crop canopy and the bare soil surface. Twelve ground reference points with known height were deployed in the field to validate the UAV-based height measurement. Geo-referenced yield data were aligned to the plant height map based on GPS and image features. Correlation analysis between yield and plant height was conducted row-by-row with and without row registration. Pearson correlation coefficients between yield and plant height with row registration for all individual rows were in the range of 0.66 to 0.96 and were higher than those without row registration (0.54 to 0.95). A linear regression model using plant height was able to estimate yield with a root mean square error of 550 kg ha-1 and a mean absolute error of 420 kg ha-1. Locations with low yield were analyzed to identify potential reasons, and it was found that water stress and coarse soil texture, as indicated by low soil apparent electrical conductivity (ECa), might contribute to the low yield. The findings indicate that the UAV-based remote sensing system equipped with a low-cost digital camera was potentially able to monitor plant growth status and estimate cotton yield with acceptable errors. Keywords: Cotton, Geo-registration, Plant height, UAV-based remote sensing, Yield estimation.
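As an illustration of the height-to-yield workflow described above (not the authors' exact pipeline), the sketch below assumes the photogrammetry step has already produced co-registered canopy and bare-soil elevation rasters and a row-level yield file; all array and file names are hypothetical.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error

canopy_dem = np.load("canopy_dem.npy")    # crop canopy elevation (m)
soil_dem = np.load("bare_soil_dem.npy")   # bare soil elevation (m)
plant_height = canopy_dem - soil_dem      # per-pixel plant height (m)

# Pair mean plant height per row segment with geo-registered yield (kg/ha).
row_height = np.load("row_mean_height.npy").reshape(-1, 1)
row_yield = np.load("row_yield_kg_ha.npy")

model = LinearRegression().fit(row_height, row_yield)
pred = model.predict(row_height)
print("RMSE (kg/ha):", np.sqrt(mean_squared_error(row_yield, pred)))
print("MAE  (kg/ha):", mean_absolute_error(row_yield, pred))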


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5184
Author(s):  
Linghua Meng ◽  
Huanjun Liu ◽  
Susan L. Ustin ◽  
Xinle Zhang

Research on fusion modeling of high spatial and temporal resolution images typically uses moderate-resolution imaging spectroradiometer (MODIS) products at 500 m and 250 m resolution with Landsat images at 30 m, but the effects of the reference image date and of the 'mixed pixel' nature of MODIS images on the results are not often considered. In this study, we evaluated those effects using the flexible spatiotemporal data fusion model (FSDAF) to generate fusion images with both high spatial resolution and frequent coverage over three cotton field plots in the San Joaquin Valley of California, USA. Landsat images of different dates (day-of-year (DOY) 174, 206, and 254, representing the early, middle, and end stages of the growing season, respectively) were used as reference images in fusion with two MODIS products (MOD09GA and MOD13Q1) to produce new time-series fusion images with improved temporal sampling over that provided by Landsat alone. The impacts of the different Landsat reference dates and of the degree of mixing of the two MODIS products on yield estimation accuracy were evaluated. A mixed degree index (MDI) was constructed to evaluate the accuracy of the time-series fusion results for the different cotton plots, after which the different yield estimation models were compared. The results show the following: (1) there is a strong correlation (above 0.6) between cotton yield and both the Normalized Difference Vegetation Index (NDVI) from Landsat (NDVIL30) and the NDVI from the fusion of Landsat with MOD13Q1 (NDVIF250). (2) Using a mid-season Landsat image as the reference for the fusion of MODIS imagery provides better yield estimation, 14.73% and 17.26% higher than using reference images from early or late in the season, respectively. (3) The accuracy of the yield estimation models differs among the three plots and relates to the MDI of the plots and the types of surrounding crops. These results can be used as a reference for data fusion for vegetation monitoring using remote sensing at the field scale.
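A minimal sketch of the NDVI-based yield regression implied above, assuming red and near-infrared reflectance bands have been extracted from the Landsat or FSDAF-fused imagery; the band arrays and plot-level values are placeholders, not the study's data.

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

red = np.load("red_band.npy")
nir = np.load("nir_band.npy")
ndvi = (nir - red) / (nir + red + 1e-9)    # per-pixel NDVI

plot_ndvi = np.load("plot_mean_ndvi.npy")  # mean NDVI per cotton plot
plot_yield = np.load("plot_yield.npy")     # measured cotton yield per plot

r, _ = pearsonr(plot_ndvi, plot_yield)     # the abstract reports r above 0.6
print("Pearson r:", r)

yield_model = LinearRegression().fit(plot_ndvi.reshape(-1, 1), plot_yield)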


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

Abstract The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions attributed by relevance heatmaps using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that were reflective of those who are healthy versus people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished PwMS from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
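To make the transfer-learning idea concrete, the sketch below fine-tunes a 1D CNN pretrained on a HAR task for a healthy-versus-PwMS classification head while freezing the convolutional layers; the architecture, layer names, and weight file are illustrative assumptions, not the authors' exact network.

import torch
import torch.nn as nn

class HarCNN(nn.Module):
    def __init__(self, n_channels=3, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                   # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = HarCNN()
model.load_state_dict(torch.load("har_pretrained.pt"))  # hypothetical HAR weights

for p in model.features.parameters():       # keep the HAR-learned filters fixed
    p.requires_grad = False
model.classifier = nn.Linear(64, 2)         # new head: healthy vs. PwMS

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)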


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4520
Author(s):  
Luis Lopes Chambino ◽  
José Silvestre Silva ◽  
Alexandre Bernardino

Facial recognition is a method of identifying or authenticating the identity of people through their faces. Nowadays, facial recognition systems that use multispectral images achieve better results than those that use only visible-spectrum images. In this work, a novel architecture for facial recognition that uses multiple deep convolutional neural networks and multispectral images is proposed. A domain-specific transfer-learning methodology applied to a deep neural network pre-trained on RGB images is shown to generalize well to the multispectral domain. We also propose a skin detector module for forgery detection. Several experiments were planned to assess the performance of our methods. First, we evaluated the performance of the forgery detection module using face masks and coverings of different materials. A second study was carried out with the objective of tuning the parameters of our domain-specific transfer-learning methodology, in particular which layers of the pre-trained network should be retrained to obtain good adaptation to multispectral images. A third study was conducted to evaluate the performance of support vector machine (SVM) and k-nearest neighbor classifiers using the embeddings obtained from the trained neural network. Finally, we compared the proposed method with other state-of-the-art approaches. The experimental results show performance improvements on the Tufts and CASIA NIR-VIS 2.0 multispectral databases, with rank-1 scores of 99.7% and 99.8%, respectively.
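The embedding-plus-classifier stage can be sketched as below, using a generic torchvision backbone as a stand-in for the multispectral-adapted network described in the abstract; the tensor files and backbone choice are assumptions.

import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()           # expose 512-D face embeddings
backbone.eval()

@torch.no_grad()
def embed(batch):                           # batch: (N, 3, 224, 224) tensor
    return backbone(batch).numpy()

X_train = embed(torch.load("train_faces.pt"))    # hypothetical face tensors
y_train = torch.load("train_labels.pt").numpy()  # subject identities

clf = SVC(kernel="linear").fit(X_train, y_train)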


2021 ◽  
Author(s):  
Jamal Ahmadov

Abstract The Tuscaloosa Marine Shale (TMS) formation is a clay- and liquid-rich emerging shale play across central Louisiana and southwest Mississippi with recoverable resources of 1.5 billion barrels of oil and 4.6 trillion cubic feet of gas. The formation poses numerous challenges due to its high average clay content (50 wt%) and rapidly changing mineralogy, making the selection of fracturing candidates a difficult task. While brittleness plays an important role in screening potential intervals for hydraulic fracturing, typical brittleness estimation methods require geomechanical and mineralogical properties from costly laboratory tests. Machine learning (ML) can be employed to generate synthetic brittleness logs and may therefore serve as an inexpensive and fast alternative to current techniques. In this paper, we propose the use of machine learning to predict the brittleness index of the Tuscaloosa Marine Shale from conventional well logs. We trained ML models on a dataset containing conventional and brittleness index logs from 8 wells. The latter were estimated either from geomechanical logs or from log-derived mineralogy. Moreover, to ensure mechanical data reliability, dynamic-to-static conversion ratios were applied to Young's modulus and Poisson's ratio. The predictor features included neutron porosity, density, and compressional slowness logs to account for the petrophysical and mineralogical character of the TMS. The brittleness index was predicted using algorithms such as Linear, Ridge, and Lasso Regression, K-Nearest Neighbors, Support Vector Machine (SVM), Decision Tree, Random Forest, AdaBoost, and Gradient Boosting. Models were shortlisted based on the Root Mean Square Error (RMSE) value and fine-tuned using the Grid Search method with a specific set of hyperparameters for each model. Overall, Gradient Boosting and Random Forest outperformed the other algorithms, showing an average error reduction of 5%, a normalized RMSE of 0.06, and an R-squared value of 0.89. Gradient Boosting was chosen to evaluate the test set and successfully predicted the brittleness index with a normalized RMSE of 0.07 and an R-squared value of 0.83. This paper presents the practical use of machine learning to evaluate brittleness in a cost- and time-effective manner and can further provide valuable insights into the optimization of completions in the TMS. The proposed ML model can be used as a tool for initial screening of fracturing candidates and selection of fracturing intervals in other clay-rich and heterogeneous shale formations.
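A hedged sketch of the model-selection step follows: Gradient Boosting tuned by grid search on RMSE over the well-log predictors named in the abstract. The file name, column names, and hyperparameter grid are illustrative assumptions, not the paper's configuration.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

logs = pd.read_csv("tms_logs.csv")          # hypothetical well-log table
X = logs[["neutron_porosity", "density", "compressional_slowness"]]
y = logs["brittleness_index"]

grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300],
                "learning_rate": [0.05, 0.1],
                "max_depth": [2, 3]},
    scoring="neg_root_mean_squared_error",
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_, "CV RMSE:", -grid.best_score_)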


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Osama Siddig ◽  
Ahmed Farid Ibrahim ◽  
Salaheldin Elkatatny

Unconventional resources have recently gained a lot of attention, and as a consequence, there has been an increase in research interest in predicting total organic carbon (TOC) as a crucial quality indicator. TOC is commonly measured experimentally; however, due to sampling restrictions, obtaining continuous data on TOC is difficult. Therefore, different empirical correlations for TOC have been presented. However, there are concerns about the generalization and accuracy of these correlations. In this paper, different machine learning (ML) techniques were utilized to develop models that predict TOC from well logs, including formation resistivity (FR), spontaneous potential (SP), sonic transit time (Δt), bulk density (RHOB), neutron porosity (CNP), gamma ray (GR), and spectral logs of thorium (Th), uranium (Ur), and potassium (K). Over 1250 data points from the Devonian Duvernay shale were utilized to create and validate the models. These datasets were obtained from three wells; the first was used to train the models, while the datasets from the other two wells were utilized to test and validate them. Support vector machine (SVM), random forest (RF), and decision tree (DT) were the ML approaches tested, and their predictions were contrasted with three empirical correlations. The parameters of the various ML methods were tuned to ensure the best possible accuracy in terms of correlation coefficient (R) and average absolute percentage error (AAPE) between the actual and predicted TOC. The three ML methods yielded good matches; however, the RF-based model had the best performance. The RF model was able to predict the TOC for the different datasets with R values ranging between 0.93 and 0.99 and AAPE values less than 14%. In terms of average error, the ML-based models outperformed the three empirical correlations. This study shows the capability and robustness of ML models to predict total organic carbon from readily available logging data without the need for core analysis or additional well interventions.
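A minimal sketch of the random-forest TOC model and the two reported metrics (R and AAPE) is shown below; the CSV files and log mnemonics used as column names are assumptions for clarity, not the study's data files.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

train = pd.read_csv("well1_logs.csv")       # training well
test = pd.read_csv("well2_logs.csv")        # one of the validation wells
features = ["FR", "SP", "DT", "RHOB", "CNP", "GR", "TH", "UR", "K"]

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(train[features], train["TOC"])
pred = rf.predict(test[features])

r = np.corrcoef(test["TOC"], pred)[0, 1]    # correlation coefficient R
aape = np.mean(np.abs((test["TOC"] - pred) / test["TOC"])) * 100
print(f"R = {r:.2f}, AAPE = {aape:.1f}%")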


Author(s):  
Aldjia Boucetta ◽  
Leila Boussaad

Finger-vein identification is a biometric technology that uses vein patterns in the human finger to identify people. In recent years, it has received increasing attention due to its tremendous advantages compared to fingerprint characteristics. Moreover, deep convolutional neural networks (Deep-CNN) have proved highly successful for feature extraction in the finger-vein area. Most of the proposed works focus on new Convolutional Neural Network (CNN) models, which require huge databases for training; a solution that may be more practicable in real-world applications is to reuse pretrained Deep-CNN models. In this paper, a finger-vein identification system is proposed that uses a pretrained SqueezeNet Deep-CNN model as a feature extractor for the left and right finger-vein patterns. It then combines these deep features using feature-level Discriminant Correlation Analysis (DCA) to reduce feature dimensions and retain the most relevant features. Finally, these composite feature vectors are used as input data for a Support Vector Machine (SVM) classifier in the identification stage. The method is tested on two widely available finger-vein databases, namely SDUMLA-HMT and FV-USM. Experimental results show that the proposed finger-vein identification system achieves significantly high mean accuracy rates.
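A hedged sketch of this identification pipeline: SqueezeNet embeddings from the left and right finger-vein images are fused and fed to an SVM. DCA is not available in scikit-learn, so plain concatenation stands in for the feature-level fusion, and the tensor files are hypothetical.

import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

squeezenet = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
squeezenet.eval()

@torch.no_grad()
def vein_features(imgs):                    # imgs: (N, 3, 224, 224) tensor
    return squeezenet.features(imgs).mean(dim=(2, 3)).numpy()   # (N, 512)

left = vein_features(torch.load("left_veins.pt"))    # hypothetical tensors
right = vein_features(torch.load("right_veins.pt"))
fused = np.concatenate([left, right], axis=1)        # stand-in for DCA fusion

labels = torch.load("subject_ids.pt").numpy()
clf = SVC(kernel="linear").fit(fused, labels)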

