Assessment of Texture Features for Bermudagrass (Cynodon dactylon) Detection in Sugarcane Plantations

Drones ◽  
2019 ◽  
Vol 3 (2) ◽  
pp. 36 ◽  
Author(s):  
Cesare Di Girolamo-Neto ◽  
Ieda Del’Arco Sanches ◽  
Alana Kasahara Neves ◽  
Victor Hugo Rohden Prudente ◽  
Thales Sehn Körting ◽  
...  

Sugarcane products contribute significantly to the Brazilian economy, generating U.S. $12.2 billion in revenue in 2018. Identifying and monitoring factors that induce yield reduction, such as weed occurrence, is thus imperative. The detection of Bermudagrass in sugarcane crops using remote sensing data, however, is a challenge considering their spectral similarity. To overcome this limitation, this paper aims to explore the potential of texture features derived from images acquired by an optical sensor onboard an unmanned aerial vehicle (UAV) to detect Bermudagrass in sugarcane. Aerial images with a spatial resolution of 2 cm were acquired from a sugarcane field in Brazil. The Green-Red Vegetation Index and several texture metrics derived from the gray-level co-occurrence matrix were calculated to perform an automatic classification using a random forest algorithm. Adding texture metrics to the classification process improved the overall accuracy from 83.00% to 92.54%, and this improvement was greater for larger window sizes, since they represented a texture transition between the two targets. Production losses induced by Bermudagrass presence reached 12.1 t·ha−1 in the study site. This study not only demonstrated the capacity of UAV images to overcome the well-known limitation of detecting Bermudagrass in sugarcane crops, but also highlighted the importance of texture for high-accuracy quantification of weed invasion in sugarcane crops.
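As a sketch of the feature pipeline described above, the snippet below computes the Green-Red Vegetation Index and a normalized gray-level co-occurrence matrix with one Haralick statistic (contrast) on a toy patch. The 8-level quantization, the displacement, and the patch size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def grvi(green, red, eps=1e-9):
    """Green-Red Vegetation Index: (G - R) / (G + R)."""
    green, red = green.astype(float), red.astype(float)
    return (green - red) / (green + red + eps)

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel displacement (dx, dy),
    normalized to a joint probability table."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast of a normalized GLCM."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Toy two-band patch: quantize the GRVI map to 8 gray levels, then
# compute texture on the quantized index image.
rng = np.random.default_rng(0)
green = rng.integers(60, 200, (16, 16))
red = rng.integers(40, 180, (16, 16))
v = grvi(green, red)
q = np.digitize(v, np.linspace(v.min(), v.max(), 9)[1:-1])
p = glcm(q, dx=1, dy=0, levels=8)
```

In the study these statistics would be computed per moving window and fed, together with the index itself, to the random forest classifier.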

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jing Zhou ◽  
Huawei Mou ◽  
Jianfeng Zhou ◽  
Md Liakat Ali ◽  
Heng Ye ◽  
...  

Soybean is sensitive to flooding stress, which may result in poor seed quality and significant yield reduction. Soybean production under flooding could be sustained by developing flood-tolerant cultivars through breeding programs. Conventionally, soybean tolerance to flooding in field conditions is evaluated by visually rating the shoot injury/damage due to flooding stress, which is labor-intensive and subject to human error. Recent developments in field high-throughput phenotyping technology have shown great potential for measuring crop traits and detecting crop responses to abiotic and biotic stresses. The goal of this study was to investigate the potential of estimating flood-induced soybean injuries using UAV-based image features collected at different flight heights. The flooding injury score (FIS) of 724 soybean breeding plots was rated visually by breeders when the soybean showed obvious injury symptoms. Aerial images were taken on the same day using a five-band multispectral camera and an infrared (IR) thermal camera at 20, 50, and 80 m above ground. Five image features, i.e., canopy temperature, normalized difference vegetation index, canopy area, width, and length, were extracted from the images at the three flight heights. A deep learning model was used to classify the soybean breeding plots into the five FIS ratings based on the extracted image features. Results show that the image features differed significantly across the three flight heights. The best classification performance, 0.9 for the five-level FIS, was obtained by the model developed using image features collected at 20 m. The results indicate that the proposed method is very promising for estimating FIS in soybean breeding.
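The five plot-level image features listed above can be sketched as follows. The NDVI threshold defining the canopy mask (0.3) and the synthetic plot are illustrative assumptions, not the study's actual processing chain.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def plot_features(nir, red, thermal, ndvi_thresh=0.3):
    """Extract the five plot-level features used for FIS classification:
    canopy temperature, mean NDVI, and canopy area/width/length (pixels)."""
    v = ndvi(nir, red)
    canopy = v > ndvi_thresh                # crude canopy mask
    ys, xs = np.nonzero(canopy)
    if ys.size == 0:                        # fully flooded / bare plot
        return dict(temp=float("nan"), ndvi=0.0, area=0, width=0, length=0)
    return dict(
        temp=float(thermal[canopy].mean()), # canopy temperature
        ndvi=float(v[canopy].mean()),
        area=int(canopy.sum()),
        width=int(xs.max() - xs.min() + 1),
        length=int(ys.max() - ys.min() + 1),
    )

# Synthetic 10x10 plot with a 6x4 healthy canopy block.
nir = np.full((10, 10), 50.0)
nir[2:8, 3:7] = 200.0
red = np.full((10, 10), 100.0)
thermal = np.full((10, 10), 30.0)
thermal[2:8, 3:7] = 25.0                    # canopy runs cooler
f = plot_features(nir, red, thermal)
```

In the study, one such feature vector per breeding plot would be fed to the deep learning classifier.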


2019 ◽  
Vol 55 (9) ◽  
pp. 1329-1337
Author(s):  
N. V. Gopp ◽  
T. V. Nechaeva ◽  
O. A. Savenkov ◽  
N. V. Smirnova ◽  
V. V. Smirnov

Author(s):  
Weiguo Cao ◽  
Marc J. Pomeroy ◽  
Yongfeng Gao ◽  
Matthew A. Barish ◽  
Almas F. Abbasi ◽  
...  

Texture features have played an essential role in the field of medical imaging for computer-aided diagnosis. The gray-level co-occurrence matrix (GLCM)-based texture descriptor has emerged as one of the most successful feature sets for these applications. This study aims to increase the potential of these features by introducing multi-scale analysis into the construction of the GLCM texture descriptor. We first introduce a new parameter, stride, to extend the definition of the GLCM. We then propose three multi-scaling GLCM models according to its three parameters: (1) a learning model by multiple displacements, (2) a learning model by multiple strides (LMS), and (3) a learning model by multiple angles. These models increase the texture information by introducing more texture patterns and mitigate the direction-sparsity and dense-sampling problems present in the traditional Haralick model. To further analyze the three parameters, we test the three models by performing classification on a dataset of 63 large polyp masses obtained from computed tomographic colonography, consisting of 32 adenocarcinomas and 31 benign adenomas. Finally, the proposed methods are compared to several typical GLCM texture descriptors and one deep learning model. LMS obtains the highest performance, raising the prediction power, measured by the area under the receiver operating characteristic curve, to 0.9450 with a standard deviation of 0.0285, a significant improvement.
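The stride parameter and the LMS descriptor can be sketched as below. Treating stride as the sampling step between reference pixels is our reading of the abstract, not a confirmed definition from the paper, and the displacement is fixed to horizontal for brevity.

```python
import numpy as np

def glcm_stride(img, d=1, stride=1, levels=4):
    """GLCM where reference pixels are sampled every `stride` pixels
    (our reading of the paper's stride parameter); the co-occurring
    pixel sits at horizontal distance d."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(0, h, stride):
        for x in range(0, w - d, stride):
            m[img[y, x], img[y, x + d]] += 1
    return m / m.sum()

def lms_descriptor(img, d=1, strides=(1, 2, 3), levels=4):
    """Learning-by-Multiple-Strides (LMS) sketch: concatenate the GLCMs
    computed at several strides into one multi-scale descriptor."""
    return np.concatenate([glcm_stride(img, d, s, levels).ravel()
                           for s in strides])

# 8x8 test image with 4 gray levels repeating horizontally.
img = np.arange(64).reshape(8, 8) % 4
desc = lms_descriptor(img)
```

Larger strides coarsen the sampling grid, so each stride contributes texture statistics at a different effective scale, which is the multi-scale idea behind the three proposed models.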


2021 ◽  
Vol 13 (6) ◽  
pp. 1131
Author(s):  
Tao Yu ◽  
Pengju Liu ◽  
Qiang Zhang ◽  
Yi Ren ◽  
Jingning Yao

Detecting forest degradation from satellite observation data is of great significance in revealing the process of decreasing forest quality and in giving a better understanding of regional or global carbon emissions and their feedbacks with climate change. In this paper, a quick and applicable approach was developed for monitoring forest degradation in the Three-North Forest Shelterbelt in China from multi-scale remote sensing data. Firstly, the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Ratio Vegetation Index (RVI), Leaf Area Index (LAI), Fraction of Photosynthetically Active Radiation (FPAR) and Net Primary Production (NPP) from remote sensing data were selected as the indicators to describe forest degradation. Then, multi-scale forest degradation maps were obtained by adopting a new classification method using time-series MODerate Resolution Imaging Spectroradiometer (MODIS) and Landsat Enhanced Thematic Mapper Plus (ETM+) images, and were validated with ground survey data. Finally, the criteria and indicators for monitoring forest degradation from remote sensing data were discussed, and the uncertainty of the method was analyzed. The results of this paper indicate that multi-scale remote sensing data have great potential in detecting regional forest degradation.
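The first three spectral indices named above have standard closed forms, shown here with the common MODIS-style EVI coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1); band values are surface reflectances.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def rvi(nir, red):
    """Ratio Vegetation Index (simple NIR/red band ratio)."""
    return nir / red

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with standard MODIS coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Typical healthy-vegetation reflectances.
n = ndvi(0.5, 0.1)
r = rvi(0.5, 0.1)
e = evi(0.5, 0.1, 0.05)
```

A sustained downward trend in these indicators over the time series is what the degradation mapping in the paper looks for; LAI, FPAR and NPP come from dedicated satellite products rather than simple band arithmetic.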


2021 ◽  
pp. 1-18
Author(s):  
R.S. Rampriya ◽  
Sabarinathan ◽  
R. Suganya

In the near future, the combination of UAVs (unmanned aerial vehicles) and computer vision will play a vital role in periodically monitoring the condition of railroads to ensure passenger safety. The most significant module in railroad visual processing is obstacle detection, where the critical case is an obstacle fallen near the track, inside or outside the gage. This motivates detecting and segmenting the railroad into three key regions: gage inside, rails, and background. Traditional railroad segmentation methods depend on either manual feature selection or expensive dedicated devices such as LiDAR, which is typically less reliable for railroad semantic segmentation. Moreover, although cameras mounted on moving vehicles such as drones can produce high-resolution images, segmenting precise pixel information from those aerial images is challenging due to the clutter of the railroad surroundings. We present RSNet, a multi-level feature-fusion algorithm for segmenting railroad aerial images captured by UAV. It combines an attention-based, robust and computationally efficient convolutional encoder for feature extraction with a modified residual decoder for segmentation, which considers only essential features and produces less overhead with higher performance, even on real-time railroad drone imagery. The network is trained and tested on a railroad scenic view segmentation dataset (RSSD), which we built from real-time UAV images, and achieves a 0.973 Dice coefficient and 0.94 Jaccard index on test data, outperforming existing approaches such as the residual unit and residual squeeze net.
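The two reported segmentation metrics, Dice coefficient and Jaccard index, can be computed from binary masks as below; the small epsilon guarding empty masks is an implementation choice, not from the paper.

```python
import numpy as np

def dice(pred, target, eps=1e-9):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard(pred, target, eps=1e-9):
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Tiny example: predicted vs. ground-truth masks for one region.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
```

The two metrics are monotonically related (D = 2J / (1 + J)), so Dice is always at least as large as Jaccard, consistent with the 0.973 vs. 0.94 figures reported above.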


2018 ◽  
Vol 10 (12) ◽  
pp. 2018 ◽  
Author(s):  
Ying She ◽  
Reza Ehsani ◽  
James Robbins ◽  
Josué Nahún Leiva ◽  
Jim Owen

Frequent inventory data of container nurseries is needed by growers to ensure proper management and marketing strategies. In this paper, inventory data are estimated from aerial images. Since there are thousands of nursery species, it is difficult to find a generic classification algorithm for all cases. In this paper, the development of classification methods was confined to three representative categories: green foliage, yellow foliage, and flowering plants. Vegetation index thresholding and the support vector machine (SVM) were used for classification. Classification accuracies greater than 97% were obtained for each case. Based on the classification results, an algorithm based on canopy area mapping was built for counting. The effects of flight altitude, container spacing, and ground cover type were evaluated. Results showed that container spacing and interaction of container spacing with ground cover type have a significant effect on counting accuracy. To mimic the practical shipping and moving process, incomplete blocks with different voids were created. Results showed that the more plants removed from the block, the higher the accuracy. The developed algorithm was tested on irregular- or regular-shaped plants and plants with and without flowers to test the stability of the algorithm, and accuracies greater than 94% were obtained.
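A minimal sketch of index thresholding followed by canopy-area counting: here the Excess Green index stands in for whichever vegetation index the paper thresholds, and the threshold and minimum blob area are illustrative assumptions.

```python
import numpy as np

def count_blobs(mask, min_area=5):
    """Count 4-connected components in a boolean mask (each blob ≈ one
    container plant), ignoring blobs smaller than min_area pixels."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, area = [(sy, sx)], 0   # flood fill one component
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count

def count_plants(green, red, blue, exg_thresh=20.0, min_area=5):
    """Excess-Green thresholding (a stand-in for the paper's index),
    then connected-component counting of canopy blobs."""
    exg = 2.0 * green - red - blue
    return count_blobs(exg > exg_thresh, min_area)

# Synthetic nursery block with two separated plants.
green = np.full((10, 10), 50.0)
red = np.full((10, 10), 50.0)
blue = np.full((10, 10), 50.0)
green[1:4, 1:4] = 120.0
green[6:9, 5:9] = 120.0
```

Touching canopies merge into one blob, which is why container spacing has the significant effect on counting accuracy reported above.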


2020 ◽  
Vol 43 (1) ◽  
pp. 29-45
Author(s):  
Alex Noel Joseph Raj ◽  
Ruban Nersisson ◽  
Vijayalakshmi G. V. Mahesh ◽  
Zhemin Zhuang

The nipple is a vital landmark in breast lesion diagnosis. Although there are advanced computer-aided detection (CADe) systems for nipple detection in breast mediolateral oblique (MLO) views of mammogram images, few academic works address the coronal views of breast ultrasound (BUS) images. This paper presents a novel CADe system to locate the Nipple Shadow Area (NSA) in ultrasound images. Here, Hu moments and the gray-level co-occurrence matrix (GLCM) were calculated through an iterative sliding window to extract shape and texture features. These features are then concatenated and fed into an Artificial Neural Network (ANN) to obtain probable NSAs. Later, contour features, such as shape complexity through fractal dimension, edge distance from the periphery, and contour area, were computed and passed into a Support Vector Machine (SVM) to identify the accurate NSA in each case. We built our own coronal-plane BUS dataset, consisting of 64 images from 13 patients. The test results show that the proposed CADe system achieves 91.99% accuracy, 97.55% specificity, 82.46% sensitivity and an 88% F-score on our dataset.
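The shape-plus-texture descriptor for one sliding-window position might look like the sketch below: the first four Hu invariant moments concatenated with two GLCM statistics. The moment count, the chosen GLCM statistics and the quantization are assumptions for illustration, not the paper's exact feature set.

```python
import numpy as np

def hu_moments(img):
    """First four Hu invariant moments of a grayscale patch (enough to
    sketch the shape part of the descriptor)."""
    img = img.astype(float)
    y, x = np.indices(img.shape)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                       # central moment
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):                      # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    h3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    h4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2
    return np.array([h1, h2, h3, h4])

def window_descriptor(patch, levels=8):
    """Concatenate shape (Hu) and texture (GLCM energy, contrast)
    features for one sliding-window position, as fed to the ANN."""
    q = (patch.astype(float) / patch.max() * (levels - 1)).astype(int)  # assumes a non-blank patch
    m = np.zeros((levels, levels))
    for y in range(q.shape[0]):
        for x in range(q.shape[1] - 1):                # horizontal pairs
            m[q[y, x], q[y, x + 1]] += 1
    p = m / m.sum()
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    contrast = ((i - j) ** 2 * p).sum()
    return np.concatenate([hu_moments(patch), [energy, contrast]])

rng = np.random.default_rng(1)
patch = rng.integers(0, 255, (12, 12))
d = window_descriptor(patch)
```

Hu moments are rotation invariant, which is what makes them robust shape features for a shadow region whose orientation varies between scans.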


2014 ◽  
Vol 668-669 ◽  
pp. 1041-1044
Author(s):  
Lin Lin Song ◽  
Qing Hu Wang ◽  
Zhi Li Pei

This paper first studies texture features. We construct a gray-difference primitive co-occurrence matrix to extract texture features by combining statistical methods with structural ones. The experimental results show that the features of the gray-difference primitive co-occurrence matrix are more delicate than those of the traditional gray-level co-occurrence matrix.
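One plausible reading of the gray-difference primitive co-occurrence matrix is a co-occurrence matrix built over a gray-difference image rather than over raw gray levels; the sketch below implements that reading and is an assumption, since the abstract does not give the construction.

```python
import numpy as np

def gray_difference(img):
    """Horizontal gray-level difference image (the 'primitive' here:
    absolute difference between neighboring pixels)."""
    return np.abs(np.diff(img.astype(int), axis=1))

def diff_cooccurrence(img, levels=4):
    """Co-occurrence matrix over the gray-difference image instead of
    raw gray levels; differences are clipped into `levels` bins."""
    d = np.clip(gray_difference(img), 0, levels - 1)
    m = np.zeros((levels, levels))
    for y in range(d.shape[0]):
        for x in range(d.shape[1] - 1):
            m[d[y, x], d[y, x + 1]] += 1
    return m / m.sum()

rng = np.random.default_rng(2)
img = rng.integers(0, 8, (8, 8))
p = diff_cooccurrence(img)
```

Because differencing responds to local structure before co-occurrence statistics are gathered, such a matrix can resolve finer texture variation than the plain GLCM, which is the paper's claim.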

