Deep Learning and Adaptive Graph-Based Growing Contours for Agricultural Field Extraction

2020 ◽  
Vol 12 (12) ◽  
pp. 1990
Author(s):  
Matthias P. Wagner ◽  
Natascha Oppelt

Field mapping and information on agricultural landscapes are of increasing importance for many applications. Monitoring schemes and national cadasters provide a rich source of information, but their maintenance and regular updating are costly and labor-intensive. Automated mapping of fields based on remote sensing imagery may aid in this task and allow for faster and more regular observation. Although remote sensing has seen extensive use in agricultural research topics, such as plant health monitoring, crop type classification, yield prediction, and irrigation, field delineation and extraction have seen comparatively little research interest. In this study, we present a field boundary detection technique based on deep learning and a variety of image features, and combine it with the graph-based growing contours (GGC) method to extract agricultural fields in a study area in northern Germany. The boundary detection step only requires red, green, and blue (RGB) data and is therefore largely independent of the sensor used. We compare different image features based on color and luminosity information and evaluate their usefulness for the task of field boundary detection. A model based on texture metrics, gradient information, Hessian matrix eigenvalues, and local statistics showed good results, with accuracies up to 88.2%, an area under the ROC curve (AUC) of up to 0.94, and an F1 score of up to 0.88. The exclusive use of these universal image features may also facilitate transferability to other regions. We further present modifications to the GGC method intended to aid in upscaling through process acceleration with a minimal effect on results. We combined the boundary detection results with the GGC method for field polygon extraction. Results were promising, with the new GGC version performing similarly to or better than the original version while running 1.3× to 2.3× faster on different subsets and input complexities.
Further research may explore other applications of the GGC method outside agricultural remote sensing and field extraction.
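The Hessian-eigenvalue features used in the boundary detection model above can be illustrated with a short sketch. This is not the authors' implementation; finite differences via NumPy stand in for whatever derivative filters the paper actually uses:

```python
import numpy as np

def hessian_eigenvalues(gray):
    """Per-pixel eigenvalues of the 2x2 Hessian of a grayscale image.

    Ridge-like structures such as field boundaries produce one
    large-magnitude eigenvalue, which is why such features can help
    a boundary classifier.
    """
    gy, gx = np.gradient(gray.astype(float))
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric matrix [[hxx, hxy], [hxy, hyy]]
    # in closed form: trace/2 +- sqrt((trace/2)^2 - det).
    trace = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum((trace / 2) ** 2 - det, 0.0))
    return trace / 2 + disc, trace / 2 - disc
```

The two eigenvalue maps would then be stacked with texture, gradient, and local-statistics channels as classifier input.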

2020 ◽  
Vol 12 (21) ◽  
pp. 3524
Author(s):  
Feng Gao ◽  
Martha C. Anderson ◽  
W. Dean Hively

Cover crops are planted during the off-season to protect the soil and improve watershed management. The ability to map cover crop termination dates over agricultural landscapes is essential for quantifying conservation practice implementation, and enabling estimation of biomass accumulation during the active cover period. Remote sensing detection of end-of-season (termination) for cover crops has been limited by the lack of high spatial and temporal resolution observations and methods. In this paper, a new within-season termination (WIST) algorithm was developed to map cover crop termination dates using the Vegetation and Environment monitoring New Micro Satellite (VENµS) imagery (5 m, 2 days revisit). The WIST algorithm first detects the downward trend (senescent period) in the Normalized Difference Vegetation Index (NDVI) time-series and then refines the estimate to the two dates with the most rapid rate of decrease in NDVI during the senescent period. The WIST algorithm was assessed using farm operation records for experimental fields at the Beltsville Agricultural Research Center (BARC). The crop termination dates extracted from VENµS and Sentinel-2 time-series in 2019 and 2020 were compared to the recorded termination operation dates. The results show that the termination dates detected from the VENµS time-series (aggregated to 10 m) agree with the recorded termination dates with a mean absolute difference of 2 days and uncertainty of 4 days. The operational Sentinel-2 time-series (10 m, 4–5 days revisit) also detected termination dates at BARC but had 7% missing and 10% false detections due to less frequent temporal observations. Near-real-time simulation using the VENµS time-series shows that the average lag times of termination detection are about 4 days for VENµS and 8 days for Sentinel-2, not including satellite data latency.
The study demonstrates the potential for operational mapping of cover crop termination using high temporal and spatial resolution remote sensing data.
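The two-step logic of the WIST algorithm, first finding the downward (senescent) trend and then the steepest NDVI drop within it, can be sketched for a single pixel's time series. The function name and the single-step simplification are ours, not the paper's:

```python
import numpy as np

def wist_termination(dates, ndvi):
    """Toy version of the WIST idea: locate the senescent (downward)
    stretch of an NDVI time series, then return the two observation
    dates bracketing the steepest NDVI drop within it."""
    ndvi = np.asarray(ndvi, dtype=float)
    diffs = np.diff(ndvi)
    # Senescent period: observation steps where NDVI is decreasing.
    down = np.where(diffs < 0)[0]
    if down.size == 0:
        return None  # no decline detected yet (within-season case)
    # Steepest single-step drop inside the downward stretch.
    i = down[np.argmin(diffs[down])]
    return dates[i], dates[i + 1]
```

In the real algorithm this runs per pixel on 5 m VENµS (or 10 m Sentinel-2) time series, and the bracketing dates localize the termination operation.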


2021 ◽  
Vol 13 (4) ◽  
pp. 722
Author(s):  
Alireza Taravat ◽  
Matthias P. Wagner ◽  
Rogerio Bonifacio ◽  
David Petit

Accurate spatial information on agricultural fields is important for providing actionable information to farmers, managers, and policymakers. On the other hand, the automated detection of field boundaries is a challenging task due to their small size and irregular shape, and to the use of mixed-cropping systems that leave field boundaries vaguely defined. In this paper, we propose a strategy for field boundary detection based on the fully convolutional network architecture called ResU-Net. The benefits of this model are two-fold: first, residual units ease the training of deep networks; second, rich skip connections within the network facilitate information propagation, allowing us to design networks with fewer parameters but better performance in comparison with the traditional U-Net model. An extensive experimental analysis is performed over the whole of Denmark using Sentinel-2 images, comparing several U-Net and ResU-Net field boundary detection algorithms. The presented results show that the ResU-Net model performs better, with an average F1 score of 0.90 and an average Jaccard coefficient of 0.80, compared to the U-Net model with an average F1 score of 0.88 and an average Jaccard coefficient of 0.77.
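The two evaluation metrics reported above, F1 score and Jaccard coefficient, are standard quantities computed from binary boundary masks; a minimal sketch (not tied to the paper's evaluation code):

```python
import numpy as np

def f1_and_jaccard(pred, truth):
    """F1 score and Jaccard coefficient for binary masks, the two
    metrics used to compare U-Net and ResU-Net boundary maps."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    jac = tp / (tp + fp + fn) if tp else 0.0
    return f1, jac
```

Note the fixed relation jac = f1 / (2 - f1), which is consistent with the reported pairs (0.90, 0.80) and roughly with (0.88, 0.77).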


2021 ◽  
Vol 11 (24) ◽  
pp. 11659
Author(s):  
Sheng-Chieh Hung ◽  
Hui-Ching Wu ◽  
Ming-Hseng Tseng

Through the continued development of technology, applying deep learning to remote sensing scene classification tasks has become quite mature. The keys to effective deep learning model training are model architecture, training strategies, and image quality. Previous studies by the authors using explainable artificial intelligence (XAI) showed that incorrectly classified images can be corrected when the model has adequate capacity, provided the image quality is first corrected manually; however, manual image quality correction takes a significant amount of time. Therefore, this research integrates techniques such as noise reduction, sharpening, partial color area equalization, and color channel adjustment to evaluate a set of automated strategies for enhancing image quality. These methods can enhance details, light and shadow, color, and other image features, which helps the deep learning model extract image features and further improves classification performance. In this study, we demonstrate that the proposed image quality enhancement strategy and deep learning techniques can effectively improve the scene classification performance of remote sensing images and outperform previous state-of-the-art approaches.
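The kind of automated pipeline described, noise reduction, sharpening, and channel adjustment applied in sequence, can be sketched as below. The specific operators (box blur, unsharp masking, per-channel min-max stretch) are our illustrative assumptions, not the paper's exact methods:

```python
import numpy as np

def enhance(img):
    """Illustrative automated enhancement for an HxWx3 image in
    [0, 255]: box blur for noise reduction, unsharp masking for
    detail, and a per-channel contrast stretch standing in for the
    color-channel adjustment step."""
    img = img.astype(float)
    # 3x3 box blur via nine shifted views of an edge-padded copy.
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    sharp = np.clip(img + 0.5 * (img - blur), 0, 255)  # unsharp mask
    # Per-channel min-max stretch (color channel adjustment).
    lo = sharp.min(axis=(0, 1), keepdims=True)
    hi = sharp.max(axis=(0, 1), keepdims=True)
    return (sharp - lo) / np.maximum(hi - lo, 1e-6) * 255.0
```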


Author(s):  
L. Meyer ◽  
F. Lemarchand ◽  
P. Sidiropoulos

Abstract. The accurate split of large areas of land into discrete fields is a crucial step for several agriculture-related remote sensing pipelines. This work aims to fully automate this tedious and resource-demanding process using a state-of-the-art deep learning algorithm with only a single Sentinel-2 image as input. The Mask R-CNN, which has forged its success upon instance segmentation for objects from everyday life, is adapted for the field boundary detection problem. Such model automatically generates closed geometries without any heavy post-processing. When tested with satellite imagery from Denmark, this tailored model correctly predicts field boundaries with an overall accuracy of 0.79. Besides, it demonstrates a robust knowledge generalisation with positive results over different geographies, as it gets an overall accuracy of 0.71 when used over areas in France.


Author(s):  
A. Crespin-Boucaud ◽  
V. Lebourgeois ◽  
D. Lo Seen ◽  
M. Castets ◽  
A. Bégué

Abstract. Smallholder agriculture provides 90% of primary food production in developing countries. Its mapping is thus a key element for national food security. Remote sensing is widely used for crop mapping, but it underperforms for smallholder agriculture due to several constraints, such as small field sizes, fragmented landscapes, highly variable cropping practices, and cloudy conditions. In this study, we developed an original approach combining remote sensing and spatial modelling to improve crop type mapping in complex agricultural landscapes. The spatial dynamics are modelled using Ocelet, a domain-specific language based on interaction graphs. The method combines high spatial resolution satellite imagery (Spot 6/7, to characterize the landscape structure through image segmentation), high revisit frequency time series (Sentinel-2, Landsat 8, to monitor the land dynamic processes), and spatiotemporal rules (STrules, to express the strategies and practices of local farmers). The method comprises three steps. First, each crop type is defined by a set of general STrules from which a model-based map of crop distribution probability is obtained. Second, a preliminary crop type map is produced using satellite image processing based on a combined Random Forest (RF) and Object-Based Image Analysis (OBIA) classification scheme, after which each geographical object is labelled with class membership probabilities. Finally, the STrules are applied in the model to identify objects whose classes are locally incompatible with known farming constraints and strategies. The result is a map of the spatial distribution of crop type mapping errors (omission or commission), which are subsequently corrected through the joint use of spatiotemporal rules and RF class membership probabilities. Combining remote sensing and spatial modelling thus provides a viable way to better characterize and monitor complex agricultural systems.
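The final correction step, applying rules to veto locally impossible classes and relabelling from the remaining RF probabilities, can be sketched as follows. The dictionary encoding of rules is hypothetical; the paper expresses them in the Ocelet language:

```python
def apply_strules(probs, rules):
    """Sketch of rule-based relabelling: for each object, drop classes
    that a spatiotemporal rule declares locally impossible, then keep
    the best remaining RF class-membership probability.

    probs: {object_id: {class_name: probability}}
    rules: {object_id: set of forbidden class names} (hypothetical
           encoding of the STrules' conclusions for each object)
    """
    labels = {}
    for obj_id, class_probs in probs.items():
        forbidden = rules.get(obj_id, set())
        allowed = {c: p for c, p in class_probs.items()
                   if c not in forbidden}
        # An object with no admissible class stays unlabelled.
        labels[obj_id] = max(allowed, key=allowed.get) if allowed else None
    return labels
```

An object classified as an irrigated crop on a slope with no water access, say, would have that class vetoed and fall back to its next most probable admissible class.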


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Kai Zhang ◽  
Chengquan Hu ◽  
Hang Yu

To address the problems that high-resolution remote sensing images contain many features and that a single feature description yields low classification accuracy, a remote sensing image land classification model based on deep learning is proposed from the perspective of ecological resource utilization. First, the remote sensing imagery obtained by the Gaofen-1 satellite is preprocessed, including multispectral data and panchromatic data. Then, color, texture, shape, and local features are extracted from the image data, and a feature-level image fusion method is used to associate these features and realize the fusion of remote sensing image features. Finally, the fused image features are input into a trained deep belief network (DBN) for processing, and the land type is obtained by a Softmax classifier. Experimental analysis of the proposed model, implemented on the Keras and TensorFlow platforms, shows that it can clearly classify all land types, and the overall accuracy, F1 value, and inference time of the classification results are 97.86%, 87.25%, and 128 ms, respectively, which are better than those of other comparative models.
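The last stage of the pipeline, a Softmax classifier over the fused feature vector, is standard and can be sketched directly. The linear layer here is a stand-in for the DBN's output layer, which the sketch omits:

```python
import numpy as np

def softmax_classify(fused_features, weights, biases):
    """Softmax classification of fused feature vectors.

    fused_features: (n_samples, n_features) array of fused features.
    weights, biases: parameters of the final linear layer (stand-in
    for the trained DBN's output layer).
    Returns predicted class indices and class probabilities.
    """
    logits = fused_features @ weights + biases
    logits = logits - logits.max(axis=-1, keepdims=True)  # stability
    exp = np.exp(logits)
    probs = exp / exp.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1), probs
```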


Author(s):  
Sumit Kaur

Abstract- Deep learning is an emerging research area in machine learning and pattern recognition, presented with the goal of moving machine learning closer to one of its original objectives, artificial intelligence. It tries to mimic the human brain, which is capable of processing and learning from complex input data and solving many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that attempt to model higher-level abstractions in data and learn hierarchical representations for classification. In recent years, it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering, and natural language processing. This paper presents a survey of different deep learning techniques for remote sensing image classification.


2019 ◽  
Vol 16 (9) ◽  
pp. 1343-1347 ◽  
Author(s):  
Yibo Sun ◽  
Qiaolin Zeng ◽  
Bing Geng ◽  
Xinwen Lin ◽  
Bilige Sude ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework typically requires a large number of parameters, which inevitably burdens the network with high computational cost. We propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining to solve the above problems. The proposed LFDN is designed as an encoder–decoder architecture. In the encoding stage, the image feature is reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. Then, a feature distillation normalization block is designed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information from the feature maps. In addition, an attention mechanism carries out an information fusion strategy between distillation modules and feature channels. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
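The channel-screening idea, weighting feature channels by learned importance before fusion, can be illustrated with a toy squeeze-and-gate step. This is a generic channel-attention sketch, not the LFDN's actual block:

```python
import numpy as np

def channel_attention(feat):
    """Toy channel-attention step of the kind used to screen valuable
    channel information: squeeze each channel to its global average,
    gate it through a sigmoid, and rescale the feature map.

    feat: (channels, height, width) array.
    """
    squeeze = feat.mean(axis=(1, 2))          # global average pool
    gate = 1.0 / (1.0 + np.exp(-squeeze))     # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]         # per-channel rescale
```

In a real network the gate would pass through small learned layers; here the raw pooled value stands in to keep the sketch self-contained.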


2021 ◽  
pp. 1-11
Author(s):  
Yaning Liu ◽  
Lin Han ◽  
Hexiang Wang ◽  
Bo Yin

Papillary thyroid carcinoma (PTC) is a common thyroid carcinoma. Many benign thyroid nodules have a papillary structure that is easily confused with PTC morphologically, so pathologists must spend considerable time on the differential diagnosis of PTC; relying on personal diagnostic experience is subjective, and it is difficult to obtain consistency among observers. To address this issue, we applied deep learning to the differential diagnosis of PTC and propose a histological image classification method for PTC based on an Inception Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, to expand the dataset and solve the problem of histological image color inconsistency, a pre-processing module was constructed that includes color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining an Inception network and a residual network to extract image features. Finally, the SVM was trained on image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method in the classification of PTC histological images.

