Extracting field boundaries from satellite imagery with a convolutional neural network to enable smart farming at scale

Author(s):  
Franz Waldner ◽  
Foivos Diakogiannis

Many of the promises of smart farming centre on assisting farmers to monitor their fields throughout the growing season. Precise field boundaries have thus become a prerequisite for field-level assessment. When farmers sign up with agricultural service providers, they are often asked for precise digital records of their boundaries. Unfortunately, this process remains largely manual, time-consuming and error-prone, which creates disincentives. There is also a growing number of applications in which remote crop monitoring with earth observation is used to estimate planted area and forecast yield. Automating the extraction of field boundaries would not only make it easier to bring farmers on board, fostering wider adoption of these services, but would also improve remote-sensing products and services. Several methods to extract field boundaries from satellite imagery have been proposed, but the apparent lack of field boundary data sets suggests low uptake, presumably because of expensive image preprocessing requirements and local, often arbitrary, tuning. Here, we introduce a novel approach with low image preprocessing requirements to extract field boundaries from satellite imagery. It frames the problem as semantic segmentation with three tasks designed to answer the following questions: 1) Does a given pixel belong to a field? 2) Is that pixel part of a field boundary? 3) What is the distance from that pixel to the closest field boundary? Closed field boundaries and individual fields can then be extracted by combining the answers to these three questions. The tasks are performed with ResUNet-a, a deep convolutional neural network with a fully connected UNet backbone that features dilated convolutions and conditioned inference. First, we characterise the model's performance at local scale. Using a single composite image from Sentinel-2 over South Africa, the model is highly accurate in mapping field extent, field boundaries and, consequently, individual fields. Replacing the monthly composite with a single-date image close to the compositing period marginally decreases accuracy. We then show that, without recalibration, ResUNet-a generalises well across resolutions (10 m to 30 m), sensors (Sentinel-2 to Landsat-8), space and time. Averaging model predictions from at least four images well distributed across the season is the key to coping with temporal variations in accuracy. Finally, we apply the lessons learned from these experiments to extract field boundaries for the whole Australian cropping region. To that end, we compare three ResUNet-a models trained with different data sets: field boundaries from Australia, field boundaries from overseas, and field boundaries from both Australia and overseas (transfer learning). By minimising image preprocessing requirements and replacing local, arbitrary decisions with data-driven ones, our approach is expected to facilitate the adoption of smart farming services and improve land management at scale.
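The abstract describes combining the three task outputs to recover individual fields. A minimal sketch of one plausible combination step, assuming thresholded extent and boundary probability maps (the paper's actual post-processing, which also uses the distance map, may differ): pixels inside the field extent but off a boundary form connected components, each a candidate field.

```python
import numpy as np
from scipy import ndimage

def extract_fields(extent_prob, boundary_prob, t_extent=0.5, t_boundary=0.5):
    """Combine extent and boundary predictions into individual fields.

    Pixels that are inside the field extent but not on a boundary form
    connected components; each labelled component is one candidate field.
    """
    interior = (extent_prob >= t_extent) & (boundary_prob < t_boundary)
    labels, n_fields = ndimage.label(interior)
    return labels, n_fields

# Toy example: a uniform extent split by a one-pixel boundary column.
extent = np.ones((4, 5))
boundary = np.zeros((4, 5))
boundary[:, 2] = 1.0
labels, n = extract_fields(extent, boundary)
# n == 2: the boundary column separates two fields
```

The distance-to-boundary output can further refine this, e.g. by seeding a watershed from distance maxima to split touching fields.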

2018 ◽  
Vol 10 (10) ◽  
pp. 1572 ◽  
Author(s):  
Chunping Qiu ◽  
Michael Schmitt ◽  
Lichao Mou ◽  
Pedram Ghamisi ◽  
Xiao Zhu

Global Local Climate Zone (LCZ) maps, indicating urban structures and land use, are crucial for Urban Heat Island (UHI) studies and also serve as starting points for better understanding the spatio-temporal dynamics of cities worldwide. However, reliable LCZ maps are not available on a global scale, hindering scientific progress across a range of disciplines that study the functionality of sustainable cities. As a first step towards large-scale LCZ mapping, this paper aims to provide guidance on data and feature choice. To this end, we evaluate the spectral reflectances and spectral indices of the globally available Sentinel-2 and Landsat-8 imagery, as well as the Global Urban Footprint (GUF) dataset, the OpenStreetMap (OSM) buildings and land-use layers, and the Visible Infrared Imaging Radiometer Suite (VIIRS)-based Nighttime Light (NTL) data, regarding their relevance for discriminating different LCZs. Using a residual convolutional neural network (ResNet), a systematic analysis of feature importance is performed with a manually labeled dataset containing nine cities located in Europe. Based on this investigation of data and feature choice, we propose a framework that fully exploits the available datasets. The results show that GUF, OSM and NTL can improve the classification accuracy of some LCZs with relatively few samples, and that Landsat-8 and Sentinel-2 spectral reflectances should be used jointly for large-scale LCZ mapping, for example in a majority-voting manner, as demonstrated by the improvement achieved by the proposed framework.
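The joint use of Landsat-8 and Sentinel-2 "in a majority voting manner" can be sketched as a per-pixel vote over independently predicted label maps. This is an illustrative fusion step, not the paper's exact framework; the input maps and class codes are hypothetical.

```python
import numpy as np

def majority_vote(*label_maps):
    """Fuse per-sensor LCZ label maps by a per-pixel majority vote.

    Each input is an integer class map of identical shape; ties resolve
    to the smallest class label via np.bincount(...).argmax().
    """
    stacked = np.stack(label_maps, axis=0)        # (n_sensors, H, W)
    flat = stacked.reshape(stacked.shape[0], -1)  # (n_sensors, H*W)
    fused = np.array([np.bincount(col).argmax() for col in flat.T])
    return fused.reshape(stacked.shape[1:])

# Hypothetical 2x2 LCZ label maps from three sources.
s2 = np.array([[1, 2], [3, 4]])
l8 = np.array([[1, 2], [3, 5]])
ntl = np.array([[1, 9], [3, 5]])
fused = majority_vote(s2, l8, ntl)
# fused == [[1, 2], [3, 5]]
```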


2020 ◽  
Author(s):  
Etienne Clabaut ◽  
Myriam Lemelin ◽  
Mickaël Germain ◽  
Marie-Claude Williamson ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1592
Author(s):  
Jonguk Kim ◽  
Hyansu Bae ◽  
Hyunwoo Kang ◽  
Suk Gyu Lee

This paper proposes an algorithm for extracting building locations from satellite imagery and using that information to assess roof condition. Roof materials are determined by analysing the conditions where a building is located and detecting its position in wide-area satellite images. Buildings with incomplete roofs or weak roofing materials are more likely to suffer severe damage in disaster situations or from external shocks. To address this problem, we propose an algorithm that detects roofs and classifies their materials in satellite images. The algorithm first locates areas where buildings are likely to exist, using roads as a cue. From images of the detected buildings, the roof material is then classified with a proposed convolutional neural network (CNN) model consisting of 43 layers. In summary, we propose a CNN structure that detects building areas in large images and classifies the roof materials in the detected areas.


2021 ◽  
Vol 16 ◽  
pp. 155892502110050
Author(s):  
Junli Luo ◽  
Kai Lu ◽  
Yueqi Zhong ◽  
Boping Zhang ◽  
Huizhu Lv

Wool fiber and cashmere fiber are similar in physical and morphological characteristics, so distinguishing these two fibers has always been challenging. This study identifies five kinds of cashmere and wool fibers using a convolutional neural network model. To this end, image preprocessing was first performed. Then, following the VGGNet architecture, a convolutional neural network with 13 weight layers was established. A dataset of 50,000 fiber images was prepared for training and testing this newly established model. In the classification layer of the model, softmax regression is used to calculate a probability for the input fiber image over each category, and the category with the highest probability is selected as the predicted fiber class. In this experiment, the total identification accuracy on the test set is close to 93%. Among the five fibers, Mongolian brown cashmere has the highest identification accuracy, reaching 99.7%, while Chinese white cashmere has the lowest at 86.4%. Experimental results show that our model is an effective approach to multi-class fiber identification.
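The classification layer described above maps the network's output scores to per-class probabilities and picks the most probable class. A minimal sketch of that softmax-regression step, with hypothetical logits for the five fiber classes:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)  # guard against overflow
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical final-layer logits for one fiber image over five classes.
logits = np.array([2.0, 0.5, 0.1, -1.0, 1.0])
probs = softmax(logits)            # probabilities summing to 1
pred = int(probs.argmax())         # index of the predicted fiber class
# pred == 0 here, since class 0 has the largest logit
```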


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 949
Author(s):  
Jiangyi Wang ◽  
Min Liu ◽  
Xinwu Zeng ◽  
Xiaoqiang Hua

Convolutional neural networks perform well in many visual tasks because of their hierarchical structures and powerful feature extraction capabilities. The symmetric positive definite (SPD) matrix has attracted attention in visual classification because of its excellent ability to learn proper statistical representations and to distinguish samples carrying different information. In this paper, a deep neural network signal detection method based on spectral convolution features is proposed. In this method, local features extracted by a convolutional neural network are used to construct an SPD matrix, and a deep learning algorithm for SPD matrices is used to detect target signals. Feature maps extracted by two kinds of convolutional neural network models are applied in this study. Under this formulation, signal detection becomes a binary classification problem. To demonstrate the effectiveness and superiority of this method, simulated and semi-physical simulated data sets are used. The results show that, under low signal-to-clutter ratio (SCR), this method can obtain a gain of 0.5–2 dB on both simulated and semi-physical simulated data sets compared with the spectral signal detection method based on a deep neural network.
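A common way to build an SPD matrix from CNN local features, which the abstract alludes to, is the covariance of the channel descriptors at each spatial position, regularised with a small ridge so it is strictly positive definite. This is an illustrative construction under that assumption, not necessarily the paper's exact recipe:

```python
import numpy as np

def spd_from_features(feature_map, eps=1e-5):
    """Build an SPD covariance matrix from a (C, H, W) CNN feature map.

    Each spatial position contributes one C-dimensional local descriptor;
    their channel covariance plus a ridge eps*I is symmetric positive
    definite by construction.
    """
    c = feature_map.shape[0]
    x = feature_map.reshape(c, -1)            # (C, H*W) local descriptors
    x = x - x.mean(axis=1, keepdims=True)     # centre each channel
    cov = (x @ x.T) / (x.shape[1] - 1)        # sample channel covariance
    return cov + eps * np.eye(c)              # ridge guarantees SPD

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))         # toy feature map
spd = spd_from_features(fmap)
# spd is symmetric with strictly positive eigenvalues
```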


2021 ◽  
Vol 23 (07) ◽  
pp. 1116-1120
Author(s):  
Cijil Benny

This paper analyses the feasibility of AI studies and the involvement of AI in COVID-related treatments. In all, several procedures were reviewed and studied. Among the reviewed studies, the most effective analysis methods were the Susceptible-Infected-Recovered (SIR) and Susceptible-Exposed-Infected-Removed (SEIR) models, respectively, while AI is mostly applied to X-rays and CT scans with the help of a convolutional neural network. Several data sets were used for the paper, including medical and case reports, medical strategies, and patient records, and the approaches were compared through statistical analysis of these reports. Acceptance of such COVID analyses is growing, and they are becoming widely accessible. Furthermore, considerable regulation is needed to handle this pandemic, since it is a threat to global society, and many more discoveries are expected in medical fields that use AI as a primary tool.
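The SIR model named above is a system of three coupled rate equations. A minimal forward-Euler sketch, with hypothetical contact rate `beta` and recovery rate `gamma` (not values from the paper):

```python
import numpy as np

def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR compartmental model.

    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I.
    """
    n = s + i + r
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

# Hypothetical outbreak: 10 infected in a population of 10,000.
s, i, r = 9990.0, 10.0, 0.0
for _ in range(2000):                       # simulate 200 time units
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1)
# The total population s + i + r is conserved at 10,000
```

The SEIR variant adds an Exposed compartment between S and I with its own incubation rate.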

