Mapping sugarcane in Thailand using transfer learning, a lightweight convolutional neural network, NICFI high-resolution satellite imagery and Google Earth Engine

Author(s): Ate Poortinga, Nyein Soe Thwal, Nishanta Khanal, Timothy Mayer, Biplov Bhandari, ...

2020, Vol 12 (5), pp. 789
Author(s): Kun Li, Xiangyun Hu, Huiwei Jiang, Zhen Shu, Mi Zhang

Automatic extraction of region objects from high-resolution satellite imagery presents a great challenge, because there may be very large variations of the objects in terms of their size, texture, shape, and contextual complexity in the image. To handle these issues, we present a novel, deep-learning-based approach to interactively extract non-artificial region objects, such as water bodies, woodland, farmland, etc., from high-resolution satellite imagery. First, our algorithm transforms user-provided positive and negative clicks or scribbles into guidance maps, which consist of a relevance map modified from Euclidean distance maps, two geodesic distance maps (for positive and negative, respectively), and a sampling map. Then, feature maps are extracted by applying a VGG convolutional neural network pre-trained on the ImageNet dataset to the image X, and they are then upsampled to the resolution of X. Image X, guidance maps, and feature maps are integrated as the input tensor. We feed the proposed attention-guided, multi-scale segmentation neural network (AGMSSeg-Net) with the input tensor above to obtain the mask that assigns a binary label to each pixel. After a post-processing operation based on a fully connected Conditional Random Field (CRF), we extract the selected object boundary from the segmentation result. Experiments were conducted on two typical datasets with diverse region object types from complex scenes. The results demonstrate the effectiveness of the proposed method, and our approach outperforms existing methods for interactive image segmentation.
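The guidance maps described above can be sketched minimally. The abstract says the relevance map is "modified from Euclidean distance maps"; the sketch below builds only that Euclidean-distance part, with an assumed exponential squashing (the `tau` parameter and the two-channel stacking are illustrative choices, not the paper's exact definition):

```python
import numpy as np

def relevance_map(shape, clicks, tau=40.0):
    """Euclidean distance map from user clicks, squashed into [0, 1]."""
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    dist = np.minimum.reduce([np.hypot(rr - r, cc - c) for r, c in clicks])
    return np.exp(-dist / tau)  # 1.0 at a click, decaying with distance

pos = relevance_map((64, 64), [(10, 10), (50, 40)])  # positive clicks
neg = relevance_map((64, 64), [(5, 60)])             # negative clicks
guidance = np.stack([pos, neg])  # extra channels alongside image X
```

In the approach above, such guidance channels are concatenated with the image and the upsampled VGG feature maps to form the network's input tensor.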


PLoS ONE, 2021, Vol 16 (7), pp. e0253370
Author(s): Ethan Brewer, Jason Lin, Peter Kemper, John Hennin, Dan Runfola

Recognizing the importance of road infrastructure to promote human health and economic development, actors around the globe are regularly investing in both new roads and road improvements. However, in many contexts there is a sparsity—or complete lack—of accurate information regarding existing road infrastructure, challenging the effective identification of where investments should be made. Previous literature has focused on overcoming this gap through the use of satellite imagery to detect and map roads. In this piece, we extend this literature by leveraging satellite imagery to estimate road quality and concomitant information about travel speed. We adopt a transfer learning approach in which a convolutional neural network architecture is first trained on data collected in the United States (where data is readily available), and then “fine-tuned” on an independent, smaller dataset collected from Nigeria. We test and compare eight different convolutional neural network architectures using a dataset of 53,686 images of 2,400 kilometers of roads in the United States, in which each road segment is measured as “low”, “middle”, or “high” quality using an open, cellphone-based measuring platform. Using satellite imagery to estimate these classes, we achieve an accuracy of 80.0%, with 99.4% of predictions falling within the actual or an adjacent class. The highest performing base model was applied to a preliminary case study in Nigeria, using a dataset of 1,000 images of paved and unpaved roads. By tailoring our US-model on the basis of this Nigeria-specific data, we were able to achieve an accuracy of 94.0% in predicting the quality of Nigerian roads. A continuous case estimate also showed the ability, on average, to predict road quality to within 0.32 on a 0 to 3 scale (with higher values indicating higher levels of quality).
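The transfer-learning recipe described above — keep the US-trained convolutional layers frozen and retrain only a small classification head on the smaller target dataset — can be sketched as follows. The backbone stand-in, feature dimension, training loop, and synthetic three-class data are all illustrative assumptions, not the paper's architecture or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_features(images):
    # Stand-in for the US-pretrained convolutional backbone: its weights
    # are frozen, so it only maps images to fixed feature vectors (+ bias).
    flat = images.reshape(len(images), -1)
    return np.hstack([flat, np.ones((len(flat), 1))])

def train_head(feats, labels, n_classes=3, lr=0.05, epochs=500):
    """Retrain only a softmax classification head on the new data."""
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(feats)
    return W

# Toy stand-in for the smaller fine-tuning set: three road-quality
# classes ("low", "middle", "high") with separable synthetic features.
X = rng.normal(size=(90, 4, 4)) + np.repeat(np.arange(3), 30)[:, None, None]
y = np.repeat(np.arange(3), 30)
W = train_head(frozen_features(X), y)
acc = ((frozen_features(X) @ W).argmax(axis=1) == y).mean()
```

Freezing the backbone keeps the generic low-level features learned from the data-rich domain while letting the head adapt to the data-poor one.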


2019, Vol 14 (31), pp. 81-88
Author(s): Anaam Kadhim Hadi

This research presents a new algorithm for classifying shadow and water bodies in high-resolution (4-meter) satellite images of Baghdad city. The equations of the color-space components C1-C2-C3 were modulated: the color-space component C3 (blue) was used to discriminate shadow, and C1 (red) was used to detect water bodies (the river). The new technique was successfully tested on many images from Google Earth and Ikonos. Experimental results show that the algorithm effectively detects all types of shadow in one color and detects the water bodies in another color. The benefit of this new technique is that it discriminates between shadow and water in a fast Matlab program.
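The C1-C2-C3 components referred to above follow the standard definition C1 = arctan(R / max(G, B)), C2 = arctan(G / max(R, B)), C3 = arctan(B / max(R, G)). A minimal numpy sketch, using an illustrative shadow threshold on C3 rather than the paper's modulated equations:

```python
import numpy as np

def c1c2c3(rgb):
    """C1-C2-C3 colour-space transform of an RGB image (angles in radians)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    c1 = np.arctan2(r, np.maximum(g, b))  # arctan2 avoids divide-by-zero
    c2 = np.arctan2(g, np.maximum(r, b))
    c3 = np.arctan2(b, np.maximum(r, g))
    return c1, c2, c3

# Toy image: a dark, blue-dominated (shadow-like) pixel and a bright pixel.
img = np.array([[[20, 25, 60], [200, 180, 120]]], dtype=np.uint8)
c1, c2, c3 = c1c2c3(img)
shadow_mask = c3 > 0.9  # illustrative threshold, not the paper's value
```

Because C3 depends only on the ratio of blue to the brighter of the other two channels, it stays high for dark shadow pixels where absolute intensity alone would be ambiguous.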


2020, Vol 10 (2), pp. 602
Author(s): Min Ji, Lanfa Liu, Rongchun Zhang, Manfred F. Buchroithner

Buildings are an indispensable part of human life, providing places for people to live, study, work, and engage in various cultural and social activities. Earthquakes threaten people directly, and damaged buildings are one of the main hazards they cause, so it is essential to retrieve detailed information on affected buildings after an earthquake. Very high-resolution satellite imagery plays a key role in retrieving building damage information, since it can be captured quickly and effectively after a disaster. In this paper, the pretrained Visual Geometry Group (VGG)Net model was applied to identify collapsed buildings induced by the 2010 Haiti earthquake using pre- and post-event remotely sensed space imagery, and the fine-tuned pretrained VGGNet model was compared with a VGGNet model trained from scratch. The effects of dataset augmentation and of freezing different intermediate layers were also explored. The experimental results demonstrate that the fine-tuned VGGNet model outperformed the VGGNet model trained from scratch, increasing overall accuracy (OA) from 83.38% to 85.19% and Kappa from 60.69% to 67.14%. By taking advantage of dataset augmentation, OA and Kappa rose to 88.83% and 75.33%, respectively, and collapsed buildings were better recognized, with a larger producer accuracy of 86.31%. The present study shows the potential of using a pretrained Convolutional Neural Network (CNN) model to identify collapsed buildings caused by earthquakes using very high-resolution satellite imagery.
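Dataset augmentation of the kind explored above is commonly done with simple geometric transforms; a minimal sketch assuming flip/rotation augmentation (the study's exact augmentation operations are not specified here):

```python
import numpy as np

def augment(patch):
    """Return the 8 dihedral variants: 4 rotations, each optionally flipped."""
    variants = []
    for k in range(4):
        rot = np.rot90(patch, k)      # rotate by k * 90 degrees
        variants.append(rot)
        variants.append(np.fliplr(rot))  # horizontal mirror of that rotation
    return variants

patch = np.arange(16).reshape(4, 4)   # stand-in for an image patch
augmented = augment(patch)            # 8x more training samples per patch
```

Geometric augmentation is a natural fit for overhead imagery, since building damage looks the same regardless of the patch's orientation.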


Drones ◽  
2020 ◽  
Vol 4 (3) ◽  
pp. 50
Author(s):  
Mary K. Bennett ◽  
Nicolas Younes ◽  
Karen Joyce

While coral reef ecosystems hold immense biological, ecological, and economic value, frequent anthropogenic and environmental disturbances have caused these ecosystems to decline globally. Current coral reef monitoring methods include in situ surveys and analyzing remotely sensed data from satellites. However, in situ methods are often expensive and inconsistent in terms of time and space. High-resolution satellite imagery can also be expensive to acquire and subject to environmental conditions that conceal target features. High-resolution imagery gathered from remotely piloted aircraft systems (RPAS or drones) is an inexpensive alternative; however, processing drone imagery for analysis is time-consuming and complex. This study presents the first semi-automatic workflow for drone image processing with Google Earth Engine (GEE) and free and open source software (FOSS). With this workflow, we processed 230 drone images of Heron Reef, Australia and classified coral, sand, and rock/dead coral substrates with the Random Forest classifier. Our classification achieved an overall accuracy of 86% and mapped live coral cover with 92% accuracy. The presented methods enable efficient processing of drone imagery of any environment and can be useful when processing drone imagery for calibrating and validating satellite imagery.
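The overall accuracy and per-class accuracy figures reported above are standard confusion-matrix metrics; a minimal sketch with invented counts (the matrix below is illustrative, not the study's data):

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of all reference samples classified correctly."""
    return np.trace(cm) / cm.sum()

def producers_accuracy(cm):
    # Per-class recall: correctly classified samples / reference totals.
    return np.diag(cm) / cm.sum(axis=1)

# Rows = reference class, columns = predicted class
# (classes: coral, sand, rock/dead coral; counts invented).
cm = np.array([[92,  5,  3],
               [ 4, 88,  8],
               [ 6,  9, 85]])
oa = overall_accuracy(cm)    # (92 + 88 + 85) / 300
pa = producers_accuracy(cm)  # per-class: [0.92, 0.88, 0.85]
```

Producer's accuracy is the figure to watch for a single target class such as live coral, since a high overall accuracy can hide poor recall on one substrate.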

