Semantic Segmentation of Static and Dynamic Structures of Marina Satellite Images using Deep Learning

Author(s):  
Matheus M. dos Santos ◽  
Giovanni G. De Giacomo ◽  
Paulo L. J. Drews ◽  
Silvia S.C. Botelho


2021 ◽  
Vol 13 (19) ◽  
pp. 3836
Author(s):  
Clément Dechesne ◽  
Pierre Lassalle ◽  
Sébastien Lefèvre

In recent years, numerous deep learning techniques have been proposed to tackle the semantic segmentation of aerial and satellite images; they now top the leaderboards of the main scientific contests and represent the current state of the art. Nevertheless, despite their promising results, these state-of-the-art techniques are still unable to provide results with the level of accuracy sought in real applications, i.e., in operational settings. It is therefore essential to qualify these segmentation results and to estimate the uncertainty introduced by a deep network. In this work, we address uncertainty estimation in semantic segmentation. To do so, we rely on a Bayesian deep learning method based on Monte Carlo Dropout, which allows us to derive uncertainty metrics along with the semantic segmentation. Built on the widely used U-Net architecture, our model achieves semantic segmentation with high accuracy on several state-of-the-art datasets. More importantly, uncertainty maps are also derived from our model. While they enable a sounder qualitative evaluation of the segmentation results, they also carry valuable information for improving the reference databases.
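The Monte Carlo Dropout idea above can be sketched in a few lines: keep dropout active at inference, run several stochastic forward passes, average the class probabilities, and use predictive entropy as the per-pixel uncertainty map. The toy two-layer "network", its shapes, and the dropout rate below are illustrative assumptions, not the authors' U-Net model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mc_dropout_predict(logits_fn, x, n_passes=50):
    """Average class probabilities over stochastic passes (MC Dropout)."""
    probs = np.stack([softmax(logits_fn(x)) for _ in range(n_passes)])
    mean_probs = probs.mean(axis=0)
    # Predictive entropy: high where the passes disagree -> uncertainty map.
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    return mean_probs.argmax(axis=-1), entropy

# Toy stand-in for a segmentation net: a fixed two-layer map whose hidden
# units are randomly dropped on EVERY call (dropout stays on at test time).
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 4))          # 4 semantic classes
P_DROP = 0.5

def toy_segnet(x):
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) > P_DROP        # Bernoulli dropout mask
    h = h * mask / (1.0 - P_DROP)              # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(8, 8, 3))                 # one 8x8 "image", 3 bands
labels, uncertainty = mc_dropout_predict(toy_segnet, x)
print(labels.shape, uncertainty.shape)         # per-pixel class and entropy
```

In a real system the stochastic passes would come from a U-Net with its dropout layers left in training mode at inference time; the averaging and entropy computation are unchanged.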


Author(s):  
C. Najjaj ◽  
H. Rhinane ◽  
A. Hilali

Abstract. Researchers in computer vision and machine learning are becoming increasingly interested in image semantic segmentation. Many methods based on convolutional neural networks (CNNs) have been proposed and have made considerable progress in the building-extraction task, yet other approaches can still yield suboptimal segmentation results. To extract buildings with high precision, we propose a model that recognizes all buildings and renders them as a mask, with buildings in white and all other classes in black. The developed network, which is based on U-Net, boosts the model's sensitivity. This paper presents a deep learning approach for building detection on satellite imagery of the city of Casablanca. First, we describe the terminology of this field. Next, we present the main dataset used in this project, which consists of 1000 satellite images. Then, we train the U-Net model for 25 epochs on the training and validation datasets and test the pretrained weights on unseen satellite images. Finally, the experimental results show that the proposed model performs well, producing a binary mask that extracts the buildings of the Casablanca region with high accuracy and completeness, achieving an average F1 score of 0.91 on the test data.
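The F1 score reported above is the standard harmonic mean of precision and recall over the binary building mask. A minimal sketch of that metric, with tiny invented masks for illustration:

```python
import numpy as np

def binary_f1(pred, gt):
    """F1 for a binary building mask (1 = building, 0 = background)."""
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy 2x3 predicted and ground-truth masks (not the paper's data).
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(binary_f1(pred, gt), 3))   # 0.667: precision = recall = 2/3
```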


2021 ◽  
Author(s):  
Rostyslav-Mykola Tsenov

In recent years, many remote sensing problems have benefited from the improvements made in deep learning. In particular, deep learning semantic segmentation algorithms have provided improved frameworks for the automated production of land-use and land-cover (LULC) maps. Automating LULC map production can significantly increase its frequency, which greatly benefits areas such as natural resource management, wildlife habitat protection, urban expansion, and damage delineation. In this thesis, many different convolutional neural networks (CNNs) were examined in combination with various state-of-the-art semantic segmentation methods and extensions to improve the accuracy of predicted LULC maps. Most of the experiments were carried out using Landsat 5/7 and Landsat 8 satellite images. Additionally, unsupervised domain adaptation (UDA) architectures were explored to transfer knowledge extracted from a labelled Landsat 8 dataset to unlabelled Sentinel-2 satellite images. The performance of various CNN and extension combinations was carefully assessed; VGGNet with an output stride of 4 and a modified U-Net architecture provided the best results. Additionally, an expanded analysis of the generated LULC maps for the various sensors was provided. The contributions of this thesis are accurate automated LULC map predictions that achieved ~92.4% accuracy using deep neural networks; a model trained on a larger area, six times the size of that used in previous work, for both 8-bit Landsat 5/7 and 16-bit Landsat 8 sensors; and a network architecture that produces LULC maps for unlabelled 12-bit Sentinel-2 data using knowledge extracted from labelled Landsat 8 data.
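The ~92.4% figure above is a map-level accuracy; assuming it is per-pixel overall accuracy (the usual metric for LULC maps, though the thesis may define it differently), the computation reduces to the fraction of pixels where the predicted class matches the reference class:

```python
import numpy as np

def overall_accuracy(pred, ref):
    """Fraction of pixels whose predicted LULC class matches the reference."""
    return (pred == ref).mean()

# Toy 2x3 class maps (classes 0..2); not thesis data.
pred = np.array([[0, 1, 2], [2, 1, 0]])
ref  = np.array([[0, 1, 1], [2, 1, 0]])
print(overall_accuracy(pred, ref))   # 5 of 6 pixels agree
```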


2021 ◽  
pp. 498-509
Author(s):  
Tahmid Hasan Pranto ◽  
Abdulla All Noman ◽  
Asaduzzaman Noor ◽  
Ummeh Habiba Deepty ◽  
Rashedur M. Rahman

Author(s):  
Kuldeep Chaurasia ◽  
Rijul Nandy ◽  
Omkar Pawar ◽  
Ravi Ranjan Singh ◽  
Meghana Ahire

2021 ◽  
Vol 13 (14) ◽  
pp. 2723
Author(s):  
Naisen Yang ◽  
Hong Tang

Satellite images are always partitioned into regular patches with smaller sizes and then individually fed into deep neural networks (DNNs) for semantic segmentation. The underlying assumption is that these images are independent of one another in terms of geographic spatial information. However, it is well known that many land-cover or land-use categories share common regional characteristics within a certain spatial scale. For example, the style of buildings may change from one city or country to another. In this paper, we explore some deep learning approaches integrated with geospatial hash codes to improve the semantic segmentation results of satellite images. Specifically, the geographic coordinates of satellite images are encoded into a string of binary codes using the geohash method. Then, the binary codes of the geographic coordinates are fed into the deep neural network using three different methods in order to enhance the semantic segmentation ability of the deep neural network for satellite images. Experiments on three datasets demonstrate the effectiveness of embedding geographic coordinates into the neural networks. Our method yields a significant improvement over previous methods that do not use geospatial information.
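The geohash encoding mentioned above interleaves successive longitude and latitude bisection bits into a binary string. A minimal pure-Python sketch of that bit-level encoding (the downstream step of feeding the bits into the network, e.g. as an extra input channel or embedding, is only indicated in a comment):

```python
def geohash_bits(lat, lon, n_bits=32):
    """Encode (lat, lon) as an interleaved binary geohash string.
    Even bit positions refine longitude, odd positions refine latitude."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    for i in range(n_bits):
        if i % 2 == 0:                       # longitude bit
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append("1"); lon_lo = mid
            else:
                bits.append("0"); lon_hi = mid
        else:                                # latitude bit
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append("1"); lat_lo = mid
            else:
                bits.append("0"); lat_hi = mid
    return "".join(bits)

code = geohash_bits(48.8566, 2.3522, n_bits=16)   # Paris
print(code)
vec = [int(b) for b in code]   # binary vector fed to the network
```

Nearby patches share long bit prefixes, which is what lets the network pick up regional style (e.g. building appearance) from the code.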


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has potential for use in architecture and design, where buildings can be superimposed on existing locations to render 3D visualisations of plans. However, one major challenge that remains in MR development is the issue of real-time occlusion: hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, who is based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers led by Dr Fukuda are tackling the issue of occlusion in MR. They are currently developing an MR system that realises real-time occlusion by harnessing deep learning, using a semantic segmentation technique to achieve an outdoor landscape design simulation. This methodology can be used to automatically estimate the visual environment before and after construction projects.
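The occlusion handling described above can be illustrated with a minimal compositing sketch: a segmentation mask marks real foreground objects, and the virtual layer is drawn only where no such object is in front of it. The array shapes, pixel values, and mask layout below are invented for illustration; in practice the mask would come from a trained segmentation model running on the camera frame.

```python
import numpy as np

H, W = 4, 4
real_frame    = np.full((H, W, 3), 0.2)   # camera image (toy constant gray)
virtual_layer = np.full((H, W, 3), 0.9)   # rendered virtual building (toy)

# Boolean mask a segmentation model would output:
# True = real foreground object that must occlude the virtual model.
fg_mask = np.zeros((H, W), dtype=bool)
fg_mask[:, :2] = True                     # left half is a real occluder

# Real-time occlusion: keep the real pixels where the mask is True,
# otherwise show the virtual layer.
composite = np.where(fg_mask[..., None], real_frame, virtual_layer)
print(composite.shape)   # (4, 4, 3)
```

Per frame, this is a single masked select, which is what makes the approach viable in real time once the segmentation itself is fast enough.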

