BUILDING DETECTION FROM SAR IMAGES USING UNET DEEP LEARNING METHOD

Author(s):  
R. A. Emek ◽  
N. Demir

Abstract. SAR images differ from optical images in that their pixel values represent scattering rather than reflectance. This makes traditional object detection methodologies difficult to apply to SAR images. In recent years, deep learning models have been frequently used for segmentation and object detection. In this study, we have investigated the potential of U-Net models for building detection from fused SAR and optical imagery. The datasets used are Sentinel-1 SAR and Sentinel-2 multispectral images, provided by the 'SpaceNet 6 Multi-Sensor All-Weather Mapping' challenge. These images cover an area of 120 km² in Rotterdam, the Netherlands. For training, 20 HV-polarized and optical image patches of 900 by 900 pixels were used together. The calculated loss value is 0.4 and the accuracy is 81%.
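The abstract does not specify the exact architecture, so the following is only a minimal U-Net-style sketch of the kind of model described: an encoder-decoder with a skip connection, taking stacked SAR and optical channels and producing a building mask. Channel counts, depth, and the assumed 4-band input (one HV band plus three optical bands) are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=4):  # assumed: 1 HV band + 3 optical bands
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)   # 32 skip channels + 32 upsampled
        self.head = nn.Conv2d(32, 1, 1)  # building / background logit

    def forward(self, x):
        e1 = self.enc1(x)                 # full resolution features
        e2 = self.enc2(self.pool(e1))     # 1/2 resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

model = MiniUNet()
patch = torch.randn(1, 4, 256, 256)  # a fused SAR+optical training patch
mask_logits = model(patch)           # train against building masks, e.g. BCEWithLogitsLoss
```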

2021 ◽  
Vol 13 (7) ◽  
pp. 1236
Author(s):  
Yuanjun Shu ◽  
Wei Li ◽  
Menglong Yang ◽  
Peng Cheng ◽  
Songchen Han

Convolutional neural networks (CNNs) have been widely used in change detection for synthetic aperture radar (SAR) images and have been proven to achieve better precision than traditional methods. A two-stage patch-based deep learning method with a label-updating strategy is proposed in this paper. The initial label and mask are generated at the pre-classification stage. A two-stage updating strategy is then applied to gradually recover changed areas. At the first stage, the diversity of the training data is gradually restored. The output of the designed CNN is further processed to generate a new label and a new mask for the following learning iteration. As the diversity of the data is ensured after the first stage, pixels within uncertain areas can be easily classified at the second stage. Experimental results on several representative datasets show the effectiveness of the proposed method compared with several existing competitive methods.
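A schematic of one label-updating iteration, loosely following the description above: train on currently trusted pixels, then promote confident predictions into the label and mask for the next round. The sklearn-style model interface, the confidence threshold, and the flat feature representation of the patches are placeholders, not the paper's implementation.

```python
import numpy as np

def update_labels(model, patches, labels, mask, thresh=0.9):
    """One iteration: retrain on trusted samples, then grow the trusted set.
    patches: (N, D) features; labels: (N,) 0/1; mask: (N,) bool of trusted samples."""
    model.fit(patches[mask], labels[mask])       # learn from reliable samples only
    prob = model.predict_proba(patches)[:, 1]    # P(changed) for every sample
    confident = (prob > thresh) | (prob < 1 - thresh)
    labels = np.where(confident, (prob > 0.5).astype(int), labels)  # relabel confident samples
    mask = mask | confident                      # confident samples join the training set
    return labels, mask
```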


2020 ◽  
Author(s):  
Maria Nicolina Papa ◽  
Michael Nones ◽  
Carmela Cavallo ◽  
Massimiliano Gargiulo ◽  
Giuseppe Ruello

Changes in fluvial morphology, such as the migration of channels and sandbars, are driven by many factors, e.g. water, woody debris and sediment discharges, vegetation and management practice. Nowadays, increased anthropic pressure and climate change are accelerating the natural morphological dynamics. Therefore, monitoring river changes and assessing future trends are necessary for identifying optimal management practices, aiming at improving river ecological status and mitigating hydraulic risk. Satellite data can provide an effective and affordable tool for monitoring river morphology and its temporal evolution.

The main idea of this work is to understand which remotely sensed data, and particularly which spatial and temporal resolutions, are best suited for observing sandbar evolution in relatively large rivers. To this purpose, multispectral and Synthetic Aperture Radar (SAR) archive data with different spatial resolutions were used. Preference was given to freely available satellite data. Moreover, the observations extracted from the satellite data were compared with ground data recorded by a fixed camera.

The case study is a sandy bar (area about 0.4 km² and maximum width about 350 m) in a lowland reach of the Po River (Italy), characterized by frequent and significant morphological changes. The bar shoreline changes were captured by a fixed video camera, installed on a bridge and operating for almost two years (July 2017 - November 2018). To this purpose, we used: Sentinel-2 multispectral images with a spatial resolution of 10 m, Sentinel-1 SAR images with a resolution of 5 x 20 m and COSMO-SkyMed SAR images with a resolution of 5 m. It is worth noting that the Sentinel data of the Copernicus Programme are freely available, while the COSMO-SkyMed data of the Italian Space Agency (ASI) are freely distributed for scientific purposes after successful participation in an open call. In order to validate the results provided by the Sentinel and COSMO-SkyMed data, we used very high resolution multispectral images (about 50 cm).

Multispectral images are easily interpreted but are affected by cloud cover. For instance, in this analysis, the usable multispectral images amounted to about 50% of the total archive. On the other hand, SAR images provide information even in the presence of clouds and at night-time, but they have the drawback of more complex processing and interpretation. The shorelines extracted from the satellite images were compared with those extracted from photographic images taken on the same day as the satellite acquisition. Other comparisons were made between different satellite images acquired with a temporal mismatch of at most two days.

The results of the comparisons showed that the Sentinel-1 and Sentinel-2 data were both adequate for observing shoreline changes. Due to their higher resolution, the COSMO-SkyMed data provided better results. SAR data and multispectral data allowed for automatic extraction of the bar shoreline, with different degrees of processing burden. The fusion of data from different satellites made it possible to greatly increase the sampling rate.
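The abstract does not detail its extraction pipeline, but a common minimal approach to separating water from a sandbar in a Sentinel-2 scene is NDWI thresholding followed by contour tracing, sketched below. The band file names, the Otsu threshold choice, and the contour step are assumptions for illustration only.

```python
import numpy as np
import rasterio
from skimage.filters import threshold_otsu
from skimage import measure

# Read the green (B03) and near-infrared (B08) bands of a Sentinel-2 scene
with rasterio.open("S2_B03_green.tif") as g, rasterio.open("S2_B08_nir.tif") as n:
    green = g.read(1).astype("float32")
    nir = n.read(1).astype("float32")

ndwi = (green - nir) / (green + nir + 1e-6)   # water tends to NDWI > 0, bare sand < 0
water = ndwi > threshold_otsu(ndwi)           # binary water/land mask
shorelines = measure.find_contours(water.astype(float), 0.5)  # bar shoreline polylines
```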


2021 ◽  
Vol 13 (19) ◽  
pp. 3998
Author(s):  
Jianhao Gao ◽  
Yang Yi ◽  
Tang Wei ◽  
Haoguan Zhang

Publicly available optical remote sensing images from platforms such as the Sentinel-2 satellites contribute greatly to Earth observation and research tasks. However, information loss caused by clouds largely decreases the availability of usable optical images, so reconstructing the missing information is important. Existing reconstruction methods can hardly reflect real-time information because they mainly use multitemporal optical images as references. To capture real-time information in the cloud removal process, Synthetic Aperture Radar (SAR) images can serve as reference images owing to the cloud penetrability of SAR imaging. Nevertheless, large datasets are necessary because existing SAR-based cloud removal methods depend on network training. In this paper, we integrate the merits of multitemporal optical images and SAR images into the cloud removal process, the results of which can reflect changes in ground information, in a simple convolutional neural network. Although the proposed method is based on a deep neural network, it can operate directly on the target image without training datasets. We conduct several simulated and real-data cloud removal experiments on Sentinel-2 images with multitemporal Sentinel-1 SAR images and Sentinel-2 optical images. Experimental results show that the proposed method outperforms state-of-the-art multitemporal-based methods and overcomes the dataset constraint of SAR-based methods.
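A sketch of how a training-free, single-image method in this spirit can work: an untrained CNN is optimized so that its output, conditioned on the SAR image, matches the cloudy optical image on cloud-free pixels, letting the network prior fill the occluded areas. The network, loss, and masking scheme here are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn.functional as F

def reconstruct(net, sar, cloudy_optical, cloud_mask, steps=2000, lr=1e-3):
    """net: any untrained CNN mapping SAR channels to optical channels.
    cloud_mask: 1 where pixels are clear, 0 under cloud."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = net(sar)                          # SAR-conditioned optical prediction
        loss = F.l1_loss(pred * cloud_mask,      # fit only the clear pixels
                         cloudy_optical * cloud_mask)
        loss.backward()
        opt.step()
    return net(sar).detach()  # cloudy pixels filled by the learned prior
```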


mSystems ◽  
2020 ◽  
Vol 5 (1) ◽  
Author(s):  
Hao Jiang ◽  
Sen Li ◽  
Weihuang Liu ◽  
Hongjin Zheng ◽  
Jinghao Liu ◽  
...  

ABSTRACT Analyzing cells and tissues under a microscope is a cornerstone of biological research and clinical practice. However, the challenge faced by conventional microscopy image analysis is that cell recognition through a microscope is still time-consuming and lacks both accuracy and consistency. Despite enormous progress in computer-aided microscopy cell detection, especially with recent deep-learning-based techniques, it is still difficult to translate an established method directly to a new cell target without extensive modification. The morphology of a cell is complex and highly varied, but it has long been known that cells show a nonrandom geometrical order in which a distinct and defined shape can be formed in a given type of cell. Thus, we have proposed a geometry-aware deep-learning method, geometric-feature spectrum ExtremeNet (GFS-ExtremeNet), for cell detection. GFS-ExtremeNet is built on the framework of ExtremeNet with a collection of geometric features, resulting in the accurate detection of any given cell target. We obtained promising detection results with microscopic images of publicly available mammalian cell nuclei and newly collected protozoa, whose cell shapes and sizes varied. Even more strikingly, our method was able to detect unicellular parasites within red blood cells without misdiagnosing one as the other. IMPORTANCE Automated diagnostic microscopy powered by deep learning is useful, particularly in rural areas. However, there is no general method for object detection across different cell types. In this study, we developed GFS-ExtremeNet, a geometry-aware deep-learning method based on the detection of four extreme key points for each object (topmost, bottommost, rightmost, and leftmost) and its center point. A postprocessing step, namely the adjacency spectrum, was employed to measure whether the distances between the key points were below a certain threshold for a particular cell candidate. Our newly proposed geometry-aware deep-learning method outperformed other conventional object detection methods and can be applied to any type of cell with a certain geometrical order. Our GFS-ExtremeNet approach opens a new window for the development of automated cell detection systems.
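An illustrative check in the spirit of the described postprocessing: a cell candidate formed from four extreme points and a center is kept only if the point-to-center distances are mutually consistent. This simple symmetry-ratio rule is a stand-in for the paper's adjacency-spectrum thresholding, whose exact definition is not given in the abstract.

```python
import numpy as np

def plausible_cell(top, bottom, left, right, center, max_ratio=1.5):
    """Return True if the four extreme points form a geometrically
    coherent candidate around the center (ratio rule is an assumption)."""
    pts = np.array([top, bottom, left, right], dtype=float)
    dists = np.linalg.norm(pts - np.asarray(center, dtype=float), axis=1)
    # Reject groupings whose extreme points are wildly asymmetric
    return dists.max() <= max_ratio * dists.min()
```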


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Zeyu Shangguan ◽  
Lingyu Wang ◽  
Jianquan Zhang ◽  
Wenbo Dong

Space motion control is an important issue for space robots, rendezvous and docking, small-satellite formation, and various on-orbit services. Motion control requires robust object detection and high-precision object localization. Among the many sensing systems, such as laser radar, inertial sensors, and GPS navigation, vision-based navigation is best suited to noncontact applications at close distance and in highly dynamic environments. In this work, a vision-based system serving a free-floating robot inside a spacecraft is introduced, and a method to measure the 6-DOF position and attitude of a space body is presented. First, a deep-learning method is applied for robust object detection against the complex background; once the object has been approached at close distance, a reference marker is used for more precise matching and edge detection. After accurate coordinates are obtained from the image sequence, the object's spatial position and attitude are calculated by a geometric method and used for fine control. The experimental results show that the recognition method, based on deep learning at a distance and marker matching at close range, effectively eliminates false target recognition while improving positioning precision. Testing shows a recognition accuracy of 99.8% and a localization error of far less than 1% at 1.5 meters. A high-speed camera and a GPU-driven embedded electronic platform are used to accelerate image processing, so that the system runs at up to 70 frames per second. The contribution of this work is to introduce a deep-learning method for precision motion control while ensuring both the robustness and the real-time performance of the system, with the aim of making such vision-based systems more practicable in real space applications.
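For the geometry step, a standard way to recover 6-DOF position and attitude from matched marker corners is a perspective-n-point solve, sketched below with OpenCV. The marker size, corner coordinates, and camera intrinsics are placeholder values; the paper's own geometric method may differ.

```python
import cv2
import numpy as np

# 3D corner coordinates of a square 10 cm marker in its own frame (assumed size)
marker_3d = np.array([[-0.05, -0.05, 0], [ 0.05, -0.05, 0],
                      [ 0.05,  0.05, 0], [-0.05,  0.05, 0]], dtype=np.float32)
# Matched corner pixel coordinates from the image sequence (example values)
image_pts = np.array([[320, 240], [400, 238], [402, 318], [318, 322]],
                     dtype=np.float32)
# Camera intrinsic matrix (placeholder focal length and principal point)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(marker_3d, image_pts, K, None)  # no lens distortion assumed
R, _ = cv2.Rodrigues(rvec)  # attitude as a rotation matrix; tvec is the 3D position
```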


Water ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 298
Author(s):  
Jiwen Tang ◽  
Damien Arvor ◽  
Thomas Corpetti ◽  
Ping Tang

Irrigation systems play an important role in agriculture. Center pivot irrigation systems are popular in many countries as they are labor-saving and water-efficient. Monitoring the distribution of center pivot irrigation systems can provide important information on agricultural production, water consumption and land use. Deep learning has become an effective method for image classification and object detection. In this paper, a new method to detect the precise shape of center pivot irrigation systems is proposed. The proposed method combines a lightweight real-time object detection network (PVANET) based on deep learning, an image classification model (GoogLeNet) and accurate shape detection (Hough transform) to detect and accurately delineate center pivot irrigation systems and their associated circular shape. PVANET is lightweight and fast, GoogLeNet reduces the false detections produced by PVANET, and the Hough transform accurately detects the shape of center pivot irrigation systems. Experiments with Sentinel-2 images in Mato Grosso achieved a precision of 95% and a recall of 95.5%, which demonstrates the effectiveness of the proposed method. Finally, with the accurate shapes of the center pivot irrigation systems detected, the irrigated area in the region was estimated.
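A short sketch of the final shape-detection step: fitting circles to a detected candidate patch with OpenCV's Hough circle transform. The file name, radius range, and accumulator thresholds are illustrative, not the paper's parameters.

```python
import cv2
import numpy as np

# Grayscale patch around a pivot candidate produced by the detector (assumed file)
img = cv2.imread("pivot_candidate_patch.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)  # suppress speckle before edge voting

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=20, maxRadius=80)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Pixel area of one pivot circle; multiply by ground sample distance squared
        area_px = np.pi * r ** 2
        print(f"pivot at ({x}, {y}), radius {r} px, area {area_px:.0f} px^2")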


Author(s):  
K. P. Martinez ◽  
D. F. M. Burgos ◽  
A. C. Blanco ◽  
S. G. Salmo III

Abstract. Leaf Area Index (LAI) is a quantity that characterizes canopy foliage content. As leaf surfaces are the primary sites of energy and mass exchange and of the fundamental production of terrestrial ecosystems, many important processes are directly proportional to LAI. LAI can therefore be considered an important parameter of plant growth. Multispectral optical images have been widely utilized for mangrove-related studies, such as LAI estimation. For Sentinel-2, for example, LAI can be estimated using the biophysical processor in SNAP or using various machine learning algorithms. However, multispectral optical images have disadvantages due to their weather dependence and limited canopy penetration. In this study, a multi-sensor approach was implemented by using free multispectral optical images (Sentinel-2) and synthetic aperture radar (SAR) images (Sentinel-1) to estimate LAI. The use of SAR images can compensate for the above-mentioned disadvantages and can thus pave the way for regular mapping and assessment of LAI regardless of weather conditions and cloud cover. LAI models exploring linear, non-linear and decision-tree modelling algorithms were generated to relate Sentinel-1 derivatives to Sentinel-2 LAI. The Random Forest model proved the most robust, with the lowest RMSE of 0.2845. This result establishes a concrete relationship between a biophysical quantity derived from optical data and RADAR derivatives, which opens the opportunity of integrating both systems to compensate for each other's disadvantages and produce a more efficient quantification of LAI.
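A minimal sketch of the Random Forest step: regressing Sentinel-2-derived LAI on Sentinel-1 features. The feature files, the choice of VV/VH-style derivatives, and the train/test split are assumptions for illustration; only the Random Forest algorithm and the RMSE metric come from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# X: per-pixel SAR derivatives (e.g. VV, VH, VV/VH ratio); y: SNAP-derived LAI
X = np.load("s1_features.npy")   # shape (n_samples, n_features), assumed files
y = np.load("s2_lai.npy")        # shape (n_samples,)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, rf.predict(X_te)))
print(f"RMSE: {rmse:.4f}")  # the paper reports 0.2845 for its best model
```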


2019 ◽  
Vol 11 (12) ◽  
pp. 1444 ◽  
Author(s):  
Raveerat Jaturapitpornchai ◽  
Masashi Matsuoka ◽  
Naruo Kanemoto ◽  
Shigeki Kuzuoka ◽  
Riho Ito ◽  
...  

Remote sensing data can help developing countries monitor land use. However, constant cloud coverage prevents us from taking full advantage of optical satellite images. Therefore, we instead opt to use data from synthetic aperture radar (SAR), which can capture images of the Earth's surface regardless of weather conditions. In this study, we use SAR data to identify newly built constructions. Most studies on change detection tend to detect all changes with a similar temporal change characteristic occurring between two occasions, whereas we want to identify only the constructions and avoid detecting other changes, such as seasonal changes in vegetation. To do so, we study various deep learning network techniques and propose a fully convolutional network with a skip connection. We train this network on pairs of SAR data acquired on two different occasions over Bangkok, with ground truth that we manually created from optical images available on Google Earth for all of the SAR pairs. Experiments to determine the most suitable patch size, loss weighting, and number of epochs for the network are discussed in this paper. The trained model generates a binary map that precisely indicates the positions of newly built constructions on the Bangkok dataset, and performs acceptably on the Hanoi and Xiamen datasets. The proposed model can even be used with SAR images from the same satellite acquired from another orbit direction and still gives promising results.
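A sketch of a fully convolutional network with a skip connection of the kind described: the two-date SAR pair is stacked as two input channels and the output is a per-pixel new-construction probability. Layer widths and depth are assumptions; the paper's architecture is not specified in the abstract.

```python
import torch
import torch.nn as nn

class FCNSkip(nn.Module):
    def __init__(self):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU())
        self.down2 = nn.Sequential(nn.MaxPool2d(2),
                                   nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.out = nn.Conv2d(64, 1, 1)

    def forward(self, x):                     # x: (B, 2, H, W) pre/post SAR pair
        f1 = self.down1(x)                    # full-resolution features
        f2 = self.up(self.down2(f1))          # downsample, process, upsample back
        fused = torch.cat([f1, f2], dim=1)    # skip connection preserves detail
        return torch.sigmoid(self.out(fused)) # binary new-construction map

model = FCNSkip()
pair = torch.randn(1, 2, 256, 256)  # two co-registered SAR acquisitions
change_map = model(pair)
```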

