An Improved Boosting Learning Saliency Method for Built-Up Areas Extraction in Sentinel-2 Images

2018 ◽  
Vol 10 (12) ◽  
pp. 1863 ◽  
Author(s):  
Zhenhui Sun ◽  
Qingyan Meng ◽  
Weifeng Zhai

Built-up area extraction from satellite images is an important aspect of urban planning and land use; however, it remains a challenging task with optical satellite images, and existing methods may be limited by complex backgrounds. In this paper, an improved boosting learning saliency method for built-up area extraction from Sentinel-2 images is proposed. First, the optimal band combination for extracting such areas from Sentinel-2 data is determined; then, a coarse saliency map is generated based on multiple cues and the geodesic weighted Bayesian (GWB) model, which provides training samples for a strong model; a refined saliency map is subsequently obtained using the strong model. Furthermore, cuboid cellular automata (CCA) are used to integrate multiscale saliency maps and improve the refined saliency map. The coarse and refined saliency maps are then synthesized into a final saliency map. Finally, the fractional-order Darwinian particle swarm optimization (FODPSO) algorithm is employed to extract the built-up areas from the final saliency result. Cities in five different types of ecosystems in China (desert, coastal, riverside, valley, and plain) are used to evaluate the proposed method. Analysis of the results and comparisons with other methods suggest that the proposed method is robust and accurate.
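The final synthesis and thresholding steps can be sketched as follows; the weighting parameter and the fixed threshold are illustrative stand-ins (the paper picks its threshold with FODPSO, which is not reproduced here):

```python
import numpy as np

def synthesize_saliency(coarse, refined, alpha=0.5):
    """Combine coarse and refined saliency maps into a final map.

    `alpha` is an illustrative weighting parameter, not from the paper;
    both inputs are assumed to be float arrays scaled to [0, 1].
    """
    fused = alpha * coarse + (1.0 - alpha) * refined
    # Normalize back to [0, 1] so a fixed threshold is meaningful.
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / rng if rng > 0 else fused

def extract_builtup_mask(saliency, threshold=0.5):
    """Binarize the final saliency map; the paper uses FODPSO to choose
    the threshold, so a simple fixed cut stands in for it here."""
    return saliency >= threshold
```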

2021 ◽  
Vol 13 (8) ◽  
pp. 1505
Author(s):  
Klaudia Kryniecka ◽  
Artur Magnuszewski

The lower Vistula River was regulated in the years 1856–1878, between river kilometers 718 and 939. The regulation plan did not take into consideration the large bed-load transport. The channel was shaped with simplified geometry: too wide for low flows and too straight to stabilize sandbar movement. The hydraulic parameters of the lower Vistula River show high flow velocities and high shear stress. The movement of the alternate sandbars can be traced on Sentinel-2 optical satellite images. In this study, a method of sandbar detection through the remote sensing indices Sentinel Water Mask (SWM) and Automated Water Extraction Index no shadow (AWEInsh), together with manual delineation by visual interpretation (MD), was applied to satellite images of the lower Vistula River recorded at times of low flow (20 August 2015, 4 September 2016, 30 July 2017, 20 September 2018, and 29 August 2019). The areas of 32 alternate sandbars obtained by the SWM, AWEInsh, and MD methods on the Sentinel-2 image recorded on 20 August 2015 were compared by statistical analysis of the intraclass correlation coefficient (ICC). The distance of sandbar shift in the analyzed intervals between image registration dates depends on the mean discharge (MQ). The period from 30 July 2017 to 20 September 2018 was wet (MQ = 1140 m³·s⁻¹) and created conditions for the largest average shift of the alternate sandbars, from 509 to 548 m; the velocity of movement, calculated as the average shift per day, was between 1.2 and 1.3 m·day⁻¹. The smallest shift of alternate sandbars was characteristic of the low-flow period from 20 August 2015 to 4 September 2016 (MQ = 306 m³·s⁻¹): from 279 to 310 m, with an average velocity of 0.7 to 0.8 m·day⁻¹.
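The two water indices can be computed per pixel from the reflectance bands; the AWEInsh and SWM forms below follow their commonly published definitions, and the SWM cut-off used for the sandbar mask is an illustrative assumption rather than the study's calibrated value:

```python
import numpy as np

def awei_nsh(green, nir, swir1, swir2):
    """Automated Water Extraction Index, no-shadow variant, in its
    commonly published form; inputs are surface-reflectance arrays."""
    return 4.0 * (green - swir1) - (0.25 * nir + 2.75 * swir2)

def swm(blue, green, nir, swir1):
    """Sentinel Water Mask ratio index: water pixels take high values."""
    return (blue + green) / (nir + swir1)

def sandbar_mask(green, blue, nir, swir1, swir2, swm_cut=1.4):
    """Emerged sandbars are the non-water pixels inside the channel;
    here water is taken as SWM above an assumed threshold."""
    water = swm(blue, green, nir, swir1) > swm_cut
    return ~water
```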


Author(s):  
Liming Li ◽  
Xiaodong Chai ◽  
Shuguang Zhao ◽  
Shubin Zheng ◽  
Shengchao Su

This paper proposes an effective method to improve the performance of saliency detection via iterative bootstrap learning, which consists of two tasks: saliency optimization and saliency integration. Specifically, first, multiscale segmentation and feature extraction are performed on the input image. Second, prior saliency maps are generated using existing saliency models and used to produce the initial saliency map. Third, the prior maps are fed into the saliency regressor together: training samples are collected from the prior maps at multiple scales, and a random forest regressor is learned from the training data. An integration of the initial saliency map and the output of the saliency regressor then yields the coarse saliency map. Finally, to further improve the quality of the saliency map, both the initial and coarse saliency maps are fed into the saliency regressor together, and the regressor's output, the initial saliency map, and the coarse saliency map are integrated into the final saliency map. Experimental results on three public data sets demonstrate that the proposed method consistently achieves the best performance and that significant improvement can be obtained when applying it to existing saliency models.
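One bootstrap round of the train-and-rescore loop might look like the sketch below, which uses scikit-learn's RandomForestRegressor and illustrative confidence thresholds in place of the paper's exact sampling and integration scheme:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def collect_samples(features, prior_map, hi=0.8, lo=0.2):
    """Pick confident foreground/background pixels from a prior saliency
    map as training data; the thresholds are illustrative."""
    fg = prior_map >= hi
    bg = prior_map <= lo
    X = np.vstack([features[fg], features[bg]])
    y = np.hstack([np.ones(fg.sum()), np.zeros(bg.sum())])
    return X, y

def bootstrap_saliency(features, prior_map):
    """One bootstrap round: train a random-forest regressor on confident
    samples, then re-score every pixel of the image.

    `features` is an (h, w, d) array of per-pixel descriptors."""
    X, y = collect_samples(features, prior_map)
    rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    h, w = prior_map.shape
    pred = rf.predict(features.reshape(-1, features.shape[-1])).reshape(h, w)
    # A simple average stands in for the paper's integration step.
    return 0.5 * (pred + prior_map)
```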


2020 ◽  
Vol 7 (2) ◽  
pp. 1001-1008
Author(s):  
Ngozi Chizoma Umelo-Ibemere



2021 ◽  
Vol 13 (16) ◽  
pp. 3319
Author(s):  
Nan Ma ◽  
Lin Sun ◽  
Chenghu Zhou ◽  
Yawen He

Automatic cloud detection in remote sensing images is of great significance. Deep-learning-based methods can achieve cloud detection with high accuracy; however, network training relies heavily on a large number of labels. Manually labelling pixel-wise cloud and non-cloud annotations for many remote sensing images is laborious and requires expert-level knowledge, and different types of satellite images cannot share one set of training data because of differences in spectral range and spatial resolution. Hence, labelled samples are required to train a new deep-learning model for each upcoming satellite. To overcome this limitation, a novel cloud detection algorithm based on a spectral library and a convolutional neural network (CD-SLCNN) is proposed in this paper. In this method, residual learning and a one-dimensional CNN (Res-1D-CNN) are used to accurately capture the spectral information of pixels based on a prior spectral library, effectively preventing errors caused by the uncertainties of thin clouds, broken clouds, and clear-sky pixels during remote sensing interpretation. Benefiting from data simulation, the method is suitable for cloud detection on different types of multispectral data. A total of 62 Landsat-8 Operational Land Imager (OLI), 25 Moderate Resolution Imaging Spectroradiometer (MODIS), and 20 Sentinel-2 satellite images, acquired at different times and over different types of underlying surfaces (high vegetation coverage, urban areas, bare soil, water, and mountains), were used for cloud detection validation and quantitative analysis, and the results were compared with those of Fmask (function of mask), the MODIS cloud mask, a support vector machine, and a random forest. The comparison revealed that the CD-SLCNN method achieved the best performance, with the highest overall accuracy (95.6%, 95.36%, and 94.27%) and mean intersection over union (77.82%, 77.94%, and 77.23%) on the Landsat-8 OLI, MODIS, and Sentinel-2 data, respectively. The CD-SLCNN algorithm produced consistent results with more accurate cloud contours on thick, thin, and broken clouds over diverse underlying surfaces, and performed stably over bright surfaces such as buildings, ice, and snow.
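The two reported metrics can be computed from a predicted and a reference cloud mask as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def overall_accuracy(pred, truth):
    """Fraction of pixels whose cloud/clear label matches the reference."""
    return float(np.mean(pred == truth))

def mean_iou(pred, truth, classes=(0, 1)):
    """Mean intersection-over-union across the clear (0) and cloud (1)
    classes, the second metric reported for CD-SLCNN."""
    ious = []
    for c in classes:
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```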


2022 ◽  
Vol 15 ◽  
Author(s):  
Ying Yu ◽  
Jun Qian ◽  
Qinglong Wu

This article proposes a bottom-up visual saliency model that uses the wavelet transform to conduct multiscale analysis and computation in the frequency domain. First, we compute multiscale magnitude spectra by performing a wavelet transform to decompose the magnitude spectrum of the discrete cosine coefficients of an input image. Next, we obtain multiple saliency maps of different spatial scales through an inverse transformation from the frequency domain to the spatial domain, which utilizes the discrete cosine magnitude spectra after multiscale wavelet decomposition. Then, we employ an evaluation function to automatically select the two best multiscale saliency maps. A final saliency map is generated via an adaptive integration of the two selected multiscale saliency maps. The proposed model is fast, efficient, and can simultaneously detect salient regions or objects of different sizes. It outperforms state-of-the-art bottom-up saliency approaches in experiments on psychophysical consistency, eye fixation prediction, and saliency detection for natural images. In addition, the proposed model is applied to automatic ship detection in optical satellite images. Ship detection tests on visible-spectrum optical satellite data not only demonstrate our saliency model's effectiveness in detecting small and large salient targets but also verify its robustness against various sea background disturbances.
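A much-simplified, single-scale relative of this frequency-domain approach (the image-signature idea: keep only the sign of the DCT coefficients and square the reconstruction) can be sketched as below; the model described above additionally wavelet-decomposes the magnitude spectrum at several scales and adaptively fuses the two best maps, which is omitted here:

```python
import numpy as np
from scipy.fft import dctn, idctn

def signature_saliency(img):
    """Single-scale frequency-domain saliency sketch: discard the DCT
    magnitudes, keep only the coefficient signs, invert, and square.
    `img` is a 2-D grayscale array; the output is scaled to [0, 1]."""
    coeffs = dctn(img, norm="ortho")
    recon = idctn(np.sign(coeffs), norm="ortho")
    sal = recon ** 2
    return sal / sal.max() if sal.max() > 0 else sal
```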


2019 ◽  
Vol 11 (18) ◽  
pp. 2184 ◽  
Author(s):  
Baik ◽  
Son ◽  
Kim

On 15 November 2017, liquefaction phenomena were observed around the epicenter after a magnitude 5.4 earthquake occurred in Pohang in southeast Korea. In this study, we attempted to detect areas of sudden water-content increase by using SAR (synthetic aperture radar) and optical satellite images. We analyzed coherence changes using Sentinel-1 SAR coseismic image pairs and analyzed NDWI (normalized difference water index) changes using Landsat 8 and Sentinel-2 optical satellite images from before and after the earthquake. Coherence analysis showed no liquefaction-induced surface changes. The NDWI time series analysis using Landsat 8 and Sentinel-2 optical images confirmed liquefaction phenomena close to the epicenter but could not detect them farther away. We proposed and evaluated the TDLI (temporal difference liquefaction index), which uses only one SWIR (short-wave infrared) band at 2200 nm, a wavelength sensitive to soil moisture content. The Sentinel-2 TDLI was most consistent with field observations where sand blows from liquefaction were confirmed. We found that Sentinel-2, with its shorter revisit period compared to that of Landsat 8 (5 days vs. 16 days), was more effective for detecting traces of short-lived liquefaction phenomena on the surface. The Sentinel-2 TDLI could help facilitate rapid investigation of and response to liquefaction damage.
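The NDWI below follows the standard McFeeters form; since the paper's exact TDLI normalization is not reproduced here, the sketch uses a plain pre/post difference of the 2200 nm SWIR band as an assumption (wetter soil lowers SWIR reflectance, so a positive difference flags a water-content increase):

```python
import numpy as np

def ndwi(green, nir):
    """Normalized difference water index (McFeeters form)."""
    return (green - nir) / (green + nir)

def tdli(swir_pre, swir_post):
    """Temporal-difference liquefaction index sketch: the drop in the
    2200 nm SWIR reflectance after the event. A plain difference is an
    assumption; the paper's exact formulation may differ."""
    return swir_pre - swir_post
```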


2020 ◽  
Vol 7 (2) ◽  
pp. 1009-1025
Author(s):  
Gayane Faye ◽  
Mamadou Mbaye ◽  
Modou Mbaye ◽  
Abdou Kâ Diongue

Agricultural monitoring has become an absolute necessity in the Sahel countries, especially with climate change, which constitutes a real threat to this sector. The aim of this work is to develop a methodology for identifying crops and mapping agricultural areas using Sentinel-2 data from the Copernicus program; specifically, it consists in discriminating millet, maize, and peanut crops, and in analysing the scientific and technical obstacles related to this problem. For this, we carried out a mathematical analysis of optical satellite images. Images of high temporal and spatial resolution (10 m to 60 m) from the Sentinel-2 sensors were used in this work. This unique data set, coupled with field data, made it possible to carry out a diagnosis of land cover and cultivated land surfaces, and to evaluate the contribution of this type of data to crop forecasting.
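As an illustration of the kind of discrimination involved, the sketch below matches a pixel's NDVI time series against crop reference profiles by nearest mean; both the profiles and the nearest-neighbour rule are hypothetical stand-ins for whatever classifier and field data the study actually used:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index, e.g. from Sentinel-2
    bands B4 (red) and B8 (NIR)."""
    return (nir - red) / (nir + red)

def classify_by_profile(pixel_series, reference_profiles):
    """Assign a pixel's NDVI time series to the closest crop profile
    (e.g. millet, maize, or peanut) by Euclidean distance."""
    names = list(reference_profiles)
    dists = [np.linalg.norm(pixel_series - reference_profiles[n]) for n in names]
    return names[int(np.argmin(dists))]
```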


2012 ◽  
Vol E95.B (5) ◽  
pp. 1890-1893
Author(s):  
Wang LUO ◽  
Hongliang LI ◽  
Guanghui LIU ◽  
Guan GUI

2021 ◽  
Vol 11 (14) ◽  
pp. 6269
Author(s):  
Wang Jing ◽  
Wang Leqi ◽  
Han Yanling ◽  
Zhang Yun ◽  
Zhou Ruyan

For the fast detection and recognition of apple fruit targets, this paper provides an algorithmic basis for the practical application and promotion of apple-picking robots, based on the real-time DeepSnake deep-learning instance segmentation model. Since the initial detection results have an important impact on the subsequent edge prediction, an automatic detection method for apple fruit targets in natural environments is proposed, based on saliency detection and traditional color-difference methods. Combined with the original image, the histogram backprojection algorithm is used to further optimize the salient image results. In view of possible overlapping fruit regions in the saliency map, a dynamic adaptive overlapping-target separation algorithm is proposed to locate each single target fruit and determine the initial contour for DeepSnake. Finally, the target fruit is labeled based on the instance segmentation results. In the experiment, 300 training samples were used to train the DeepSnake model, and a self-built dataset containing 1036 pictures of apples in various situations under natural environments was tested. The detection accuracies for non-overlapping shaded fruits, overlapping fruits, fruits shaded by branches and leaves, and poor illumination conditions were 99.12%, 94.78%, 90.71%, and 94.46%, respectively. The comprehensive detection accuracy was 95.66%, and the average processing time was 0.42 s over the 1036 test images, showing that the proposed algorithm can effectively separate overlapping fruits from a relatively small set of training samples and achieve rapid, accurate detection of apple targets.
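Histogram backprojection on a hue channel can be sketched in plain NumPy as below, as a stand-in for OpenCV's calcBackProject; the bin count and the use of hue alone are illustrative assumptions:

```python
import numpy as np

def backproject(hue_img, model_hues, bins=32):
    """1-D histogram backprojection: each pixel is replaced by the
    normalized frequency of its hue among the model (apple-colored)
    samples, re-weighting the saliency map toward apple colors.
    Hues are assumed to be scaled to [0, 1]."""
    hist, _ = np.histogram(model_hues, bins=bins, range=(0.0, 1.0))
    hist = hist / hist.max()
    idx = np.clip((hue_img * bins).astype(int), 0, bins - 1)
    return hist[idx]
```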


Author(s):  
Samuel Humphries ◽  
Trevor Parker ◽  
Bryan Jonas ◽  
Bryan Adams ◽  
Nicholas J Clark

Quick identification of buildings and roads is critical for the execution of tactical US military operations in an urban environment. To this end, a gridded, referenced satellite image of an objective, often referred to as a gridded reference graphic or GRG, has become a standard product developed during intelligence preparation of the environment. At present, operational units identify key infrastructure by hand through the work of individual intelligence officers. Recent advances in convolutional neural networks, however, allow this process to be streamlined through the use of object detection algorithms. In this paper, we describe an object detection algorithm designed to quickly identify and label both buildings and road intersections present in an image. Our work leverages both the U-Net architecture and the SpaceNet data corpus to produce an algorithm that accurately identifies a large breadth of buildings and different types of roads. In addition to predicting buildings and roads, our model numerically labels each building by means of a contour-finding algorithm. Most importantly, the dual U-Net model is capable of predicting buildings and roads on a diverse set of test images and using these predictions to produce clean GRGs.
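The building-numbering step can be approximated with connected-component labelling of the predicted building mask; this is a simple stand-in for the paper's contour-finding algorithm:

```python
import numpy as np
from scipy.ndimage import label

def number_buildings(building_mask):
    """Give each predicted building footprint a unique integer label,
    the role the contour-finding step plays when annotating a GRG.
    `building_mask` is a 2-D boolean array from the segmentation model;
    returns the labelled array and the number of buildings found."""
    labelled, count = label(building_mask)
    return labelled, count
```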

