DEVELOPMENT OF AN OPEN SOURCE, MACHINE LEARNING BASED TOOLSET FOR THE IDENTIFICATION OF DIKES IN SATELLITE IMAGES THROUGH SEMANTIC SEGMENTATION

2020
Author(s): Ryan Gray, Tushar Mittal
2020, Vol 12 (21), pp. 3555
Author(s): Manu Tom, Rajanie Prabha, Tianyu Wu, Emmanuel Baltsavias, Laura Leal-Taixé, ...

Continuous observation of climate indicators, such as trends in lake freezing, is important for understanding the dynamics of the local and global climate system. Consequently, lake ice has been included among the Essential Climate Variables (ECVs) of the Global Climate Observing System (GCOS), and there is a need to set up operational monitoring capabilities. Multi-temporal satellite images and publicly available webcam streams are among the viable data sources capable of monitoring lake ice. In this work, we investigate machine learning-based image analysis as a tool to determine the spatio-temporal extent of ice on Swiss Alpine lakes as well as the ice-on and ice-off dates, from both multispectral optical satellite images (VIIRS and MODIS) and RGB webcam images. We model lake ice monitoring as a pixel-wise semantic segmentation problem, i.e., each pixel on the lake surface is classified to obtain a spatially explicit map of ice cover. We show experimentally that the proposed system produces consistently good results when tested on data from multiple winters and lakes. Our satellite-based method obtains mean Intersection-over-Union (mIoU) scores above 93% for both sensors. It also generalises well across lakes and winters, with mIoU scores above 78% and 80%, respectively. On average, our webcam approach achieves mIoU values of ≈87% and generalisation scores of ≈71% and ≈69% across different cameras and winters, respectively. Additionally, we generate and make available a new benchmark dataset of webcam images (Photi-LakeIce), which includes data from two winters and three cameras.
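
The study above reports segmentation quality as mean Intersection-over-Union (mIoU). As an illustration of how that metric is computed for a pixel-wise label map, the following minimal Python sketch (not the authors' code; the class indices and map sizes are purely illustrative) derives per-class IoU and mIoU from a predicted and a reference raster.

import numpy as np

def mean_iou(pred, target, num_classes):
    """Per-class IoU and mean IoU for two integer label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:                      # class absent in both maps: skip it
            ious.append(np.nan)
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    ious = np.array(ious, dtype=float)
    return ious, float(np.nanmean(ious))

# Toy usage with random 256 x 256 label maps and four hypothetical classes
# (0 = background, 1 = water, 2 = ice, 3 = clouds).
rng = np.random.default_rng(0)
pred = rng.integers(0, 4, size=(256, 256))
target = rng.integers(0, 4, size=(256, 256))
per_class, miou = mean_iou(pred, target, num_classes=4)
print(per_class, miou)

Because the mean is taken over classes rather than pixels, rare classes weigh as much as dominant ones, which makes mIoU a stricter score than plain pixel accuracy.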


2021, Vol 63 (12), pp. 950-958
Author(s): L. P. Bass, Yu. A. Plastinin, I. Yu. Skryabysheva

2021, Vol 11 (1)
Author(s): Rajat Garg, Anil Kumar, Nikunj Bansal, Manish Prateek, Shashi Kumar

Urban area mapping is an important application of remote sensing that aims at estimating both the extent of, and changes in, land cover within urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) remote sensing data is the strong similarity of highly vegetated urban areas and oriented urban targets to actual vegetation. This similarity leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), have been implemented along with the deep learning model DeepLabv3+ for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for training deep learning algorithms from scratch. In the current work, it has been shown that the pre-trained deep learning model DeepLabv3+ outperforms the machine learning algorithms for the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and overall pixel accuracy of 85.65% have been achieved with DeepLabv3+; Random Forest performs best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trail with overall accuracies of 77.01% and 76.47%, respectively. The highest precision of 0.9228 is recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF give comparable results with precisions of 0.8977 and 0.8958, respectively.
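
The central claim above is that a pre-trained DeepLabv3+ can be fine-tuned on a small PolSAR dataset via transfer learning and still outperform the classical baselines. The sketch below illustrates such fine-tuning; it is not the authors' pipeline. It assumes the segmentation_models_pytorch package and a hypothetical data loader, and the class list, encoder choice, and input representation (e.g., a three-channel Pauli decomposition composite) are illustrative assumptions.

import torch
import segmentation_models_pytorch as smp
from torch.utils.data import DataLoader

NUM_CLASSES = 4  # e.g., urban, vegetation, water, bare soil (illustrative labels)

# DeepLabv3+ with an ImageNet-pretrained ResNet-34 encoder; only the task head is new.
model = smp.DeepLabV3Plus(
    encoder_name="resnet34",
    encoder_weights="imagenet",   # transfer learning from ImageNet features
    in_channels=3,                # e.g., a 3-channel Pauli RGB composite of PolSAR data
    classes=NUM_CLASSES,
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_one_epoch(loader: DataLoader) -> float:
    """Fine-tune the whole network for one epoch; returns the mean loss."""
    model.train()
    total = 0.0
    for images, labels in loader:     # images: (B, 3, H, W), labels: (B, H, W) long
        optimizer.zero_grad()
        logits = model(images)        # (B, NUM_CLASSES, H, W)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / max(len(loader), 1)

The point of the sketch is that the pretrained encoder already supplies generic image features, so the small labeled SAR dataset only has to adapt them, which mirrors the transfer-learning argument made in the abstract.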


2021, pp. 100057
Author(s): Peiran Li, Haoran Zhang, Zhiling Guo, Suxing Lyu, Jinyu Chen, ...

Author(s): Md. Saif Hassan Onim, Aiman Rafeed Bin Ehtesham, Amreen Anbar, A. K. M. Nazrul Islam, A. K. M. Mahbubur Rahman

2020, Vol 53 (5), pp. 704-709
Author(s): Yan Liu, Zhijing Ling, Boyu Huo, Boqian Wang, Tianen Chen, ...

Forests, 2021, Vol 12 (1), pp. 66
Author(s): Kirill A. Korznikov, Dmitry E. Kislov, Jan Altman, Jiří Doležal, Anna S. Vozmishcheva, ...

Very high resolution satellite imageries provide an excellent foundation for precise mapping of plant communities and even single plants. We aim to perform individual tree recognition on the basis of very high resolution RGB (red, green, blue) satellite images using deep learning approaches for northern temperate mixed forests in the Primorsky Region of the Russian Far East. We used a pansharpened satellite RGB image by GeoEye-1 with a spatial resolution of 0.46 m/pixel, obtained in late April 2019. We parametrized the standard U-Net convolutional neural network (CNN) and trained it in manually delineated satellite images to solve the satellite image segmentation problem. For comparison purposes, we also applied standard pixel-based classification algorithms, such as random forest, k-nearest neighbor classifier, naive Bayes classifier, and quadratic discrimination. Pattern-specific features based on grey level co-occurrence matrices (GLCM) were computed to improve the recognition ability of standard machine learning methods. The U-Net-like CNN allowed us to obtain precise recognition of Mongolian poplar (Populus suaveolens Fisch. ex Loudon s.l.) and evergreen coniferous trees (Abies holophylla Maxim., Pinus koraiensis Siebold & Zucc.). We were able to distinguish species belonging to either poplar or coniferous groups but were unable to separate species within the same group (i.e. A. holophylla and P. koraiensis were not distinguishable). The accuracy of recognition was estimated by several metrics and exceeded values obtained for standard machine learning approaches. In contrast to pixel-based recognition algorithms, the U-Net-like CNN does not lead to an increase in false-positive decisions when facing green-colored objects that are similar to trees. By means of U-Net-like CNN, we obtained a mean accuracy score of up to 0.96 in our computational experiments. The U-Net-like CNN recognizes tree crowns not as a set of pixels with known RGB intensities but as spatial objects with a specific geometry and pattern. This CNN’s specific feature excludes misclassifications related to objects of similar colors as objects of interest. We highlight that utilization of satellite images obtained within the suitable phenological season is of high importance for successful tree recognition. The suitability of the phenological season is conceptualized as a group of conditions providing highlighting objects of interest over other components of vegetation cover. In our case, the use of satellite images captured in mid-spring allowed us to recognize evergreen fir and pine trees as the first class of objects (“conifers”) and poplars as the second class, which were in a leafless state among other deciduous tree species.
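
To make the comparison between the U-Net-like CNN and the pixel-based baselines concrete, the following minimal sketch shows how GLCM texture features can be extracted from grayscale image patches and fed to a random forest classifier. It assumes scikit-image and scikit-learn (not necessarily the tooling used in the study), and the patch size, GLCM distances and angles, and class labels are illustrative.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

PROPS = ("contrast", "homogeneity", "energy", "correlation")

def glcm_features(gray_patch: np.ndarray) -> np.ndarray:
    """GLCM texture statistics for one 8-bit grayscale patch (e.g., 32 x 32 pixels)."""
    glcm = graycomatrix(
        gray_patch,
        distances=[1, 2],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    # One value per (distance, angle) pair for each texture property.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

# Toy usage: random patches stand in for real image patches and labels.
rng = np.random.default_rng(42)
patches = rng.integers(0, 256, size=(100, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 3, size=100)          # e.g., poplar / conifer / other
X = np.stack([glcm_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))

In a pixel- or patch-based setup like this, each sample is classified from local statistics alone, which is exactly why green objects with tree-like color or texture can trigger the false positives that the object-level CNN avoids.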

