On the possibility of using side-looking radars to study the homogeneity of aerospace test plots

2017 ◽  
Vol 919 (1) ◽  
pp. 45-47
Author(s):  
I.D. Abdurrahmanova

One of the major properties of territories designated for the construction of aerospace test plots is homogeneity. To control the homogeneity of test plots, the Getis statistic is widely used as a local indicator: if a pixel and its neighbours have similarly high values, the Getis statistic takes a high value, and vice versa a low one. The article considers the reception of the radar signal in a plane-parallel model of a meteorological object; the surface of a test plot may be treated as such a model. The condition for using a side-looking radar to obtain remote sensing data on a rectangular grid, for checking the homogeneity of a test plot with the Getis statistic, is determined. It is shown that by selecting an appropriate slope angle, information from the test plot can be received at equal steps. In this case, the homogeneity of the test plot can be assessed with the Getis statistic on the basis of remote sensing data obtained on a rectangular grid. The possibility of using side-looking radars to obtain remote sensing data on a rectangular grid over the test plot surface, for subsequent application of the Getis statistic, is investigated.
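The local Getis-Ord Gi* statistic referred to above can be sketched for a regular pixel grid as follows. This is a minimal NumPy implementation with binary weights over a square moving window that includes the focal pixel; the window radius and the handling of edge pixels are illustrative assumptions, not taken from the article:

```python
import numpy as np

def getis_ord_gstar(grid, radius=1):
    """Local Getis-Ord Gi* for each cell of a regular 2-D grid.

    Large positive values flag clusters of high pixels; strongly
    negative values flag clusters of low pixels. Binary weights:
    every cell in the (2*radius+1)^2 window, self included.
    """
    x = np.asarray(grid, dtype=float)
    n = x.size
    xbar = x.mean()
    s = x.std()  # population standard deviation, as in the Gi* definition
    rows, cols = x.shape
    gstar = np.zeros_like(x)
    for i in range(rows):
        for j in range(cols):
            # clip the window at the grid edges
            r0, r1 = max(0, i - radius), min(rows, i + radius + 1)
            c0, c1 = max(0, j - radius), min(cols, j + radius + 1)
            window = x[r0:r1, c0:c1]
            wsum = window.size  # sum of the binary weights
            num = window.sum() - xbar * wsum
            den = s * np.sqrt((n * wsum - wsum**2) / (n - 1))
            gstar[i, j] = num / den
    return gstar
```

On a homogeneous test plot the |Gi*| values stay small everywhere; a cluster of anomalous pixels produces a pronounced local peak, which is what makes the statistic usable as a homogeneity check.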

2017 ◽  
Vol 21 (9) ◽  
pp. 4747-4765 ◽  
Author(s):  
Clara Linés ◽  
Micha Werner ◽  
Wim Bastiaanssen

Abstract. The implementation of drought management plans contributes to reducing the wide range of adverse impacts caused by water shortage. A crucial element of the development of drought management plans is the selection of appropriate indicators and their associated thresholds to detect drought events and monitor their evolution. Drought indicators should be able to detect emerging drought processes that will lead to impacts with sufficient anticipation to allow measures to be undertaken effectively. However, in the selection of appropriate drought indicators, the connection to the final impacts is often disregarded. This paper explores the utility of remotely sensed data sets to detect early stages of drought at the river basin scale and to determine how much time can be gained to inform operational land and water management practices. Six remote sensing data sets with different spectral origins and measurement frequencies are considered, complemented by a group of classical in situ hydrologic indicators. Their predictive power to detect past drought events is tested in the Ebro Basin. Qualitative (binary information based on media records) and quantitative (crop yields) data on drought events and impacts spanning a period of 12 years are used as a benchmark in the analysis. Results show that early signs of drought impacts can be detected up to 6 months before impacts are reported in newspapers, with the best correlation-anticipation relationships for the standardised precipitation index (SPI), the normalised difference vegetation index (NDVI) and evapotranspiration (ET). Soil moisture (SM) and land surface temperature (LST) also offer good anticipation but with weaker correlations, while gross primary production (GPP) presents moderate positive correlations only for some of the rain-fed areas.
Although classical hydrological information from water levels and water flows provided better anticipation than remote sensing indicators in most of the areas, its correlations were weaker. The indicators show consistent behaviour with respect to the different levels of crop yield in rain-fed areas among the analysed years, with SPI, NDVI and ET again providing the strongest correlations. Overall, the results confirm the ability of remote sensing products to anticipate reported drought impacts; they therefore appear to be a useful source of information to support drought management decisions.
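The correlation-anticipation idea in this abstract can be sketched as a lagged-correlation search: shift a monthly indicator series forward in time and find the lead time at which it best correlates with the impact series. This is a simplified illustration of the concept, not the authors' exact methodology:

```python
import numpy as np

def best_anticipation(indicator, impacts, max_lag=6):
    """Correlate a monthly drought indicator with an impact series at
    increasing lead times and return (best_lag, r_at_best_lag).

    A lag of k compares the indicator k months earlier with the
    impacts of the current month; the strongest |r| wins.
    """
    indicator = np.asarray(indicator, dtype=float)
    impacts = np.asarray(impacts, dtype=float)
    best = (0, 0.0)
    for lag in range(max_lag + 1):
        if lag == 0:
            a, b = indicator, impacts
        else:
            a, b = indicator[:-lag], impacts[lag:]
        r = np.corrcoef(a, b)[0, 1]  # Pearson correlation at this lead time
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best
```

With a qualitative (binary media-report) impact series the same loop applies; the correlation then behaves like a point-biserial coefficient.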


PeerJ ◽  
2019 ◽  
Vol 6 ◽  
pp. e6227 ◽  
Author(s):  
Michele Dalponte ◽  
Lorenzo Frizzera ◽  
Damiano Gianelle

An international data science challenge, the National Ecological Observatory Network-National Institute of Standards and Technology data science evaluation, was set up in autumn 2017 with the goal of improving the use of remote sensing data in ecological applications. The competition was divided into three tasks: (1) individual tree crown (ITC) delineation, to identify the location and size of individual trees; (2) alignment between field-surveyed trees and ITCs delineated on remote sensing data; and (3) tree species classification. In this paper, the methods and results of team Fondazione Edmund Mach (FEM) are presented. The ITC delineation (Task 1) was done using a region-growing method applied to a near-infrared band of the hyperspectral images. The parameters of the delineation algorithm were optimized in a supervised way, on the basis of the Jaccard score, using the training set provided by the organizers. The alignment (Task 2) between the delineated ITCs and the field-surveyed trees was done using the Euclidean distance over the position, height, and crown radius of the ITCs and the field-surveyed trees. The classification (Task 3) was performed using a support vector machine classifier applied to a selection of the hyperspectral bands and the canopy height model; the bands were selected using the sequential forward floating selection method and the Jeffries-Matusita distance. The results of the three tasks were very promising: team FEM ranked first in Tasks 1 and 2 and second in Task 3. The Jaccard score of the delineated crowns was 0.3402, and the results showed that the proposed approach delineated both small and large crowns. The alignment was done correctly for all the test samples.
The classification results were good (overall accuracy of 88.1%, kappa of 75.7%, and mean class accuracy of 61.5%), although the accuracy was biased toward the most represented species.
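The Jaccard score used to optimize and rank the delineation is simply intersection-over-union. A minimal sketch on rasterized crown masks (the boolean-mask representation is an assumption; the challenge could equally score vector polygons):

```python
import numpy as np

def jaccard_score(pred_mask, ref_mask):
    """Intersection-over-union of a delineated crown mask and a
    reference crown mask (both boolean arrays of the same shape)."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    # Two empty masks trivially agree; avoid 0/0
    return intersection / union if union else 1.0
```

A score of 1.0 means the delineated crown matches the reference exactly; the team's reported 0.3402 is averaged over many crowns, where partial overlaps and missed or spurious crowns pull the mean down.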


2021 ◽  
Vol 4 (1) ◽  
pp. 10-19
Author(s):  
Alexey V. Kutuzov

Waterfalls are specific hydrological and hydrobiological objects that often act as natural obstacles to the dispersal of aquatic animal species, resulting in discontinuous ranges of these species. Waterfalls and rapids create special habitats for riverine aquatic organisms and cause microclimatic changes along the banks. The areas of the largest waterfalls in Ethiopia, the high-mountain Jinbar Waterfall and the low-mountain Blue Nile Falls (Tis Abay, Tis Issat), were considered as model sites. Up-to-date remote sensing and GIS techniques for processing and storing satellite and field data make it possible to identify new waterfalls and rapids and to correct and supplement the existing literature and cartographic data. Earth remote sensing data from the modern Sentinel-2 satellite, designed to monitor the state of the environment, as well as radar satellite imagery data (SRTM), were mainly used. Based on the analysis of cartographic materials and remote sensing data, the locations of a number of large waterfalls and rapids on the rivers of the Ethiopian Highlands were determined, and the parameters for the selection of remote sensing data were established. Images with a spatial resolution of 10-15 m/pixel and higher are suitable for detecting significant waterfalls (more than 30 m wide). According to the present study, waterfall zones can be identified by GIS analysis of topographic maps at a scale of 1:200,000 and larger, as well as from satellite topographic data.
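One simple way to screen SRTM-type elevation data for waterfall candidates is to walk a river's longitudinal profile and flag abrupt drops. The sketch below assumes elevations already sampled along the channel at a fixed spacing (roughly the ~30 m SRTM posting); the drop and window thresholds are illustrative, not values from the study:

```python
import numpy as np

def find_waterfall_candidates(profile_elev, spacing_m=30.0,
                              min_drop_m=10.0, window_m=90.0):
    """Flag indices along a downstream river elevation profile where
    the bed drops by at least `min_drop_m` within `window_m`.

    `profile_elev` is ordered from upstream to downstream.
    """
    z = np.asarray(profile_elev, dtype=float)
    w = max(1, int(round(window_m / spacing_m)))  # window in samples
    candidates = []
    for i in range(len(z) - w):
        drop = z[i] - z[i + w]  # positive when flowing downhill
        if drop >= min_drop_m:
            candidates.append(i)
    return candidates
```

Consecutive flagged indices belong to the same knickpoint and would be merged into one candidate zone before visual confirmation on optical imagery.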



2016 ◽  
Vol 31 (9) ◽  
pp. 1919-1937 ◽  
Author(s):  
Nica Huber ◽  
Felix Kienast ◽  
Christian Ginzler ◽  
Gilberto Pasinelli

2021 ◽  
Vol 13 (1) ◽  
pp. 155
Author(s):  
Dmitry I. Rukhovich ◽  
Polina V. Koroleva ◽  
Danila D. Rukhovich ◽  
Natalia V. Kalinina

Soil degradation processes are widespread on agricultural land. Ground-based methods for detecting degradation require much labor and time. Remote methods based on the analysis of vegetation indices can significantly reduce the volume of ground surveys. Currently, machine learning methods are increasingly used to analyze remote sensing data. In this paper, the task is to apply deep machine learning and vegetation-index calculation methods to automate the detection of developing soil degradation on arable land. In the course of the work, a method was developed for locating degraded areas of the soil cover on arable fields. The method is based on multi-temporal remote sensing data, with the selection of suitable scenes performed by deep machine learning. The deep learning was based on an analysis of 1028 scenes from Landsat 4, 5, 7 and 8 over 530 agricultural fields, covering the period from 1984 to 2019. The dataset was created manually for each "Landsat scene"/"agricultural field number" pair (for each agricultural field, the suitability of each Landsat scene was assessed). Areas of soil degradation were calculated from the frequency of occurrence of low NDVI values over 35 years. Low NDVI values were calculated separately for each suitable fragment of the satellite image within the boundaries of each agricultural field: NDVI values in the one-third of the field area that was lower than the other two-thirds were considered low. During testing, the method gave 12.5% type I errors (false positives) and 3.8% type II errors (false negatives). Independent verification of the method was carried out on six agricultural fields covering 713.3 hectares; humus content and the thickness of the humus horizon were determined at 42 ground-based points. In the arable-land degradation areas identified by the proposed method, the probability of detecting soil degradation by field methods was 87.5%.
The probability of detecting soil degradation by ground-based methods outside the predicted regions was 3.8%. The results indicate that deep machine learning is feasible for remote sensing data selection based on a binary dataset. This eliminates the need for intermediate filtering systems in the selection of satellite imagery (detection of clouds, cloud shadows, open soil surface, etc.): Landsat scenes suitable for the calculations are selected directly, which allows the construction of soil degradation maps to be automated.
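The frequency-of-low-NDVI rule above can be sketched per field: in each suitable scene, flag the one-third of the field area with the lowest NDVI, then mark pixels that are flagged in a large share of scenes. The 0.5 frequency threshold below is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def degradation_map(ndvi_stack, freq_threshold=0.5):
    """Candidate degraded pixels for one field from a stack of suitable
    scenes, shape (n_scenes, rows, cols).

    In each scene the lowest third of the field's pixels (by NDVI) is
    flagged low; pixels flagged in more than `freq_threshold` of the
    scenes are returned as candidate degraded areas.
    """
    stack = np.asarray(ndvi_stack, dtype=float)
    n_scenes = stack.shape[0]
    low_counts = np.zeros(stack.shape[1:], dtype=int)
    k = stack.shape[1] * stack.shape[2] // 3  # pixel count of the lowest third
    for scene in stack:
        cutoff = np.partition(scene.ravel(), k)[k]  # k-th smallest NDVI
        low_counts += scene < cutoff  # ties at the cutoff are left out
    freq = low_counts / n_scenes
    return freq > freq_threshold
```

In practice the comparison would be restricted to pixels inside the field boundary (e.g. via a mask) rather than the full rectangle shown here.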

