STABILIZATION OF SENTINEL-1 SAR TIME-SERIES USING CLIMATE AND FOREST STRUCTURE DATA FOR EARLY TROPICAL DEFORESTATION DETECTION

Author(s):  
J. Doblas ◽  
A. Carneiro ◽  
Y. Shimabukuro ◽  
S. Sant’Anna ◽  
L. Aragão ◽  
...  

Abstract. In this study we analyse the factors of variability of Sentinel-1 C-band radar backscattering over tropical rainforests, and propose a method to reduce the effects of this variability on deforestation detection algorithms. To do so, we developed a random forest regression model that relates Sentinel-1 gamma nought values to local climatological data and forest structure information. The model was trained using long time-series of 26 relevant variables, sampled over six undisturbed tropical forest areas. The resulting model explained 71.64% and 73.28% of the SAR signal variability for the VV and VH polarizations, respectively. Once the best model for each polarization was selected, it was used to stabilize extracted pixel-level data of deforested and non-deforested areas, which resulted in a 10 to 14% reduction of time-series variability, in terms of standard deviation. Then a statistically robust deforestation detection algorithm was applied to the stabilized time-series. The results show that the proposed method reduced the rate of false positives on both polarizations, especially on VV (from 21% to 2%, α=0.01). Meanwhile, the omission errors increased on both polarizations (from 27% to 37% in VV and from 27% to 33% in VH, α=0.01). The proposed method yielded slightly better results when compared with an alternative state-of-the-art approach (spatial normalization).
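A minimal sketch of the stabilization idea with scikit-learn, assuming a hypothetical sample table and placeholder predictor names (the paper's actual 26 variables are not reproduced here):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training table: one row per date/site sample over stable forest
df = pd.read_csv("s1_forest_samples.csv")
features = ["precip_3d", "soil_moisture", "canopy_height", "doy_sin"]  # placeholders
X, y = df[features].to_numpy(), df["gamma0_vh"].to_numpy()

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print(f"OOB R^2: {rf.oob_score_:.2%}")  # the paper reports ~73% explained for VH

# Stabilization: subtract the climate/structure-explained anomaly,
# keeping the long-term mean of the series intact
pred = rf.predict(X)
stabilized = y - (pred - pred.mean())
print(f"std before/after: {y.std():.3f} / {stabilized.std():.3f}")
```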

2019 ◽  
Vol 623 ◽  
pp. A39 ◽  
Author(s):  
Michael Hippke ◽  
René Heller

We present a new method to detect planetary transits from time-series photometry, the transit least squares (TLS) algorithm. TLS searches for transit-like features while taking the stellar limb darkening and planetary ingress and egress into account. We have optimized TLS for both signal detection efficiency (SDE) of small planets and computational speed. TLS analyses the entire, unbinned phase-folded light curve. We compensated for the higher computational load by (i) using algorithms such as “Mergesort” (for the trial orbital phases) and by (ii) restricting the trial transit durations to a smaller range that encompasses all known planets, and using stellar density priors where available. A typical K2 light curve, including 80 d of observations at a cadence of 30 min, can be searched with TLS in ∼10 s real time on a standard laptop computer, as fast as the widely used box least squares (BLS) algorithm. We perform a transit injection-retrieval experiment of Earth-sized planets around sun-like stars using synthetic light curves with 110 ppm white noise per 30 min cadence, corresponding to a photometrically quiet KP = 12 star observed with Kepler. We determine the SDE thresholds for both BLS and TLS to reach a false positive rate of 1% to be SDE = 7 in both cases. The resulting true positive (or recovery) rates are ∼93% for TLS and ∼76% for BLS, implying more reliable detections with TLS. We also test TLS with the K2 light curve of the TRAPPIST-1 system and find six of seven Earth-sized planets using an iterative search for increasingly lower signal detection efficiency, the phase-folded transit of the seventh planet being affected by a stellar flare. TLS is more reliable than BLS in finding any kind of transiting planet, but it is particularly suited for the detection of small planets in long time series from Kepler, TESS, and PLATO. We make our Python implementation of TLS publicly available.
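A minimal usage sketch of the publicly released Python package (transitleastsquares), run here on a synthetic light curve matching the noise level quoted above; attribute names follow the released API but may differ between versions:

```python
import numpy as np
from transitleastsquares import transitleastsquares

# Synthetic stand-in for a K2-like light curve: 80 d at 30 min cadence,
# 110 ppm white noise (no injected transit in this toy example)
rng = np.random.default_rng(0)
t = np.linspace(0, 80, 80 * 48)
y = 1 + 110e-6 * rng.standard_normal(t.size)

model = transitleastsquares(t, y)
results = model.power()  # stellar density priors can be passed here if known

print(f"SDE = {results.SDE:.1f}, best period = {results.period:.3f} d")
# An SDE threshold of 7 corresponds to a ~1% false positive rate in the paper
```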


2021 ◽  
Vol 13 (4) ◽  
pp. 798
Author(s):  
Moritz Bruggisser ◽  
Wouter Dorigo ◽  
Alena Dostálová ◽  
Markus Hollaus ◽  
Claudio Navacchi ◽  
...  

With the increasing occurrence of forest fires in the mid-latitudes and the alpine region, fire risk assessments become important in these regions. Fuel assessments involve the collection of information on forest structure as, e.g., the stand height or the stand density. The potential of airborne laser scanning (ALS) to provide accurate forest structure information has been demonstrated in several studies. Yet, flight acquisitions at the state level are carried out at intervals of typically five to ten years in Central Europe, which often makes the information outdated. The Sentinel-1 (S-1) synthetic aperture radar mission provides freely accessible earth observation (EO) data with short revisit times of 6 days. Forest structure information derived from this data source could, therefore, be used to update the respective ALS descriptors. In our study, we investigated the potential of S-1 time series to derive stand height and fractional cover, which is a measure of the stand density, over a temperate deciduous forest in Austria. A random forest (RF) model was used for this task, which was trained using ALS-derived forest structure parameters from 2018. The comparison of the estimated mean stand height from S-1 time series with the ALS-derived stand height shows a root mean square error (RMSE) of 4.76 m and a bias of 0.09 m at a 100 m cell size, while fractional cover can be retrieved with an RMSE of 0.08 and a bias of 0.0. However, the predictions reveal a tendency to underestimate stand height and fractional cover for high-growing stands and dense areas, respectively. The stratified selection of the training set, which we investigated in order to achieve a more homogeneous distribution of the metrics for training, mitigates the underestimation tendency to some degree, yet cannot fully eliminate it. We subsequently applied the trained model to S-1 time series from 2017 and 2019. The computed difference between the predictions suggests that large decreases in forest height over this two-year interval are captured by our RF model, while inter-annual forest growth cannot be resolved. The spatial patterns of the predicted forest height, however, are similar for both years (Pearson’s R = 0.89). Therefore, we consider that S-1 time series in combination with machine learning techniques can be applied for the derivation of forest structure information in an operational way.
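A compact sketch of the regression step, including a stratified training selection like the one discussed above; the table, column names, and binning are illustrative assumptions, not the study's exact setup:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical table of 100 m grid cells: S-1 temporal statistics as predictors,
# ALS-derived stand height (2018) as the reference target
df = pd.read_csv("s1_als_cells.csv")
X = df.filter(like="s1_").to_numpy()
y = df["als_stand_height"].to_numpy()

# Stratified selection: bin the target so tall stands are not under-represented
bins = np.digitize(y, np.quantile(y, [0.25, 0.5, 0.75]))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=bins, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
rmse = float(np.sqrt(np.mean((rf.predict(X_te) - y_te) ** 2)))
print(f"stand height RMSE: {rmse:.2f} m")  # the study reports 4.76 m
```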


2013 ◽  
Vol 1 (3) ◽  
pp. 2455-2493 ◽  
Author(s):  
L. Bressan ◽  
F. Zaniboni ◽  
S. Tinti

Abstract. Coastal tide-gauges play a very important role in a Tsunami Warning System, since sea-level data are needed for a correct evaluation of the tsunami threat and the tsunami arrival has to be recognised as early as possible. Real-time tsunami detection algorithms serve this purpose. For an efficient detection they have to be calibrated and adapted to the specific local characteristics of the site where they are installed, which is easily done when the station has recorded a sufficiently large number of tsunamis. In this case the recorded database can be used to select the best set of parameters enhancing the discrimination power of the algorithm and minimizing the detection time. This chance is however rare, since most of the coastal tide-gauge stations, either historical or of new installation, have recorded only a few tsunamis in their lifetimes, if any. In this case calibration must be carried out by using synthetic tsunami signals, which poses the problem of how to generate them and how to use them. This paper investigates this issue and proposes a calibration approach by using as an example a specific case, that is the calibration of a real-time detection algorithm called TEDA for two stations, namely Tremestieri and Catania, in eastern Sicily, Italy, that have been recently installed in the frame of the Italian project TSUNET, aiming at improving the tsunami monitoring capacity in a region that is one of the most hazardous tsunami areas of Italy and of the Mediterranean.


2013 ◽  
Vol 13 (12) ◽  
pp. 3129-3144 ◽  
Author(s):  
L. Bressan ◽  
F. Zaniboni ◽  
S. Tinti

Abstract. Coastal tide gauges play a very important role in a tsunami warning system, since sea-level data are needed for a correct evaluation of the tsunami threat, and the tsunami arrival has to be recognized as early as possible. Real-time tsunami detection algorithms serve this purpose. For an efficient detection, they have to be calibrated and adapted to the specific local characteristics of the site where they are installed, which is easily done when the station has recorded a sufficiently large number of tsunamis. In this case the recorded database can be used to select the best set of parameters enhancing the discrimination power of the algorithm and minimizing the detection time. This chance is however rare, since most of the coastal tide-gauge stations, either historical or of new installation, have recorded only a few tsunamis in their lifetimes, if any. In this case calibration must be carried out by using synthetic tsunami signals, which poses the problem of how to generate them and how to use them. This paper investigates this issue and proposes a calibration approach by using as an example a specific case, which is the calibration of a real-time detection algorithm called TEDA (Tsunami Early Detection Algorithm) for two stations (namely Tremestieri and Catania) in eastern Sicily, Italy, which were recently installed in the frame of the Italian project TSUNET, aiming at improving the tsunami monitoring capacity in a region that is one of the most hazardous tsunami areas of Italy and of the Mediterranean.
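The TEDA filters themselves are not reproduced here; as a stand-in, the sketch below uses a generic short-term/long-term average detector to illustrate how injected synthetic signals can drive the calibration of thresholds and window lengths:

```python
import numpy as np

def trailing_mean(x, n):
    """Causal moving average: mean of the last n samples at each point."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    out = (c[n:] - c[:-n]) / n
    return np.concatenate([np.full(n - 1, out[0]), out])

def sta_lta_detect(level, fs, short_s=120, long_s=3600, k=3.0):
    """Flag samples whose short-term variability exceeds k x the long-term level."""
    rate = np.abs(np.diff(level, prepend=level[0]))
    sta = trailing_mean(rate, int(short_s * fs))
    lta = trailing_mean(rate, int(long_s * fs))
    return sta > k * np.maximum(lta, 1e-9)

fs = 1 / 15.0                                     # one sample every 15 s
t = np.arange(0, 48 * 3600, 1 / fs)
tide = 0.5 * np.sin(2 * np.pi * t / 44712)        # M2-like background tide
noise = 0.003 * np.random.default_rng(1).standard_normal(t.size)
synthetic = 0.5 * np.exp(-(((t - 30 * 3600) / 300.0) ** 2))  # injected tsunami pulse

hits = sta_lta_detect(tide + noise + synthetic, fs)
print("first alert at hour", round(t[hits.argmax()] / 3600, 2) if hits.any() else "none")
```

Sweeping k, short_s, and long_s over a set of such injected signals, and scoring detection time against false alarms on signal-free stretches, mirrors the calibration problem the paper addresses.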


2021 ◽  
Vol 9 ◽  
Author(s):  
Chen Li ◽  
Gaoqi Liang ◽  
Huan Zhao ◽  
Guo Chen

Event detection is an important application in demand-side management. Precise event detection algorithms can improve the accuracy of non-intrusive load monitoring (NILM) and energy disaggregation models. Existing event detection algorithms can be divided into four categories: rule-based, statistics-based, conventional machine learning, and deep learning. The rule-based approach entails hand-crafted feature engineering and carefully calibrated thresholds; the accuracies of statistics-based and conventional machine learning methods are inferior to those of deep learning algorithms due to their limited ability to extract complex features; and deep learning models require a long training time and are hard to interpret. This paper proposes a novel algorithm for load event detection in smart homes based on wide and deep learning that combines a convolutional neural network (CNN) and soft-max regression (SMR). The deep model extracts the power time-series patterns, while the wide model utilizes the percentile information of the power time series. A randomized sparse backpropagation (RSB) algorithm for weight filters is proposed to improve the robustness of the standard wide-deep model. Compared to the standard wide-deep, pure CNN, and SMR models, the hybrid wide-deep model powered by RSB demonstrates its superiority in terms of accuracy, convergence speed, and robustness.
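A minimal PyTorch sketch of the wide and deep combination described above (a CNN over the power window plus percentile features, fused before a soft-max output); the RSB training procedure is not reproduced, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class WideDeepEventDetector(nn.Module):
    def __init__(self, n_percentiles=9, n_classes=3):
        super().__init__()
        self.deep = nn.Sequential(                       # deep: CNN over the window
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + n_percentiles, n_classes)  # wide + deep fusion

    def forward(self, series, percentiles):
        # concatenate CNN features with the wide (percentile) features
        return self.head(torch.cat([self.deep(series), percentiles], dim=1))

x = torch.randn(8, 1, 128)                # batch of aggregate-power windows
p = torch.rand(8, 9)                      # e.g. the 10th..90th percentiles
logits = WideDeepEventDetector()(x, p)    # soft-max is applied inside the loss
print(logits.shape)                       # torch.Size([8, 3])
```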


2016 ◽  
Vol 16 (12) ◽  
pp. 2603-2622
Author(s):  
Jun-Whan Lee ◽  
Sun-Cheon Park ◽  
Duk Kee Lee ◽  
Jong Ho Lee

Abstract. Timely detection of tsunamis with water level records is a critical but logistically challenging task because of outliers and gaps. Since tsunami detection algorithms require several hours of past data, outliers can cause false alarms, and gaps can stall the tsunami detection algorithm even after recording has been restarted. In order to avoid such false alarms and time delays, we propose the Tsunami Arrival time Detection System (TADS), which can be applied to discontinuous time-series data with outliers. TADS consists of three algorithms, outlier removal, gap filling, and tsunami detection, which are designed to update whenever new data are acquired. After calibrating the thresholds and parameters for the Ulleung-do surge gauge located in the East Sea (Sea of Japan), Korea, we evaluated the performance of TADS on a 1-year dataset containing both historical and synthetic tsunamis. The results show that TADS is effective in detecting a tsunami signal superimposed on both outliers and gaps.
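A toy pandas sketch of the three TADS stages on a water-level series; the filters and thresholds below are generic stand-ins for the calibrated algorithms in the paper:

```python
import numpy as np
import pandas as pd

def tads_like(level: pd.Series) -> pd.Series:
    """Outlier removal -> gap filling -> detection on a time-indexed series."""
    # 1) outlier removal: drop points far from a short rolling median
    med = level.rolling("10min").median()
    clean = level.where((level - med).abs() < 0.5)   # 0.5 m: example threshold
    # 2) gap filling: interpolate short gaps so detection never stalls
    filled = clean.interpolate(limit=40, limit_direction="both")
    # 3) detection: de-tide with a long moving average, threshold the residual
    residual = filled - filled.rolling("3h").mean()
    return residual.abs() > 0.2                      # 0.2 m: example threshold

idx = pd.date_range("2016-01-01", periods=8640, freq="10s")
rng = np.random.default_rng(0)
level = pd.Series(0.6 * np.sin(np.arange(8640) / 450)
                  + 0.01 * rng.standard_normal(8640), idx)
alerts = tads_like(level)
print(int(alerts.sum()), "samples above the detection threshold")
```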


2021 ◽  
Author(s):  
Moritz Bruggisser ◽  
Wouter Dorigo ◽  
Alena Dostálová ◽  
Markus Hollaus ◽  
Claudio Navacchi ◽  
...  

The assessment of forest fire risk has recently gained interest in countries of Central Europe and the alpine region, since the occurrence of forest fires is expected to increase with a changing climate. Information on forest fuel structure, which is related to forest structure, is a key component in such assessments. Forest structure information can be derived from airborne laser scanning (ALS) data, whose value for the derivation of respective metrics at a high accuracy level has been demonstrated in numerous studies over the last years.

Yet, the temporal resolution of ALS data is low, as flight missions are typically carried out at time intervals of five to ten years in Central Europe. ALS-derived forest structure descriptors for fire risk assessments, therefore, are often outdated. Open-access earth observation data offer the potential to fill these information gaps. Data provided by synthetic aperture radar (SAR) sensors, in particular, are of interest in this context, since this technology has a known sensitivity to vegetation structure and acquires data independent of weather or daylight conditions.

In our study, we investigate the potential to derive forest structure descriptors from time series of Sentinel-1 (S-1) SAR data for a deciduous forest site in the eastern part of Austria. We focus on forest stand height and fractional cover, which is a measure of forest density, as both of these components impact forest fire propagation and ignition. The two structure metrics are estimated using a random forest (RF) model, which takes as input a total of 36 predictors computed from the S-1 time series. The model is trained using ALS-derived structure metrics acquired during the same year as the S-1 data.

We estimated stand height with a root mean square error (RMSE) of 4.76 m and a bias of 0.09 m at 100 m resolution, while the RMSE of the fractional cover estimation is 0.08 with a bias of zero at the same resolution. The spatial comparison of the structure predictions with the ALS reference further shows that the general structure is well reproduced. Yet, fine-scale variations cannot be completely reproduced by the S1-derived structure products, and the height of tall stands and the cover of very dense canopy parts are underestimated. Due to the high correlation of the predicted values with the reference (Pearson’s R of 0.88 and 0.94 for stand height and fractional cover, respectively), we consider S-1 time series in combination with ALS data of low temporal resolution and machine learning techniques to be a reliable data source and workflow for regularly (e.g. < yearly) updating ALS structure information in an operational way.
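A small sketch of how per-cell predictors might be aggregated from an S-1 time series; the study's 36 predictors are more elaborate, so the column names and statistics here are assumptions:

```python
import pandas as pd

# Hypothetical one-cell time series of terrain-flattened backscatter (dB)
ts = pd.read_csv("s1_cell_timeseries.csv", parse_dates=["date"])
ts["month"] = ts["date"].dt.month

predictors = {}
for pol in ("vv", "vh"):
    monthly = ts.groupby("month")[pol].mean()        # seasonal signature
    predictors.update({f"s1_{pol}_m{m:02d}": v for m, v in monthly.items()})
    predictors[f"s1_{pol}_std"] = ts[pol].std()      # temporal variability
predictors["s1_ratio_mean"] = (ts["vv"] - ts["vh"]).mean()  # dB-domain VV/VH ratio

print(len(predictors), "predictors for this 100 m cell")
```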


Author(s):  
Samuel Humphries ◽  
Trevor Parker ◽  
Bryan Jonas ◽  
Bryan Adams ◽  
Nicholas J Clark

Quick identification of buildings and roads is critical for the execution of tactical US military operations in an urban environment. To this end, a gridded, referenced satellite image of an objective, often referred to as a gridded reference graphic or GRG, has become a standard product developed during intelligence preparation of the environment. At present, operational units identify key infrastructure by hand through the work of individual intelligence officers. Recent advances in convolutional neural networks, however, allow this process to be streamlined through the use of object detection algorithms. In this paper, we describe an object detection algorithm designed to quickly identify and label both buildings and road intersections present in an image. Our work leverages both the U-Net architecture and the SpaceNet data corpus to produce an algorithm that accurately identifies a large breadth of buildings and different types of roads. In addition to predicting buildings and roads, our model numerically labels each building by means of a contour-finding algorithm. Most importantly, the dual U-Net model is capable of predicting buildings and roads on a diverse set of test images and using these predictions to produce clean GRGs.
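A brief OpenCV sketch of the contour-based numbering step described above, applied to a hypothetical binary building mask produced by the U-Net:

```python
import cv2

# Hypothetical binary building mask exported from the U-Net
mask = cv2.imread("building_mask.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# One external contour per building footprint (OpenCV 4.x return signature)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
annotated = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
label = 1
for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
    if cv2.contourArea(cnt) < 20:          # skip speckle
        continue
    m = cv2.moments(cnt)                   # centroid of the footprint
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    cv2.putText(annotated, str(label), (cx, cy),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    label += 1

cv2.imwrite("grg_labeled.png", annotated)
```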


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shaheen Syed ◽  
Bente Morseth ◽  
Laila A. Hopstock ◽  
Alexander Horsch

Abstract. To date, non-wear detection algorithms commonly employ a 30, 60, or even 90 min interval or window in which acceleration values need to be below a threshold value. A major drawback of such intervals is that they need to be long enough to prevent false positives (type I errors), while short enough to prevent false negatives (type II errors), which limits the detection of both short and longer episodes of non-wear time. In this paper, we propose a novel non-wear detection algorithm that eliminates the need for an interval. Rather than inspecting acceleration within intervals, we explore acceleration right before and right after an episode of non-wear time. We trained a deep convolutional neural network that was able to infer non-wear time by detecting when the accelerometer was removed and when it was placed back on again. We evaluate our algorithm against several baseline and existing non-wear algorithms; our algorithm achieves perfect precision, a recall of 0.9962, and an F1 score of 0.9981, outperforming all evaluated algorithms. Although our algorithm was developed using patterns learned from a hip-worn accelerometer, we propose algorithmic steps that can easily be applied to a wrist-worn accelerometer, together with a retrained classification model.
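A minimal PyTorch sketch of the interval-free idea: a small 1-D CNN judges the raw acceleration just before and just after a candidate episode; the architecture, window sizes, and sampling rate are illustrative assumptions, not the paper's trained model:

```python
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """Classify a candidate episode from the windows at its two edges."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(6, 32, 9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, 9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 2),                 # true non-wear vs. still worn
        )

    def forward(self, before, after):
        # stack the two 3-axis edge windows into a 6-channel input
        return self.net(torch.cat([before, after], dim=1))

fs, edge_s = 100, 3                           # 100 Hz, 3 s edge windows (assumed)
before = torch.randn(4, 3, fs * edge_s)       # acceleration just before the episode
after = torch.randn(4, 3, fs * edge_s)        # acceleration just after it
print(EdgeClassifier()(before, after).shape)  # torch.Size([4, 2])
```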

