Monitoring Forest Dynamics in the Andean Amazon: The Applicability of Breakpoint Detection Methods Using Landsat Time-Series and Genetic Algorithms

2017 ◽  
Vol 9 (1) ◽  
pp. 68 ◽  
Author(s):  
Fabián Santos ◽  
Olena Dubovyk ◽  
Gunter Menz
2016 ◽  
Vol 8 (1) ◽  
pp. 78-98 ◽  
Author(s):  
Dániel Topál ◽  
István Matyasovszky ◽  
Zoltán Kern ◽  
István Gábor Hatvani

Abstract Time series often contain breakpoints of different origin, caused by (i) shifts in the mean, (ii) changes in trend and/or (iii) changes in variance. In the present study, artificially generated time series with white and red noise structures are analyzed using three recently developed breakpoint detection methods. The time series are modified so that the exact “locations” of the artificial breakpoints are prescribed, making it possible to evaluate the methods exactly. Hence, the study provides deeper insight into the behaviour of the three breakpoint detection methods. This experience can help solve breakpoint detection problems in real-life data sets, as demonstrated with two examples taken from the fields of paleoclimate research and petrology.
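The experimental setup the abstract describes — an artificial noisy series with a prescribed breakpoint, used to score a detector against the known location — can be sketched in a few lines. This is a minimal least-squares single-breakpoint search on a white-noise series, not one of the three methods evaluated in the paper; all names and thresholds here are illustrative.

```python
import random
import statistics

def make_series(n=200, bp=120, shift=1.5, seed=42):
    """Artificial white-noise series with a prescribed mean shift at index `bp`."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) + (shift if i >= bp else 0.0) for i in range(n)]

def detect_breakpoint(x, min_seg=10):
    """Return the split index minimising the pooled within-segment sum of squares
    (a least-squares fit with a single mean-shift breakpoint)."""
    best_i, best_cost = None, float("inf")
    for i in range(min_seg, len(x) - min_seg):
        left, right = x[:i], x[i:]
        cost = (sum((v - statistics.fmean(left)) ** 2 for v in left)
                + sum((v - statistics.fmean(right)) ** 2 for v in right))
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

series = make_series()
print(detect_breakpoint(series))  # typically close to the prescribed index 120
```

Because the true breakpoint is prescribed, the detector's error is known exactly — the evaluation principle the study relies on.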


2021 ◽  
Vol 13 (15) ◽  
pp. 2869
Author(s):  
MohammadAli Hemati ◽  
Mahdi Hasanlou ◽  
Masoud Mahdianpari ◽  
Fariba Mohammadimanesh

With uninterrupted space-based data collection since 1972, Landsat plays a key role in the systematic monitoring of the Earth’s surface, enabled by an extensive, free, radiometrically consistent global archive of imagery. Governments and international organizations rely on Landsat time series for monitoring and deriving a systematic understanding of the dynamics of the Earth’s surface at a spatial scale relevant to management, scientific inquiry, and policy development. In this study, we identify trends in Landsat-informed change detection studies by surveying 50 years of published applications, processing, and change detection methods. Specifically, a representative database was created, resulting in 490 relevant journal articles derived from the Web of Science and Scopus. From these articles, we provide a review of recent developments, opportunities, and trends in Landsat change detection studies. The impact of the 2008 Landsat free and open data policy is evident in the literature as a turning point in the number and nature of change detection studies. Based upon the search terms used and articles included, the average number of Landsat images used per study increased from 10 before 2008 to 100,000 in 2020. The 2008 opening of the Landsat archive resulted in a marked increase in the number of images used per study, typically providing the basis for the other trends in evidence. These key trends include an increase in automated processing, use of analysis-ready data (especially with atmospheric correction), and use of cloud computing platforms, all over increasingly large areas. Change detection methods have evolved from representative bi-temporal pairs to time series of images capturing dynamics and trends, capable of revealing both gradual and abrupt changes. The results also revealed a greater use of nonparametric classifiers for Landsat change detection analysis.
Landsat-9, to be launched in September 2021, in combination with the continued operation of Landsat-8 and integration with Sentinel-2, enhances opportunities for improved monitoring of change over increasingly larger areas with greater intra- and interannual frequency.


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Hitoshi Iuchi ◽  
Michiaki Hamada

Abstract Time-course experiments using parallel sequencers have the potential to uncover gradual changes in cells over time that cannot be observed in a two-point comparison. An essential step in time-series data analysis is the identification of temporal differentially expressed genes (TEGs) under two conditions (e.g. control versus case). Model-based approaches, which are typical TEG detection methods, often set one parameter (e.g. degree or degree of freedom) for one dataset. This approach risks modeling linearly increasing genes with higher-order functions, or fitting cyclic gene expression with linear functions, thereby leading to false positives/negatives. Here, we present a Jonckheere–Terpstra–Kendall (JTK)-based non-parametric algorithm for TEG detection. Benchmarks using simulation data show that the JTK-based approach outperforms existing methods, especially in long time-series experiments. Additionally, application of JTK to time-series RNA-seq data from seven tissue types, across developmental stages in mouse and rat, suggested that the wave pattern, rather than the difference in expression levels, contributes to the TEG identification of JTK. This suggests that JTK is a suitable algorithm when focusing on expression patterns over time rather than expression levels, such as in comparisons between different species. These results show that JTK is an excellent candidate for TEG detection.
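The rank-based core of a Jonckheere–Terpstra/Kendall-style statistic can be sketched as follows. This shows only the pairwise-concordance idea (Kendall's S for a monotonic trend over time points), not the published JTK algorithm for two-condition TEG detection; the expression vectors are invented for illustration.

```python
from itertools import combinations

def jt_trend_stat(values):
    """Count concordant minus discordant ordered pairs over time points:
    large positive -> rising pattern, large negative -> falling, near 0 -> flat."""
    s = 0
    for a, b in combinations(values, 2):  # pairs (x_i, x_j) with i < j
        s += (b > a) - (b < a)
    return s

flat = [5.0, 5.1, 4.9, 5.0, 5.1, 4.9]      # stable expression across 6 time points
rising = [1.0, 1.4, 2.1, 2.9, 3.5, 4.2]    # monotonically increasing expression
print(jt_trend_stat(flat), jt_trend_stat(rising))  # near 0 vs the maximum, 15
```

Because the statistic depends only on the ordering of values, it is insensitive to absolute expression levels — consistent with the abstract's observation that wave pattern, not level, drives the identification.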


Elem Sci Anth ◽  
2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Kai-Lan Chang ◽  
Martin G. Schultz ◽  
Xin Lan ◽  
Audra McClure-Begley ◽  
Irina Petropavlovskikh ◽  
...  

This paper is aimed at atmospheric scientists without formal training in statistical theory. Its goal is to (1) provide a critical review of the rationale for trend analysis of the time series typically encountered in the field of atmospheric chemistry, (2) describe a range of trend-detection methods, and (3) demonstrate effective means of conveying the results to a general audience. Trend detection in atmospheric chemical composition data is often challenged by a variety of sources of uncertainty, which often behave differently from other environmental phenomena such as temperature, precipitation rate, or stream flow, and may require specific methods depending on the science questions to be addressed. Some sources of uncertainty, such as autocorrelation and seasonality, can be explicitly included in the model specification, but some inherent uncertainties are difficult to quantify, such as data heterogeneity and measurement uncertainty due to the combined effect of short- and long-term natural variability, instrumental stability, and aggregation of data from sparse sampling frequency. Failure to account for these uncertainties might result in inappropriate inference of the trends and their estimation errors. On the other hand, variation in extreme events might be interesting for different scientific questions, for example, the frequency of extremely high surface ozone events and their relevance to human health.
In this study we aim to (1) review trend detection methods for addressing different levels of data complexity in different chemical species, (2) demonstrate that the incorporation of scientifically interpretable covariates can outperform pure numerical curve fitting techniques in terms of uncertainty reduction and improved predictability, (3) illustrate the study of trends based on extreme quantiles that can provide insight beyond standard mean or median based trend estimates, and (4) present an advanced method of quantifying regional trends based on the inter-site correlations of multisite data. All demonstrations are based on time series of observed trace gases relevant to atmospheric chemistry, but the methods can be applied to other environmental data sets.
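As a toy illustration of the kind of robust, nonparametric trend estimation such reviews cover, a Theil–Sen slope (the median of all pairwise slopes, resistant to outliers and non-Gaussian noise) fits in a few lines of plain Python. The synthetic "ozone" series below is invented for illustration and is not taken from the paper.

```python
from itertools import combinations
import statistics

def theil_sen_slope(t, y):
    """Median of pairwise slopes: a robust, nonparametric trend estimate."""
    slopes = [(y2 - y1) / (t2 - t1)
              for (t1, y1), (t2, y2) in combinations(zip(t, y), 2)
              if t2 != t1]
    return statistics.median(slopes)

years = list(range(2000, 2020))
# synthetic series: 0.25 units/year trend plus alternating year-to-year variability
ozone = [30 + 0.25 * (yr - 2000) + ((-1) ** yr) * 0.8 for yr in years]
print(round(theil_sen_slope(years, ozone), 2))  # recovers the 0.25 trend
```

Unlike an ordinary least-squares fit, the median of pairwise slopes is unaffected by a handful of extreme values — one reason nonparametric estimators are attractive for heterogeneous monitoring records.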


2010 ◽  
Vol 4 ◽  
pp. BBI.S5983 ◽  
Author(s):  
Daisuke Tominaga

Time series of gene expression often exhibit periodic behavior under the influence of multiple signal pathways, and are represented by a model that incorporates multiple harmonics and noise. Most of these data, which are observed using DNA microarrays, consist of few sampling points in time, but most periodicity detection methods require a relatively large number of sampling points. We have previously developed a detection algorithm based on the discrete Fourier transform and Akaike's information criterion. Here we demonstrate the performance of the algorithm for small-sample time series data through a comparison with conventional and newly proposed periodicity detection methods based on a statistical analysis of the power of harmonics. We show that this method has higher sensitivity for data consisting of multiple harmonics, and is more robust against noise than other methods. Although “combinatorial explosion” occurs for large datasets, the computational time is not a problem for small-sample datasets. The MATLAB/GNU Octave script of the algorithm is available on the author's web site: http://www.cbrc.jp/%7Etominaga/piccolo/ .
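The discrete-Fourier-transform core of such a periodicity test can be sketched as follows for a short, evenly sampled series. This omits the AIC-based selection of the number of harmonics and the noise testing that the actual algorithm performs; it only shows how the dominant period falls out of a naive DFT power spectrum.

```python
import cmath
import math

def power_spectrum(x):
    """Naive O(n^2) DFT power spectrum (non-DC bins) for a short series."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(1, n // 2 + 1)]

def dominant_period(x):
    """Period (in sampling intervals) of the strongest harmonic."""
    n = len(x)
    spec = power_spectrum(x)
    k = 1 + spec.index(max(spec))  # frequency index of the strongest bin
    return n / k

x = [math.sin(2 * math.pi * t / 6) for t in range(12)]  # period-6 signal, 12 samples
print(dominant_period(x))  # → 6.0
```

For the few-sample microarray series the abstract targets, the O(n²) cost of the naive DFT is negligible — consistent with the remark that computational time is not a problem for small-sample datasets.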


2021 ◽  
Author(s):  
Stéphane Mermoz ◽  
Alexandre Bouvet ◽  
Marie Ballère ◽  
Thierry Koleck ◽  
Thuy Le Toan

<p>Over the last 25 years, the world’s forests have undergone substantial changes. Deforestation and forest degradation in particular contribute greatly to biodiversity loss through habitat destruction, soil erosion, terrestrial water cycle disturbances and anthropogenic CO2 emissions. In certain regions and countries, the changes have been more rapid, as is the case in the Greater Mekong sub-region, recognized as a deforestation hotspot (FAO, 2020). In this region, illegal and unsustainable logging and conversion of forests for agriculture, construction of dams and infrastructure are the direct causes of deforestation. Effective tools are therefore urgently needed to survey illegal logging operations, which cause widespread concern in the region.</p><p>Monitoring systems based on optical data, such as the UMD/GLAD Deforestation alerts implemented on the Global Forest Watch platform, are limited by the frequent cloud cover, which delays detections. However, it has been demonstrated in recent years that forest losses can be monitored in a timely manner using dense time series of synthetic aperture radar (SAR) data acquired by the Sentinel-1 satellites, developed in the framework of the European Union’s Copernicus Earth observation programme. Ballère et al. (2021) showed, for example, that 80% of the forest losses due to gold mining in French Guiana were detected first by Sentinel-1-based forest loss detection methods compared with optical-based methods, sometimes by several months. Methods based on Sentinel-1 have been successfully applied at the local scale (Bouvet et al., 2018; Reiche et al., 2018) and can be adapted and tested at the national scale (Ballère et al., 2020).</p><p>We show here the main results of the SOFT project funded by ESA in the frame of the EO Science for Society open calls. The overall SOFT project goal is to provide validated forest loss maps every month over Vietnam, Cambodia and Laos with a minimum mapping unit of 0.04 ha, using Sentinel-1 data. 
The results confirm the analysis of the deforestation fronts recently published by the WWF (Pacheco et al., 2021), showing that Eastern Cambodia and Southern and Northern Laos are currently forest disturbance hotspots.</p><p> </p><p>References:</p><p>Ballère et al., (2021). SAR data for tropical forest disturbance alerts in French Guiana: Benefit over optical imagery. <em>Remote Sensing of Environment</em>, <em>252</em>, 112159.</p><p>Bouvet et al., (2018). Use of the SAR shadowing effect for deforestation detection with Sentinel-1 time series. <em>Remote Sensing</em>, <em>10</em>(8), 1250.</p><p>FAO. Global Forest Resources Assessment; Technical Report; Food and Agriculture Organization of the United Nations: Rome, Italy, 2020.</p><p>Pacheco et al., (2021). Deforestation fronts: Drivers and responses in a changing world. WWF, Gland, Switzerland.</p><div>Reiche et al., (2018). Improving near-real time deforestation monitoring in tropical dry forests by combining dense Sentinel-1 time series with Landsat and ALOS-2 PALSAR-2. <em>Remote Sensing of Environment</em>, <em>204</em>, 147-161.</div>
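The kind of alerting that Sentinel-1 forest-loss methods build on — a sustained drop in radar backscatter relative to a running forest baseline — can be sketched minimally as below. The thresholds, window sizes and dB values are illustrative placeholders, not the SOFT project's calibrated parameters.

```python
def forest_loss_alert(backscatter_db, baseline_n=6, drop_db=3.0, confirmations=2):
    """Return the index of the first acquisition that starts a run of
    `confirmations` consecutive acquisitions at least `drop_db` dB below the
    running mean of the previous `baseline_n` acquisitions, else None."""
    flags = []
    for i in range(baseline_n, len(backscatter_db)):
        baseline = sum(backscatter_db[i - baseline_n:i]) / baseline_n
        flags.append(backscatter_db[i] <= baseline - drop_db)
    for j in range(len(flags) - confirmations + 1):
        if all(flags[j:j + confirmations]):
            return baseline_n + j
    return None

# stable forest backscatter around -7 dB, then a persistent drop after clearing
series = [-7.1, -6.8, -7.0, -7.2, -6.9, -7.0, -7.1, -12.5, -12.8, -12.6]
print(forest_loss_alert(series))  # → 7, the first confirmed low acquisition
```

Requiring consecutive confirmations trades a small delay for fewer false alerts from speckle or rain events — the same latency/reliability trade-off that drives the monthly map cadence described above.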


2019 ◽  
pp. 147592171988711
Author(s):  
Wen-Jun Cao ◽  
Shanli Zhang ◽  
Numa J Bertola ◽  
I F C Smith ◽  
C G Koh

Train wheel flats are formed when wheels slip on rails. Crucial for passenger comfort and the safe operation of train systems, early detection and quantification of wheel-flat severity without interrupting railway operations is a desirable and challenging goal. Our method identifies the wheel-flat size using a model-updating strategy based on dynamic measurements. Although measurement and modelling uncertainties influence the identification results, they are rarely taken into account in wheel-flat detection methods. Another challenge is the interpretation of time series data from multiple sensors. In this article, the size of the wheel flat is identified using a model-falsification approach that explicitly includes both measurement and modelling uncertainties. A two-step important-point selection method is proposed to interpret high-dimensional time series in the context of inverse identification. Perceptually important points, which are consistent with the human visual identification process, are extracted and further selected using joint entropy as an information-gain metric. The proposed model-based methodology is applied to a field test on a train track in Singapore. The results show that the wheel-flat size identified using the proposed methodology is within the range of true observations. It is also shown that including measurement and modelling uncertainties is essential to accurately evaluate the wheel-flat size, because identification without uncertainties may underestimate it.
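The first step — extracting perceptually important points (PIPs) — can be illustrated with a common PIP formulation: starting from the endpoints, repeatedly add the sample farthest (vertically) from the chord between its neighbouring PIPs. The paper's second step, ranking candidates by joint entropy, is omitted here, and the signal is invented for illustration.

```python
def pip_extract(series, n_points=5):
    """Greedy PIP selection: grow the PIP set from the two endpoints by adding
    the point with the largest vertical distance to the chord between the
    PIPs that bracket it."""
    pips = [0, len(series) - 1]
    while len(pips) < n_points:
        best_idx, best_dist = None, -1.0
        for a, b in zip(pips, pips[1:]):
            for i in range(a + 1, b):
                # linear interpolation of the chord from (a, y_a) to (b, y_b)
                interp = series[a] + (series[b] - series[a]) * (i - a) / (b - a)
                d = abs(series[i] - interp)
                if d > best_dist:
                    best_idx, best_dist = i, d
        if best_idx is None:
            break  # no interior points left to add
        pips.append(best_idx)
        pips.sort()
    return pips

signal = [0, 1, 4, 2, 0, -3, -1, 0, 2, 1, 0]  # one peak, one trough
print(pip_extract(signal, n_points=4))  # → [0, 2, 5, 10]
```

The selected indices land on the endpoints, the peak and the trough — the visually salient features — which is why PIPs are an effective dimensionality reduction for matching measured and simulated impact responses.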

