Rikitake Law, relating precursor time and earthquake magnitude, confirmed by Swarm satellite data

Author(s):  
Saioa A. Campuzano ◽  
Gianfranco Cianchini ◽  
Angelo De Santis ◽  
Dedalo Marchetti ◽  
Loredana Perrone ◽  
...  

Rikitake [1987] studied different types of ground-based earthquake precursors and presented an empirical law (for what he called "precursors of the 1st kind") expressing a linear relationship between the logarithm of the anomaly precursor time and the earthquake magnitude. To look for possible in-situ ionospheric precursors of large (M5.5+) earthquakes, here we analyse long-term time series data from the three-satellite Swarm constellation, in particular electron density and magnetic field data. We define the anomalies statistically over the whole space-time interval of interest and use a superposed epoch approach to study their possible relation with the earthquakes. We find some clear concentrations of electron density and magnetic anomalies from several months to a few days before the earthquake occurrences. Such anomaly clustering is, in general, statistically significant with respect to homogeneous random simulations, supporting a coupling of the lithosphere with the overlying atmosphere and ionosphere during the preparation phase of earthquakes. Finally, by investigating different earthquake magnitude ranges, we confirm the Rikitake empirical law between ionospheric anomaly precursor time and earthquake magnitude. Our work represents the first time this empirical law has been confirmed with satellite data. We also explain the empirical law with a diffusion model of lithospheric stress.
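The fitted relation has the form log10(T) = a + b·M, with T the precursor time and M the magnitude. A minimal Python sketch of such a fit follows; it is not the authors' code, and the magnitude and precursor-time values are invented purely to illustrate the least-squares step.

```python
# Minimal sketch of a Rikitake-law fit: log10(T) = a + b * M.
# The data below are hypothetical, not the paper's measurements.
import numpy as np

magnitudes = np.array([5.5, 6.0, 6.5, 7.0, 7.5])            # hypothetical M
precursor_days = np.array([4.0, 10.0, 22.0, 55.0, 130.0])   # hypothetical T (days)

# Least-squares fit of log10(T) against M yields the empirical coefficients.
b, a = np.polyfit(magnitudes, np.log10(precursor_days), 1)
print(f"log10(T) = {a:.2f} + {b:.2f} * M")
```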

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
A. De Santis ◽  
D. Marchetti ◽  
F. J. Pavón-Carrasco ◽  
G. Cianchini ◽  
L. Perrone ◽  
...  

Abstract The study of the preparation phase of large earthquakes is essential to understand the physical processes involved, and potentially useful also to develop a future reliable short-term warning system. Here we analyse electron density and magnetic field data measured by the Swarm three-satellite constellation over 4.7 years to look for possible in-situ ionospheric precursors of large earthquakes and to study the interactions between the lithosphere and the overlying atmosphere and ionosphere, in what is called Lithosphere-Atmosphere-Ionosphere Coupling (LAIC). We define these anomalies statistically over the whole space-time interval of interest and use a Worldwide Statistical Correlation (WSC) analysis through a superposed epoch approach to study their possible relation with the earthquakes. We find some clear concentrations of electron density and magnetic anomalies from more than two months to a few days before the earthquake occurrences. Such anomaly clustering is, in general, statistically significant with respect to homogeneous random simulations, supporting LAIC during the preparation phase of earthquakes. By investigating different earthquake magnitude ranges, not only do we confirm the well-known Rikitake empirical law between ionospheric anomaly precursor time and earthquake magnitude, but we also lend more reliability to a seismic-source origin for many of the identified anomalies.
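The superposed epoch step can be pictured as stacking anomaly times relative to each earthquake's origin time and testing the lead-time histogram against random simulations. The sketch below illustrates that idea with synthetic times; it is not the authors' WSC pipeline.

```python
# Minimal superposed-epoch sketch with synthetic times (not the WSC code).
import numpy as np

rng = np.random.default_rng(0)
quake_times = rng.uniform(0, 1700, 50)        # hypothetical origin times (days)
anomaly_times = rng.uniform(0, 1700, 2000)    # hypothetical anomaly times (days)

# Stack lead times: anomalies occurring up to 90 days before a quake.
lags = np.concatenate([q - anomaly_times for q in quake_times])
lags = lags[(lags > 0) & (lags <= 90)]
observed, edges = np.histogram(lags, bins=18)

# Homogeneous random simulations provide the null distribution per bin.
sims = np.empty((500, len(observed)))
for i in range(500):
    fake = rng.uniform(0, 1700, anomaly_times.size)
    fake_lags = np.concatenate([q - fake for q in quake_times])
    fake_lags = fake_lags[(fake_lags > 0) & (fake_lags <= 90)]
    sims[i], _ = np.histogram(fake_lags, bins=edges)

# Lead-time bins where observed clustering beats the 95th percentile of the null.
significant = observed > np.percentile(sims, 95, axis=0)
print(edges[:-1][significant])
```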


Water ◽  
2021 ◽  
Vol 13 (14) ◽  
pp. 1944
Author(s):  
Haitham H. Mahmoud ◽  
Wenyan Wu ◽  
Yonghao Wang

This work develops a MATLAB toolbox called WDSchain that can simulate blockchain on water distribution systems (WDS). WDSchain can import data from Excel and from the EPANET water modelling software, and it extends EPANET to enable blockchain simulation of the hydraulic data at any intended nodes. Using WDSchain strengthens network automation and security in WDS. WDSchain can process time series data in two simulation modes: (1) static blockchain, which takes a snapshot of one time interval of data from all nodes in the WDS as input and outputs it as chained blocks at a time, and (2) dynamic blockchain, which takes all simulated time series data of all the nodes as input and establishes chained blocks at the simulated time. Five consensus mechanisms are developed in WDSchain to provide data at different security levels: PoW, PoT, PoV, PoA, and PoAuth. Five different sizes of WDS are simulated in WDSchain for performance evaluation. The results show that a trade-off is needed between system complexity and security level for data validation. WDSchain provides a methodology to further explore data validation in WDS using blockchain. As limitations, WDSchain does not consider the selection of blockchain nodes or broadcasting delay, in contrast to commercial blockchain platforms.
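The "static blockchain" mode can be illustrated in a few lines of Python: a snapshot of node readings for one time interval is hashed into a block that stores the previous block's hash, so tampering with any reading breaks the chain. This is a minimal sketch of the general idea, not WDSchain itself; node IDs and values are invented.

```python
# Minimal hash-chain sketch (not the WDSchain toolbox; data are invented).
import hashlib
import json
import time

def make_block(readings, prev_hash):
    """Chain one snapshot of hydraulic readings to the previous block."""
    block = {
        "timestamp": time.time(),
        "readings": readings,      # e.g. {node_id: pressure}
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block({"J1": 52.1, "J2": 48.7}, prev_hash="0" * 64)
block_2 = make_block({"J1": 51.9, "J2": 49.0}, prev_hash=genesis["hash"])
print(block_2["prev_hash"] == genesis["hash"])   # chain intact
```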


2020 ◽  
Vol 2020 (1) ◽  
pp. 98-117
Author(s):  
Jyoti U. Devkota

Abstract Night fires illuminating the Earth's surface are captured by satellite. They are emitted by various sources such as gas flares, biomass burning, volcanoes, and industrial sites such as steel mills. The amount of night fires in an area is a proxy indicator of fuel consumption and CO2 emissions. In this paper, the behavior of the radiant heat (RH) data produced by night fires is minutely analyzed over a period of 75 hours; the geographical coordinates of the energy sources generating these values are not considered. Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS DNB) satellite Earth-observation night-fire data were used. This time series of 28,252 RH observations (unit W) covers the 75 hours from 2 September 2018 to 6 September 2018. The dynamics of change in the overall behavior of these data with respect to time, irrespective of geographical occurrence, are studied and presented here. Different statistical methodologies are also used to identify hidden groups and patterns that are not obvious from remote sensing. Underlying groups and clusters are formed using cluster analysis and discriminant analysis. The behavior of RH on three consecutive days is studied with analysis of variance. Cubic spline interpolation and merging were performed to create a time series sampled at equal one-minute intervals. The time series is decomposed to study the effect of its various components, and its behavior is also analyzed in the frequency domain through the study of period, amplitude, and spectrum.
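The resampling step described above, interpolating irregularly timed observations onto an equal one-minute grid, can be sketched as follows. The observation times and RH values are synthetic; this illustrates cubic spline resampling, not the paper's VIIRS processing chain.

```python
# Minimal cubic-spline resampling sketch with synthetic RH data.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0, 75 * 60, 500))                    # irregular minutes
rh_obs = 100 + 20 * np.sin(t_obs / 120) + rng.normal(0, 2, 500)  # synthetic RH (W)

spline = CubicSpline(t_obs, rh_obs)
t_grid = np.arange(0, 75 * 60, 1.0)   # equal one-minute grid over 75 hours
rh_grid = spline(t_grid)              # resampled RH for decomposition/spectra
```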


2017 ◽  
Author(s):  
Solveig H. Winsvold ◽  
Andreas Kääb ◽  
Christopher Nuth ◽  
Liss M. Andreassen ◽  
Ward van Pelt ◽  
...  

Abstract. With dense SAR satellite data time series it is possible to map surface and subsurface glacier properties that vary in time. Using Sentinel-1A and Radarsat-2 backscatter images over mainland Norway and Svalbard, we have applied descriptive methods to outline the possibilities of using SAR time series for mapping glaciers. We present five application scenarios. The first shows potential for tracking transient snow lines with SAR backscatter time series, which correlate with both optical satellite images (Sentinel-2A and Landsat 8) and equilibrium-line altitudes derived from in situ surface mass balance data. In the second application scenario, time series representation of glacier facies corresponding to SAR glacier zones shows potential for a more accurate delineation of the zones and of how they change in time. The third application scenario investigates firn evolution using dense SAR backscatter time series together with a coupled energy balance and multi-layer firn model. We find strong correlation between the backscatter signal and both the modeled firn air content and the modeled wetness of the firn. In the fourth application scenario, we highlight how winter rain events can be detected in SAR time series, revealing important information about the areal extent of internal accumulation. Finally, in the last application scenario, averaged summer SAR images were found to have potential for assisting the mapping of glacier outlines, especially in the presence of seasonal snow. Altogether, we present examples of how to map glaciers and further understand glaciological processes using the existing and future massive amounts of multi-sensor time series data. Our results reveal the potential of satellite imagery to provide automatically derived products as important input to modeling assessments and glacier change analysis.
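The correlation check at the heart of the third scenario reduces to comparing two time series. A minimal sketch with invented series (not the study's data) is:

```python
# Minimal correlation sketch with synthetic series (not the study's data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
firn_air_content = np.cumsum(rng.normal(0, 1, 120))               # hypothetical model output
backscatter_db = 0.8 * firn_air_content + rng.normal(0, 1, 120)   # hypothetical sigma0 (dB)

r, p = pearsonr(backscatter_db, firn_air_content)
print(f"r = {r:.2f}, p = {p:.3g}")
```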


2019 ◽  
Author(s):  
Girish L

Network and cloud data centers generate a lot of data every second; this data can be collected as time series data. A time series is a sequence of values taken at successive, equally spaced points in time. Such time series data can be collected from system metrics like CPU, memory, and disk utilization. The TICK Stack is an acronym for a platform of open-source tools built to make collection, storage, graphing, and alerting on time series data incredibly easy. As data collectors, the authors use both Telegraf and collectd; for storing and analyzing the data, they use the time series database InfluxDB. For plotting and visualizing, they use Chronograf along with Grafana. Kapacitor is used for alert refinement: once a system metric exceeds the specified threshold, an alert is generated and sent to the system admin.
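In the stack described above, metric collection is handled by Telegraf and collectd, but the write path into InfluxDB is easy to picture in a few lines of Python. The sketch below assumes InfluxDB 1.x, the influxdb-python client, and psutil; host, port, and database name are placeholders.

```python
# Minimal sketch of writing a CPU metric into InfluxDB 1.x (assumes the
# influxdb-python client and psutil; in the TICK stack Telegraf does this).
import psutil
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="metrics")

point = {
    "measurement": "cpu",
    "fields": {"usage_percent": psutil.cpu_percent(interval=1)},
}
client.write_points([point])   # InfluxDB timestamps the point on arrival
```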


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2146
Author(s):  
Mikhail Zymbler ◽  
Elena Ivanova

Currently, big sensor data arise in a wide spectrum of Industry 4.0, Internet of Things, and Smart City applications. In such subject domains, sensors tend to have a high frequency and produce massive time series in a relatively short time interval. The data collected from the sensors are subject to mining in order to make strategic decisions. In the article, we consider the problem of choosing a Time Series Database Management System (TSDBMS) to provide efficient storing and mining of big sensor data. We overview InfluxDB, OpenTSDB, and TimescaleDB, which are among the most popular state-of-the-art TSDBMSs, and represent different categories of such systems, namely native, add-ons over NoSQL systems, and add-ons over relational DBMSs (RDBMSs), respectively. Our overview shows that, at present, TSDBMSs offer a modest built-in toolset to mine big sensor data. This leads to the use of third-party mining systems and unwanted overhead costs due to exporting data outside a TSDBMS, data conversion, and so on. We propose an approach to managing and mining sensor data inside RDBMSs that exploits the Matrix Profile concept. A Matrix Profile is a data structure that annotates a time series through the index of and the distance to the nearest neighbor of each subsequence of the time series and serves as a basis to discover motifs, anomalies, and other time-series data mining primitives. This approach is implemented as a PostgreSQL extension that allows an application programmer both to compute matrix profiles and mining primitives and to represent them as relational tables. Experimental case studies show that our approach surpasses the above-mentioned out-of-TSDBMS competitors in terms of performance since it assumes that sensor data are mined inside a TSDBMS at no significant overhead costs.
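The Matrix Profile definition above translates directly into a brute-force computation: for each subsequence, find the distance to, and the index of, its nearest non-trivial neighbor. The sketch below is a simplified pure-NumPy illustration of the concept (plain Euclidean distance rather than the z-normalized distance production implementations use), not the authors' PostgreSQL extension.

```python
# Brute-force Matrix Profile sketch (concept only; real implementations
# z-normalize subsequences and use far faster algorithms such as STOMP).
import numpy as np

def matrix_profile(ts, m):
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    profile = np.empty(n)
    index = np.empty(n, dtype=int)
    for i in range(n):
        dists = np.linalg.norm(subs - subs[i], axis=1)
        # Exclusion zone: ignore trivial matches overlapping position i.
        lo, hi = max(0, i - m // 2), min(n, i + m // 2 + 1)
        dists[lo:hi] = np.inf
        index[i] = np.argmin(dists)
        profile[i] = dists[index[i]]
    return profile, index

ts = np.sin(np.linspace(0, 20, 400)) + np.random.default_rng(0).normal(0, 0.1, 400)
mp, mpi = matrix_profile(ts, m=32)
print(mp.argmax())   # highest profile value = most anomalous subsequence (discord)
```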


Author(s):  
Roy Assaf ◽  
Anika Schumann

We demonstrate that CNN deep neural networks can not only be used for making predictions based on multivariate time series data, but also for explaining these predictions. This is important for a number of applications where predictions are the basis for decisions and actions, so confidence in the prediction result is crucial. We design a two-stage convolutional neural network architecture which uses particular kernel sizes. This allows us to utilise gradient-based techniques for generating saliency maps for both the time dimension and the features. These are then used to explain which features during which time interval are responsible for a given prediction, as well as during which time intervals the joint contribution of all features was most important for that prediction. We demonstrate our approach for predicting the average energy production of photovoltaic power plants and for explaining these predictions.
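Gradient-based saliency for a time series CNN amounts to differentiating the prediction with respect to the input. The PyTorch sketch below uses a generic stand-in network, not the authors' two-stage architecture; shapes and layer choices are assumptions.

```python
# Minimal gradient-saliency sketch (generic CNN, not the paper's model).
import torch
import torch.nn as nn

model = nn.Sequential(                 # stand-in CNN over (batch, features, time)
    nn.Conv1d(4, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 1),
)

x = torch.randn(1, 4, 96, requires_grad=True)  # 4 features, 96 time steps
model(x).backward()                            # d(prediction)/d(input)
saliency = x.grad.abs().squeeze(0)             # (features, time) importance map
print(saliency.sum(dim=0).argmax())            # most influential time step
```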


2019 ◽  
Vol 12 (3) ◽  
pp. 82-89
Author(s):  
O. S. Vidmant

The use of new tools for economic data analysis in the last decade has led to significant improvements in forecasting. This is due both to the relevance of the problem and to the development of technologies that allow more complex models to be implemented without resorting to significant computing power. The constant volatility of world indices forces all financial market players to improve their risk management models and, at the same time, to revise their capital investment policies. More stringent liquidity and transparency standards for the financial sector also encourage participants to experiment with protective mechanisms and to create predictive algorithms that can not only reduce losses from the volatility of financial instruments but also benefit from short-term investment manipulations. The article discusses the possibility of improving the efficiency of calculations when predicting volatility with tree-ensemble models using various methods of data analysis. As the key sources of efficiency gains, the author studied the aggregation of financial time series data using several methods of calculating and predicting variance (standard, EWMA, ARCH, GARCH), and also analyzed the possibility of simplifying the calculations while reducing the correlation between the series. The author demonstrated the calculation methods on an array of historical price data (open, high, low, close) and volume indicators of futures trading on the RTS index, with a five-minute time interval and a year of historical data. The proposed method makes it possible to reduce the computing power and time required for data processing when analyzing short-term positions in the financial markets, and to identify risks at a given confidence level.
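Of the variance estimators listed, EWMA is the simplest to state: var_t = lambda * var_{t-1} + (1 - lambda) * r_{t-1}^2, with lambda = 0.94 in the RiskMetrics convention. A minimal sketch on synthetic returns (not the article's RTS futures data or its tree-ensemble pipeline) follows.

```python
# Minimal EWMA variance sketch (RiskMetrics lambda = 0.94; synthetic returns).
import numpy as np

def ewma_variance(returns, lam=0.94):
    """var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}**2."""
    var = np.empty_like(returns)
    var[0] = returns[0] ** 2
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return var

rng = np.random.default_rng(4)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))  # synthetic price path
returns = np.diff(np.log(prices))
print(ewma_variance(returns)[-1])   # latest variance estimate
```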


2019 ◽  
Vol 290 ◽  
pp. 02002
Author(s):  
Crina Narcisa Deac ◽  
Gicu Calin Deac ◽  
Florina Chiscop ◽  
Cicerone Laurentiu Popa

Anomaly detection is a crucial analysis topic in the field of Industry 4.0 data mining, as is estimating the probability that a specific machine will go down due to the failure of a component in the next time interval. In this article, we used time series data collected from machines, from both classes: time series data leading up to machine failures as well as data from healthy operational periods. We used telemetry data, error logs from still-operational components, and maintenance records comprising historical breakdowns and component replacements to build and compare several different models. The proposed methods were validated by comparing the actual failures in the test data with the predicted component failures over the test data.
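Framed this way, the task is binary classification: feature vectors derived from telemetry and logs, labeled by whether a failure follows within the next time window. The sketch below uses synthetic features and a random forest as a stand-in; it is not the paper's dataset or its exact models.

```python
# Minimal failure-prediction sketch (synthetic data; stand-in classifier).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 6))   # e.g. rolling telemetry stats + error counts
# Label 1 if a (synthetic) failure occurs in the next time interval.
y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.5, 2000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Compare predicted component failures against actual failures in the test data.
print(classification_report(y_te, clf.predict(X_te)))
```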


Author(s):  
Faruk H. Bursal ◽  
Benson H. Tongue

Abstract In this paper, a system identification algorithm based on Interpolated Mapping (IM) that was introduced in a previous paper is generalized to the case of data stemming from arbitrary time series. The motivation for the new algorithm is the need to identify nonlinear dynamics in continuous time from discrete-time data. This approach has great generality and is applicable to problems arising in many areas of science and engineering. In the original formulation, a map defined on a regular grid in the state space of a dynamical system was assumed to be given. For the formulation to become practically viable, however, the requirement of initial conditions being taken from such a regular grid needs to be dropped. In particular, one would like to use time series data, where the time interval between samples is identified with the mapping time step T. This paper is concerned with the resulting complications. Various options for extending the formulation are examined, and a choice is made in favor of a pre-processing algorithm for estimating the FS map based on local fits to the data set. The suggested algorithm also has smoothing properties that are desirable from the standpoint of noise reduction.
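One way to picture the pre-processing step is as scattered-data interpolation: sampled (state, next-state) pairs from the time series are fitted locally to estimate the fixed-step map at regular grid points. The sketch below is one possible reading under stated assumptions, not the paper's IM algorithm; the dynamics and sampling are invented.

```python
# Minimal sketch: estimate a fixed-step map F on a regular grid from
# scattered (x_k, x_{k+1}) samples (invented dynamics; not the IM algorithm).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(6)
x_k = rng.uniform(-1, 1, (500, 2))   # sampled states across the region of interest
x_next = 0.9 * x_k + 0.1 * np.sin(3 * x_k) + rng.normal(0, 0.01, (500, 2))

# Regular grid in state space on which the map is to be estimated.
gx, gy = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Piecewise-linear local fit of the map's first component at the grid points
# (points outside the convex hull of the samples come back as NaN).
F1_on_grid = griddata(x_k, x_next[:, 0], grid, method="linear")
```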

