Revamping the GFZ Energy Magnitude computation procedure to establish a new service

Author(s):
Angelo Strollo
Domenico Di Giacomo
Dino Bindi
Riccardo Zaccarelli

Location and magnitude are the primary information released by any seismological observatory to characterize an earthquake. Nowadays, the size of sufficiently large earthquakes is routinely measured in terms of released seismic moment (moment magnitude, Mw). Whereas events with Mw above about 5.5 populate seismological archives connected to global monitoring networks, the moment magnitude of smaller events requires the analysis of dense regional and local networks, or the establishment of empirical relationships to convert other magnitude scales into Mw (e.g., local magnitude to moment magnitude conversions). Since Mw is built on a physical parameter, it does not saturate. Moreover, because the seismic moment is connected to tectonic features such as the fault area and the average slip, Mw became the reference magnitude for seismic hazard studies. Although Mw perfectly accomplishes the task of characterizing earthquake size, it does not provide the most complete view of earthquake strength, since Mw is insensitive to changes in the rupture dynamics. An assessment of the seismic energy released by an event (energy magnitude, Me) complements Mw with a measure of earthquake size better suited to evaluating its shaking potential.

Aiming to introduce soon a new real-time service providing Me for major earthquakes, we present in this study the results of benchmark tests against the procedure proposed by Di Giacomo et al. (2008) [1], as well as the analysis of a larger data set including all major events in the GEOFON catalogue with a published moment magnitude since 2011. The initial procedure has been translated into Python code within the Stream2segment package [2] and, leveraging the EIDA and IRIS data services, waveforms from more than 2000 stations for ~5000 events since 2011 have been downloaded and processed. The large data set and the real-time application pose new challenges, among them the teleseismic distances involved, the strongly unbalanced network, and the real-time data flow that makes the data set dynamic. We present and discuss the effects of these complications and how we are tackling them towards the implementation of a new service at GFZ computing Me in real time.

[1] Di Giacomo, D., Grosser, H., Parolai, S., Bormann, P., and Wang, R. (2008), Rapid determination of Me for strong to great shallow earthquakes, Geophys. Res. Lett., 35, L10308, doi:10.1029/2008GL033505.

[2] Zaccarelli, R., Bindi, D., Strollo, A., Quinteros, J., and Cotton, F. (2019), Stream2segment: An Open-Source Tool for Downloading, Processing, and Visualizing Massive Event-Based Seismic Waveform Datasets, Seismological Research Letters, 90(5), 2028–2038, doi:10.1785/0220180314.
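As a rough illustration of the final step of such a procedure, the energy magnitude can be obtained from an estimate of the radiated seismic energy Es via the standard relation Me = 2/3 (log10 Es − 4.4), with Es in joules; in practice Es itself is derived from attenuation-corrected broadband P-wave velocity spectra, as in [1]. A minimal Python sketch of the conversion (illustrative only, not the Stream2segment implementation):

```python
import math

def energy_magnitude(Es_joules):
    """Energy magnitude from radiated seismic energy Es (in joules),
    via the standard relation Me = 2/3 * (log10(Es) - 4.4)."""
    return (2.0 / 3.0) * (math.log10(Es_joules) - 4.4)

# Example: an event radiating ~10**13.4 J of seismic energy gives Me = 6.0
print(round(energy_magnitude(10 ** 13.4), 2))  # → 6.0
```

The hard part of the real procedure lies upstream of this formula, in estimating Es robustly from teleseismic records across an unbalanced station network.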

J
2021
Vol 4 (2)
pp. 147-153
Author(s):
Paula Morella
María Pilar Lambán
Jesús Antonio Royo
Juan Carlos Sánchez

Among the new trends in technology that have emerged with Industry 4.0, Cyber-Physical Systems (CPS) and the Internet of Things (IoT) are crucial for real-time data acquisition. This data acquisition, together with its transformation into valuable information, is indispensable for the development of real-time indicators. Moreover, real-time indicators provide companies with a competitive advantage over their competitors, since they improve the calculation of indicators and speed up decision-making and failure detection. Our research highlights the advantages of real-time data acquisition for supply chains, developing indicators that would be impossible to achieve with traditional systems, improving the accuracy of existing ones and enhancing real-time decision-making. Moreover, it brings out the importance of integrating Industry 4.0 technologies, in this case CPS and IoT, and establishes the main points of a future research agenda for this topic.
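As an illustration of the kind of real-time indicator such a CPS/IoT pipeline can feed, consider an Overall Equipment Effectiveness (OEE) style metric computed from streamed machine data. The sketch below is hypothetical, not taken from the paper; the figures are made up:

```python
# Hypothetical sketch: an OEE-style real-time indicator.
# OEE = availability * performance * quality, each derived from
# machine events that a CPS/IoT layer would stream in real time.

def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    availability = run_time / planned_time          # uptime fraction
    performance = (ideal_cycle_time * total_count) / run_time  # speed fraction
    quality = good_count / total_count              # first-pass yield
    return availability * performance * quality

# 480 min shift, 400 min running, 0.5 min ideal cycle, 700 parts, 665 good
print(round(oee(480, 400, 0.5, 700, 665), 3))  # → 0.693
```

With a traditional batch system such a figure would be computed after the shift; with real-time acquisition it can be refreshed on every event.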


2020
Vol 14
pp. 174830262096239
Author(s):
Chuang Wang
Wenbo Du
Zhixiang Zhu
Zhifeng Yue

With the wide application of intelligent sensors and the Internet of Things (IoT) in the smart job shop, a large volume of real-time production data is collected. Accurate analysis of the collected data can help producers make effective decisions. Compared with traditional data processing methods, artificial intelligence, as the main big data analysis method, is increasingly applied to the manufacturing industry. However, different AI models differ in their ability to process real-time data from smart job shop production. Based on this, a real-time big data processing method for the job shop production process based on Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks is proposed. This method uses the historical production data extracted from the IoT job shop as the original data set and, after data preprocessing, uses the LSTM and GRU models to train on and predict the real-time data of the job shop. Through the description and implementation of the model, it is compared with KNN, DT and traditional neural network models. The results show that in real-time big data processing of the production process, the performance of the LSTM and GRU models is superior to that of the traditional neural network, k-nearest neighbors (KNN) and decision tree (DT) models. While achieving performance similar to LSTM's, the GRU model requires much less training time.
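One reason a GRU trains faster than an LSTM at similar accuracy is architectural: it has three gate blocks instead of four, and hence roughly 25% fewer recurrent parameters per layer. A back-of-the-envelope count (the layer sizes are hypothetical, not from the paper):

```python
def lstm_params(x, h):
    # 4 gate blocks (input, forget, cell candidate, output), each with
    # input-to-hidden and hidden-to-hidden weights plus a bias vector
    return 4 * (h * (x + h) + h)

def gru_params(x, h):
    # 3 gate blocks (reset, update, candidate state)
    return 3 * (h * (x + h) + h)

x, h = 32, 64  # hypothetical input and hidden sizes
print(lstm_params(x, h), gru_params(x, h))  # → 24832 18624
```

The 3:4 parameter ratio translates directly into fewer multiply-accumulates per time step, which is consistent with the shorter GRU training times reported.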


1987
Vol 58 (4)
pp. 119-124
Author(s):
Gail M. Atkinson
David M. Boore

Abstract A stochastic model of ground motion has been used as a basis for comparison of observed and theoretically predicted relations between mN (commonly denoted mbLg) and moment magnitude for eastern North America (ENA) earthquakes. mN magnitudes are recomputed for several historical ENA earthquakes to ensure consistency of definition and provide a meaningful data set. We show that by itself the magnitude relation cannot be used as a discriminant between two specific spectral scaling relations proposed for ENA earthquakes, one with constant stress and the other with stress increasing with seismic moment.
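Stochastic ground-motion models of this kind are typically built on an omega-squared (Brune) source spectrum, whose corner frequency is set by the seismic moment and the stress parameter. A hedged Python sketch of the constant-stress scaling case (the numerical values are illustrative, not those of the paper):

```python
def corner_frequency(M0_dyne_cm, stress_drop_bars, beta_km_s=3.5):
    """Brune corner frequency in Hz, for M0 in dyne-cm and
    stress drop in bars (fc = 4.9e6 * beta * (dsigma / M0)**(1/3))."""
    return 4.9e6 * beta_km_s * (stress_drop_bars / M0_dyne_cm) ** (1.0 / 3.0)

def source_spectrum(f, M0, fc):
    """Omega-squared displacement source spectrum (unnormalized)."""
    return M0 / (1.0 + (f / fc) ** 2)

# Under constant-stress scaling (here 100 bars), larger events
# have lower corner frequencies:
fc_small = corner_frequency(10 ** 22, 100.0)  # roughly an Mw 4 event
fc_large = corner_frequency(10 ** 25, 100.0)  # roughly an Mw 6 event
print(fc_small > fc_large)  # → True
```

In the alternative scaling relation the stress parameter itself grows with moment, which changes how high-frequency amplitude, and hence mN, scales with Mw.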


2021
Vol 343
pp. 03005
Author(s):
Florina Chiscop
Bogdan Necula
Carmen Cristiana Cazacu
Cristian Eugen Stoica

This paper presents our research on creating a virtual model (digital twin) of a fast-food company's production chain, from the moment a customer places an order, through the processing of that order, until the customer receives it. The model describes the elements involved in this process, such as equipment, human resources and the space needed to host the layout. The virtual model, created in a simulation platform, replicates a real fast-food company, helping us observe the real-time dynamics of this production system. Using WITNESS HORIZON 23, we construct the model of the layout based on real-time data received from the fast-food company. This digital twin is used to manage the material flow of the production chain, evaluating the performance of the system architecture in various scenarios. To diagnose the system's performance, we simulate the workflow through the preliminary architecture, in compliance with the real-time behaviour, to identify bottlenecks and blockages in the flow trajectory. Finally, we propose two different optimised architectures for the fast-food company's production chain.
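WITNESS HORIZON is a proprietary platform, but the bottleneck-diagnosis idea can be sketched with a minimal discrete-event queue model in plain Python. The arrival and service times below are made up for illustration; a station whose utilization approaches 1 is a bottleneck and its queue grows:

```python
def simulate(arrival_times, service_time):
    """Deterministic single-server FIFO queue (e.g., one order-prep
    station). Returns (server utilization, mean customer waiting time)."""
    busy_until = 0.0
    total_wait = 0.0
    for t in arrival_times:
        start = max(t, busy_until)   # wait if the station is busy
        total_wait += start - t
        busy_until = start + service_time
    utilization = len(arrival_times) * service_time / busy_until
    return utilization, total_wait / len(arrival_times)

# Orders every 2 minutes, 3-minute preparation: the station saturates
# and waiting times grow with every order.
util, wait = simulate([2 * i for i in range(10)], 3.0)
print(round(util, 2), round(wait, 2))  # → 1.0 4.5
```

A full digital twin replaces these constants with live data from the shop floor and chains many such stations, which is what the WITNESS model does at scale.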


2019
Vol 31 (1)
pp. 265-290
Author(s):
Ganjar Alfian
Muhammad Fazal Ijaz
Muhammad Syafrudin
M. Alex Syaekhoni
Norma Latif Fitriyani
...

Purpose: The purpose of this paper is to propose customer behavior analysis based on real-time data processing and association rules for a digital signage-based online store (DSOS). Real-time data processing based on big data technology (such as NoSQL MongoDB and Apache Kafka) is utilized to handle the vast amount of customer behavior data.

Design/methodology/approach: In order to extract customer behavior patterns, customers' browsing history and transactional data from digital signage (DS) can be used as input for decision making. First, the authors developed a DSOS and installed it in different locations, so that customers could have the experience of browsing and buying a product. Second, the real-time data processing system gathered customers' browsing history and transaction data as they occurred. In addition, the authors utilized association rules to extract useful information from customer behavior, so it may be used by managers to efficiently enhance the service quality.

Findings: First, as the number of customers and DS units increases, the proposed system was capable of conveniently processing a gigantic amount of input data. Second, the data set showed that as the number of visits and the shopping duration increase, the chance of products being purchased also increases. Third, by combining purchasing and browsing data from customers, association rules from frequent transaction patterns were obtained; thus, products have a high possibility of being purchased if they are used as recommendations.

Research limitations/implications: This research empirically supports the theory of association rules, namely that frequent patterns, correlations or causal relationships can be found in various kinds of databases. The scope of the present study is limited to the DSOS, although the findings can be interpreted and generalized in a global business scenario.

Practical implications: The proposed system is expected to help management take decisions such as improving the layout of the DS and providing better product suggestions to customers.

Social implications: The proposed system may be utilized to promote green products to customers, having a positive impact on sustainability.

Originality/value: The key novelty of the present study lies in system development based on big data technology to handle the enormous amount of data, as well as in analyzing customer behavior in real time in the DSOS. The real-time data processing based on big data technology (such as NoSQL MongoDB and Apache Kafka) is used to handle the vast amount of customer behavior data. In addition, the present study proposes association rules to extract useful information from customer behavior. These results can be used for promotions as well as relevant product recommendations to DSOS customers. Besides, in today's changing retail environment, analyzing customer behavior in real time in a DSOS helps to attract and retain customers more efficiently and effectively, and retailers can gain a competitive advantage over their competitors.
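The support/confidence computation behind such association rules can be sketched in a few lines of Python; the transactions below are hypothetical, not the paper's data:

```python
# Illustrative sketch (not the authors' implementation): support and
# confidence of an association rule over a toy set of transactions.

transactions = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# Rule {milk} -> {bread}: seen in 2 of 4 baskets, and in 2 of the
# 3 baskets that contain milk.
print(support({"milk", "bread"}), round(confidence({"milk"}, {"bread"}), 2))
```

In the DSOS setting the "items" would be browsed and purchased products, with the rule mining run over the Kafka-fed store of behavior events.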


2020
Vol ahead-of-print (ahead-of-print)
Author(s):
Sandeep Kumar Singh
Mamata Jenamani

Purpose: The purpose of this paper is to design a supply chain database schema for Cassandra to store the real-time data generated by Radio Frequency IDentification (RFID) technology in a traceability system.

Design/methodology/approach: The real-time data generated in such traceability systems are of high frequency and volume, making them difficult to handle with traditional relational database technologies. To overcome this difficulty, a NoSQL database repository based on Cassandra is proposed. The efficacy of the proposed schema is compared with that of two such databases, document-based MongoDB and column-family-based Cassandra, which are suitable for storing traceability data.

Findings: The proposed Cassandra-based data repository outperforms the traditional Structured Query Language-based and MongoDB systems from the literature in terms of concurrent reading, and performs on par with respect to writing and updating tracing queries.

Originality/value: The proposed schema is able to store the real-time data generated in a supply chain with low latency. To test the performance of the Cassandra-based data repository, a test bed was designed in the lab and supply chain operations of the Indian Public Distribution System were simulated to generate data.
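The data-model idea behind such a schema, partitioning traceability events by RFID tag and clustering them by time so that a full trace query becomes a single ordered partition read, can be sketched in plain Python. The field names are illustrative, not taken from the paper:

```python
from bisect import insort

# Hypothetical sketch of the wide-row idea: one partition per RFID tag,
# events kept sorted by timestamp within the partition, so that tracing
# a product is a single ordered read (no joins, no sorting at query time).

class TraceStore:
    def __init__(self):
        self.partitions = {}  # tag_id -> sorted list of (timestamp, location)

    def write(self, tag_id, timestamp, location):
        insort(self.partitions.setdefault(tag_id, []), (timestamp, location))

    def trace(self, tag_id):
        """One-partition read: the tag's full, time-ordered history."""
        return self.partitions.get(tag_id, [])

store = TraceStore()
store.write("EPC-001", 3, "retailer")
store.write("EPC-001", 1, "warehouse")
store.write("EPC-001", 2, "distributor")
print(store.trace("EPC-001"))
# → [(1, 'warehouse'), (2, 'distributor'), (3, 'retailer')]
```

In Cassandra terms the tag would be the partition key and the timestamp a clustering column, which is what makes concurrent tracing reads cheap.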


2016
Vol 41 (1)
pp. 11-23
Author(s):
Michael Takeo Magruder
Jeremy Pilcher

Michael Takeo Magruder, visual artist and researcher, discusses his digital and new media art and practice with Jeremy Pilcher, lawyer and academic, whose research engages with the intersection of art and law. Takeo's work asks viewers to question their relationship both to and within the real-time data flows generated by emerging technologies and the implications these have for archives. His art concerns the way institutions use such systems to create narratives that structure societies. This conversation discusses how Takeo's practice invites us, as individuals, to critically reflect on the implications of the stories that are both told to and about us by using gathered and distributed data.

