Kamchatka Seismic Monitoring and Earthquake Prediction System and its evolution. Main results of observations in 2016-2020

Author(s):  
Danila Chebrov ◽  
Sergey Tikhonov ◽  
Dmitry Droznin ◽  
Svetlana Droznina ◽  
Evgeny Matveenko ◽  
...  

In this paper we present a brief review of the results of Kamchatka Seismic Monitoring and Earthquake Prediction System operations over the last five years, together with a retrospective of the development of the System's hardware, equipment and software. The main direction of the System's evolution in this period was the creation and modernization of data acquisition and processing methods. One of the main results is the creation of a unified information space that covers all stages of seismic observation, from data acquisition to the exchange of processing results (including exchange with external users). In particular, the data storage system was deeply modernized, high-speed access to the data archive was provided, high-performance computing clusters were deployed, and all seismic stations were combined into a unified network. Development of algorithms and software for data processing and seismic regime monitoring continued. The creation and development of the Seismological Data Information System (SDIS) provides the research community with access to seismic observation results; a service for automatic data exchange with external users was created and incorporated into SDIS. In 2016-2020 the Kamchatka Seismic Monitoring and Earthquake Prediction System registered and processed over 83 thousand tectonic and volcanic earthquakes, and complex studies were conducted for the seven strongest of them. Detailed analysis showed that the magnitude of completeness is MLc=2.5 at the regional scale and MLc=–0.2 at the local scale (for example, for volcano seismic monitoring).
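
As an illustration of how a catalogue-wide magnitude of completeness such as MLc=2.5 can be estimated, the sketch below applies the standard maximum-curvature method to a list of magnitudes. The 0.1-unit binning and the synthetic sample catalogue are assumptions made for the example, not data or code from the System described above.

    import numpy as np

    def magnitude_of_completeness(mags, bin_width=0.1):
        """Maximum-curvature estimate of Mc: the centre of the most
        populated magnitude bin in the non-cumulative distribution."""
        mags = np.asarray(mags)
        bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
        counts, edges = np.histogram(mags, bins=bins)
        peak = np.argmax(counts)                  # most populated bin
        return round(edges[peak] + bin_width / 2, 1)

    # Hypothetical catalogue: a b=1 Gutenberg-Richter tail above Mc ~ 2.5.
    rng = np.random.default_rng(0)
    sample = rng.exponential(scale=1.0 / np.log(10), size=20000) + 2.5
    print("estimated Mc:", magnitude_of_completeness(sample))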


2014 ◽  
Vol 912-914 ◽  
pp. 1556-1560
Author(s):  
Sheng Kun Li ◽  
Cheng Qun Chu ◽  
Hai Liang Chen ◽  
Fang Ma

Large capacity, high speed and low power consumption have become the new requirements for data storage systems. In this paper, a high-performance storage module based on multiple NAND flash memory chips is presented for real-time massive data acquisition systems. To meet the design requirements of miniaturized dimensions and high-speed data storage, the paper presents a small, high-speed storage unit based on NAND flash whose dimensions reach 33 mm × 33 mm and whose maximum rate is up to 60 MB/s. Ensuring continuous and reliable operation requires dedicated buffering for the data transmission. We analyze the characteristics and peculiarities of the flash memory chip and propose a multi-way architecture to speed up data access. The design of a multilevel high-speed buffer structure based on field programmable gate array (FPGA) technology is introduced in the paper. The proposed system is applicable to portable digital equipment.
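
To make the multi-way idea concrete, the back-of-the-envelope sketch below estimates how many interleaved NAND flash chips are needed to sustain 60 MB/s. The page size and page-program time are assumed typical SLC values chosen for illustration, not figures taken from the paper.

    import math

    # Assumed per-chip parameters (typical SLC NAND, not from the paper):
    PAGE_SIZE_BYTES = 2048          # 2 KB page
    PAGE_PROGRAM_TIME_S = 250e-6    # ~250 us typical page-program time

    per_chip_mb_s = PAGE_SIZE_BYTES / PAGE_PROGRAM_TIME_S / 1e6   # ~8.2 MB/s
    target_mb_s = 60.0

    # With N chips programmed in an interleaved, pipelined fashion the sustained
    # rate is roughly N * per_chip_mb_s (ignoring bus and FPGA buffering overhead).
    chips_needed = math.ceil(target_mb_s / per_chip_mb_s)
    print(f"per-chip sustained write: {per_chip_mb_s:.1f} MB/s")
    print(f"chips needed for {target_mb_s} MB/s: {chips_needed}")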


2014 ◽  
Vol 971-973 ◽  
pp. 1581-1585 ◽  
Author(s):  
Jun Liu ◽  
Yan Tian ◽  
Wei Hao ◽  
Lei Qu

To meet the requirement for high-speed data exchange in embedded systems, this paper details the high-speed SRIO (Serial RapidIO) interface protocol and the SRIO access timing between local and remote endpoint devices. We also implement a new high-performance RapidIO interconnection between a DSP and an FPGA. Performance testing of the SRIO data transmission system shows that the design can transfer data stably at high speed between processors.
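
For orientation, the sketch below computes the raw payload ceiling of a common SRIO link configuration. The 4x lane width, 3.125 Gbaud line rate and 8b/10b coding are assumptions chosen for illustration, not parameters reported by the authors.

    # Theoretical payload ceiling of an SRIO link (illustrative assumptions).
    LANES = 4                   # 4x port
    LINE_RATE_GBAUD = 3.125     # per-lane line rate
    CODING_EFFICIENCY = 8 / 10  # 8b/10b encoding used by SRIO Gen1/Gen2

    raw_gbaud = LANES * LINE_RATE_GBAUD            # 12.5 Gbaud aggregate
    data_gbps = raw_gbaud * CODING_EFFICIENCY      # 10 Gbit/s after 8b/10b
    print(f"aggregate line rate  : {raw_gbaud:.2f} Gbaud")
    print(f"post-coding data rate: {data_gbps:.2f} Gbit/s "
          f"(~{data_gbps / 8 * 1000:.0f} MB/s before packet overhead)")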


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6793
Author(s):  
Petr Ilgner ◽  
Petr Cika ◽  
Martin Stusek

Recent developments in massive machine-type communication (mMTC) scenarios have given rise to unprecedented requirements, which triggered the Industry 4.0 revolution. The new scenarios put even more pressure on reliability and communication security and on the flawless functionality of critical infrastructure, e.g., smart grid infrastructure. We discuss typical network grid architecture, communication strategies, and methods for building a scalable, high-speed data processing and storage platform. This paper focuses on data transmission using the IEC 60870-6 (ICCP/TASE.2) set of standards. The main goal is to introduce a TASE.2 traffic generator and a data collection back-end with load balancing functionality in order to understand the limits of the protocols currently used in smart grids. To this end, we developed an assessment framework for generating and collecting TASE.2 communication, with long-term data storage providing high availability and load balancing capabilities. The designed proof of concept supports complete cryptographic security and allows users to perform complex testing and verification of TASE.2 network node configurations. The implemented components were tested in a cloud-based Microsoft Azure environment in four geographically separated locations. The findings indicate high performance and scalability of the proposed platform, which also allows the generator to be used for high-speed load testing. The load balancer's CPU usage stays below 15% while processing 5000 messages per second, which makes it possible to achieve up to a 7-fold performance improvement, i.e., processing up to 35,000 messages per second.
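
The headroom argument in the last sentence reduces to simple proportional scaling across load-balanced collectors; the short sketch below illustrates both the arithmetic and a minimal round-robin dispatch loop. The collector names are made up for illustration and do not come from the authors' TASE.2 implementation.

    from itertools import cycle

    # Scaling arithmetic from the abstract: ~15% CPU at 5000 msg/s leaves
    # roughly 1/0.15 ~ 6.7x headroom, in line with the reported 7-fold figure.
    cpu_share, measured_rate = 0.15, 5000
    print(f"estimated ceiling: ~{measured_rate / cpu_share:,.0f} msg/s")

    # Minimal round-robin dispatcher (collector names are hypothetical).
    collectors = ["collector-a", "collector-b", "collector-c"]
    ring = cycle(collectors)

    def dispatch(message: bytes) -> str:
        """Hand the next TASE.2 message to the next collector in the ring."""
        target = next(ring)
        # In a real deployment this would be a network send; here we just report it.
        return target

    for i in range(5):
        print(i, "->", dispatch(b"tase2-apdu"))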


2018 ◽  
Vol 4 ◽  
pp. e144 ◽  
Author(s):  
Kyle Chard ◽  
Eli Dart ◽  
Ian Foster ◽  
David Shifflett ◽  
Steven Tuecke ◽  
...  

We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.
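
As a taste of the Python APIs mentioned above, here is a minimal sketch using the Globus Python SDK (globus_sdk) to authenticate and submit a transfer between two endpoints; the client ID, endpoint UUIDs and file paths are placeholders, and the full, maintained skeletons live at the companion site https://docs.globus.org/mrdp.

    import globus_sdk

    CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"        # placeholder
    SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"          # placeholder
    DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"     # placeholder

    # Interactive native-app login to obtain a transfer token.
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow()
    print("Log in at:", auth_client.oauth2_get_authorize_url())
    code = input("Authorization code: ").strip()
    tokens = auth_client.oauth2_exchange_code_for_tokens(code)
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    # Submit an asynchronous, managed transfer between the two endpoints.
    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))
    task = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="portal demo")
    task.add_item("/shared/dataset.tar", "/incoming/dataset.tar")
    print("task id:", tc.submit_transfer(task)["task_id"])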


2012 ◽  
Vol 433-440 ◽  
pp. 4565-4570
Author(s):  
Guo Sheng Xu

For the project described in this article, an FPGA-based image capture and processing system is proposed. A low-cost, high-performance FPGA is selected as the main core, and the whole system, including both software and hardware, is designed and implemented. The system provides high-speed data collection, high-speed video data compression, real-time network transmission of video data, and real-time storage of the compressed picture data. The processed data are transferred to a PC through USB 2.0 in real time to reconstruct microscopic images of defects. Experimental results confirm that the algorithm and scheme proposed in this paper are correct and feasible.
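
Because the PC link is USB 2.0, the sustainable video rate is bounded by the bus rather than by the FPGA; the sketch below estimates the compression ratio needed for an assumed camera format. The resolution, frame rate and practical USB throughput are illustrative assumptions, not values from the paper.

    # How much compression does a USB 2.0 link require? (illustrative numbers)
    WIDTH, HEIGHT, BYTES_PER_PIXEL, FPS = 1280, 1024, 1, 30   # assumed 8-bit mono camera
    USB2_PRACTICAL_MB_S = 35.0    # high-speed USB 2.0 rarely sustains more in practice

    raw_mb_s = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS / 1e6
    ratio = raw_mb_s / USB2_PRACTICAL_MB_S
    print(f"raw video rate      : {raw_mb_s:.1f} MB/s")
    if ratio > 1:
        print(f"minimum compression : {ratio:.1f}x")
    else:
        print("raw stream already fits; compression only buys headroom")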


Author(s):  
N. Yoshimura ◽  
K. Shirota ◽  
T. Etoh

One of the most important requirements for a high-performance EM, especially an analytical EM using a fine beam probe, is to prevent specimen contamination by providing a clean high vacuum in the vicinity of the specimen. However, in almost all commercial EMs, the pressure in the vicinity of the specimen under observation is usually more than ten times higher than the pressure measured at the pumping line. The EM column inevitably requires the use of greased Viton O-rings for fine movement, and specimens and films need to be exchanged frequently and several attachments may also be exchanged. For these reasons, a high-speed pumping system, as well as a clean vacuum system, is now required. A newly developed electron microscope, the JEM-100CX, features clean high vacuum in the vicinity of the specimen, realized by the use of a CASCADE-type diffusion pump system that is an essential improvement over its predecessor employed on the JEM-100C.


Author(s):  
Marc H. Peeters ◽  
Max T. Otten

Over the past decades, the combination of energy-dispersive analysis of X-rays and scanning electron microscopy has proved to be a powerful tool for fast and reliable elemental characterization of a large variety of specimens. The technique has evolved rapidly from a purely qualitative characterization method to a reliable quantitative way of analysis. In the last 5 years, an increasing need for automation has been observed, whereby energy-dispersive analysers control the beam and stage movement of the scanning electron microscope in order to collect digital X-ray images and perform unattended point analysis over multiple locations. The Philips High-speed Analysis of X-rays system (PHAX-Scan) makes use of the high-performance dual-processor structure of the EDAX PV9900 analyser and the databus structure of the Philips series 500 scanning electron microscope to provide a highly automated, user-friendly and extremely fast microanalysis system. The software that runs on the hardware described above was specifically designed to provide the ultimate attainable speed on the system.

