The Iterative Processing Framework: A New Paradigm for Automatic Event Building

2019 ◽  
Vol 109 (6) ◽  
pp. 2501-2509
Author(s):  
Rigobert Tibi ◽  
Andre Encarnacao ◽  
Sanford Ballard ◽  
Christopher J. Young ◽  
Ronald Brogan ◽  
...  

Abstract In a traditional data-processing pipeline, waveforms are acquired, a detector makes signal detections (i.e., arrival times, slownesses, and azimuths) and passes them to an associator. The associator then links the detections to fitting event hypotheses to generate an event bulletin. Most of the time, this traditional pipeline requires substantial human-analyst involvement to improve the quality of the resulting event bulletin. For the year 2017, for example, International Data Center (IDC) analysts rejected about 40% of the events in the automatic bulletin and manually built 30% of the legitimate events. We propose an iterative processing framework (IPF) that includes a new data-processing module that incorporates automatic analyst behaviors (auto analyst [AA]) into the event-building pipeline. In the proposed framework, through an iterative process, the AA takes over many of the tasks traditionally performed by human analysts. These tasks can be grouped into two major processes: (1) evaluating small events with a low number of location-defining arrival phases to improve their formation; and (2) scanning for and exploiting unassociated arrivals to form potential events missed by previous association runs. To test the proposed framework, we processed a two-week period (15–28 May 2010) of the signal-detections dataset from the IDC. Comparison with an expert analyst-reviewed bulletin for the same time period suggests that the IPF performs better than the traditional pipelines (the IDC and baseline pipelines). Most of the additional events built by the AA are low-magnitude events that were missed by these traditional pipelines. The AA also adds signal detections to existing events, which saves analyst time, even if the event locations are not significantly affected.
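
The abstract describes the IPF control flow only at a high level. The minimal Python sketch below makes that loop concrete; the paper publishes no API, so every name here (detect, associate, AutoAnalyst, and their stub logic) is an invented placeholder standing in for the real detector, associator, and auto-analyst modules.

```python
# Hypothetical sketch of the IPF event-building loop described above.
# All names are invented for illustration; stubs stand in for real modules.

def detect(waveforms):
    # Stub detector: one arrival per waveform, initially unassociated.
    return [{"arrival": i, "associated": False} for i, _ in enumerate(waveforms)]

def associate(detections):
    # Stub associator: naively groups consecutive arrivals into "events".
    events = [detections[i:i + 4] for i in range(0, len(detections), 4)]
    for event in events:
        for det in event:
            det["associated"] = True
    return events

class AutoAnalyst:
    # Stand-in for the AA module; real logic would relocate events,
    # add or remove defining phases, and rerun signal-detection scans.
    def refine(self, event, detections):
        pass  # AA task 1: improve the formation of small events

    def scan_for_events(self, unassociated):
        return []  # AA task 2: build new events from leftover arrivals

def run_ipf(waveforms, max_iterations=3):
    detections = detect(waveforms)      # arrival times, slownesses, azimuths
    bulletin = associate(detections)    # initial automatic bulletin
    aa = AutoAnalyst()

    for _ in range(max_iterations):
        # (1) Re-evaluate small events with few location-defining phases.
        for event in [e for e in bulletin if len(e) < 4]:
            aa.refine(event, detections)

        # (2) Scan unassociated arrivals for events the associator missed.
        leftovers = [d for d in detections if not d["associated"]]
        new_events = aa.scan_for_events(leftovers)
        if not new_events:
            break                       # nothing new to build: converged
        bulletin.extend(new_events)

    return bulletin

print(len(run_ipf(range(10))))  # toy input: 10 "waveforms" -> 3 stub events
```

The early exit mirrors the iterative character of the framework: iteration stops once the AA can neither refine small events further nor assemble new ones from leftover arrivals.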

2015 ◽  
Vol 7 (18) ◽  
pp. 7715-7723 ◽  
Author(s):  
Hongbo Li ◽  
Quchao Zou ◽  
Ling Zou ◽  
Qin Wang ◽  
Kaiqi Su ◽  
...  

The system structure of the CIB detection instrument: cell-based impedance biosensor units, hardware module, and data processing module.


2016 ◽  
Vol 181 ◽  
pp. 139-146 ◽  
Author(s):  
Yingjie Xia ◽  
Jinlong Chen ◽  
Xindai Lu ◽  
Chunhui Wang ◽  
Chao Xu

Author(s):  
Jeffrey Hanson ◽  
Ronan Le Bras ◽  
Douglas Brumbaugh ◽  
Jerry Guern ◽  
Paul Dysart ◽  
...  

1994 ◽  
Vol 37 (3) ◽  
Author(s):  
F. Ringdal

The UN Conference on Disarmament's Group of Scientific Experts (GSE) was established in 1976 to consider international cooperative measures to detect and identify seismic events. Over the years, the GSE has developed and tested several concepts for an International Seismic Monitoring System (ISMS) for the purpose of assisting in the verification of a potential comprehensive test ban treaty. The GSE is now planning its third global technical test (GSETT-3) in order to test new and revised concepts for an ISMS. GSETT-3 will be an unprecedented global effort to conduct an operationally realistic test of rapid collection, distribution, and processing of seismic data. A global network of seismograph stations will provide data to an International Data Center, where the data will be processed and results made available to participants. The full-scale phase of GSETT-3 is scheduled to begin in January 1995.


2021 ◽  
Vol 2048 (1) ◽  
pp. 012028
Author(s):  
Lerui Zhang ◽  
Ding She ◽  
Lei Shi ◽  
Richard Chambon ◽  
Alain Hébert

Abstract The XPZ code was previously developed for lattice physics computation in High Temperature Gas-cooled Reactors (HTGRs), and adopted a multi-group cross section library converted from the existing open-source DRAGON library. In this paper, a new format of multi-group cross section library named XPZLIB has been implemented in the XPZ code. XPZLIB is designed in binary and HDF5 formats, including detailed data contents for resonance, transport, and depletion calculations. A new data-processing module named XPZR is developed based on NJOY-2016 to generate nuclide-dependent XPZLIB from the most recent evaluated nuclear data; in addition, the PyNjoy-2016 system is developed for automatic generation of an integrated XPZLIB including a complete set of nuclides. The newly generated XPZLIB is tested with the XPZ code. Numerical results demonstrate the accuracy of the new library XPZLIB and the reliability of the data processing scheme. Moreover, the influence of different versions of ENDF/B data is investigated.
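
The abstract states that XPZLIB has an HDF5 variant but does not document its internal layout. As a rough illustration of how such a multi-group library might be read, here is a short Python sketch using h5py; the group and dataset names are invented for demonstration and will not match the real library.

```python
# Illustrative only: XPZLIB's real HDF5 layout is not published in the
# abstract. The paths below ("nuclides/<name>", "total_xs", ...) are
# hypothetical placeholders.
import h5py

def load_multigroup_xs(path, nuclide):
    """Read hypothetical multi-group cross sections for one nuclide."""
    with h5py.File(path, "r") as lib:
        grp = lib[f"nuclides/{nuclide}"]
        return {
            "energy_bounds": grp["energy_bounds"][:],  # group structure (eV)
            "total": grp["total_xs"][:],               # one value per group (barns)
            "scatter": grp["scatter_matrix"][:],       # group-to-group transfer matrix
        }

# Example call against a hypothetical library file:
# xs = load_multigroup_xs("xpzlib.h5", "U235")
# print(xs["total"].shape)
```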


Author(s):  
Daniel Warneke

In recent years, so-called Infrastructure as a Service (IaaS) clouds have become increasingly popular as a flexible and inexpensive platform for ad-hoc parallel data processing. Major players in the cloud computing space like Amazon EC2 have already recognized this trend and started to create special offers which bundle their compute platform with existing software frameworks for these kinds of applications. However, the data processing frameworks currently used in these offers were designed for static, homogeneous cluster systems and do not support the new features that distinguish the cloud platform. This chapter examines the characteristics of IaaS clouds with special regard to massively parallel data processing. The author highlights use cases that are currently poorly supported by existing parallel data processing frameworks and explains how a tighter integration between the processing framework and the underlying cloud system can help to lower the monetary processing cost for the cloud customer. As a proof of concept, the author presents the parallel data processing framework Nephele and compares its cost efficiency against that of the well-known software Hadoop.
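
The chapter's cost argument can be illustrated with a back-of-the-envelope calculation. The sketch below compares a static cluster, which holds its peak VM count for the whole job, against elastic stage-by-stage allocation of the kind a cloud-aware framework like Nephele enables; the hourly price and the job profile are invented for illustration, not taken from the chapter.

```python
# Back-of-the-envelope cost comparison motivating the chapter's argument:
# a static cluster keeps every VM allocated for the whole job, while a
# cloud-aware framework can release VMs stage by stage.
# Prices and stage durations are hypothetical.

PRICE_PER_VM_HOUR = 0.10          # hypothetical on-demand rate (USD)

# (vm_count, hours) per pipeline stage of a hypothetical job
stages = [(8, 1.0),               # wide, short map-style stage
          (2, 3.0)]               # narrow, long reduce-style stage

peak_vms = max(n for n, _ in stages)
total_hours = sum(h for _, h in stages)

static_cost = peak_vms * total_hours * PRICE_PER_VM_HOUR
elastic_cost = sum(n * h for n, h in stages) * PRICE_PER_VM_HOUR

print(f"static cluster:  ${static_cost:.2f}")   # 8 VMs held for 4 h -> $3.20
print(f"elastic scaling: ${elastic_cost:.2f}")  # 8*1h + 2*3h      -> $1.40
```

Under these made-up numbers, releasing idle VMs between stages cuts the bill by more than half, which is the kind of saving the chapter attributes to tighter framework-cloud integration.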

