The influence of environmental microseismicity on detection and interpretation of small-magnitude events in a polar glacier setting

2020 ◽  
Vol 66 (259) ◽  
pp. 790-806
Author(s):  
Chris G. Carr ◽  
Joshua D. Carmichael ◽  
Erin C. Pettit ◽  
Martin Truffer

Abstract: Glacial environments exhibit temporally variable microseismicity. To investigate how microseismicity influences event detection, we implement two noise-adaptive digital power detectors to process seismic data from Taylor Glacier, Antarctica. We add scaled icequake waveforms to the original data stream, run the detectors on the hybrid data stream to estimate reliable detection magnitudes, and compare these against analytical magnitudes predicted from an ice crack source model. We find that detection capability is influenced by environmental microseismicity for seismic events with source sizes comparable to thermal penetration depths. When event counts and minimum detectable event sizes change in the same direction (i.e. both increase), we interpret measured seismicity changes as ‘true’ seismicity changes rather than as changes in detection. Generally, one detector (two degrees of freedom (2dof)) outperforms the other: it identifies more events and a more prominent summertime diurnal signal, and maintains a higher detection capability. We conclude that real physical processes are responsible for the summertime diurnal inter-detector difference. One detector (3dof) identifies this process as environmental microseismicity; the other detector (2dof) identifies it as elevated waveform activity. Our analysis provides an example of minimizing detection biases and estimating source sizes when interpreting temporal seismicity patterns to better infer glacial seismogenic processes.
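The hybrid-stream idea — inject scaled icequake waveforms, then check what the detector recovers — can be sketched with a much simpler noise-adaptive power detector than the paper's 2dof/3dof statistical detectors. Everything below (window sizes, threshold, the injected transient) is an illustrative assumption, not the authors' configuration.

```python
import math, random

def sliding_power(x, win):
    """Mean power of x over consecutive non-overlapping windows."""
    return [sum(v * v for v in x[i:i + win]) / win
            for i in range(0, len(x) - win + 1, win)]

def detect(stream, win=8, noise_wins=4, threshold=5.0):
    """Flag windows whose power exceeds `threshold` times a running
    noise estimate (median power of the preceding `noise_wins` windows)."""
    powers = sliding_power(stream, win)
    hits = []
    for k in range(noise_wins, len(powers)):
        past = sorted(powers[k - noise_wins:k])
        noise = past[len(past) // 2]      # median is robust to one transient
        if noise > 0 and powers[k] / noise > threshold:
            hits.append(k)
    return hits

# Hybrid-stream check in the spirit of the paper: inject a scaled
# "icequake" transient into background noise and confirm detection.
random.seed(1)
hybrid = [random.gauss(0, 0.1) for _ in range(400)]
for i in range(200, 216):
    hybrid[i] += 2.0 * math.sin(0.8 * (i - 200))
```

Because the noise estimate adapts to the recent past, the same absolute event size may or may not be detected depending on background microseismicity — the coupling the paper quantifies.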

Author(s):  
Dan Yu ◽  
Farook Sattar

This chapter focuses on the issue of transaction tracking in multimedia distribution applications through digital watermarking technology. The existing watermarking schemes are summarized, and their assumptions as well as their limitations for tracking are analyzed. In particular, an Independent Component Analysis (ICA)-based watermarking scheme is proposed, which can overcome the problems of the existing schemes. A multiple-watermarking technique is exploited: one watermark identifies the rightful owner of the work, and the other identifies the legal user of a copy of the work. In the absence of the original data, watermark, embedding locations and strengths, the ICA-based scheme enables efficient watermark extraction with only some side information. The robustness of the proposed scheme against common signal-processing attacks, as well as related future work, is also presented. Finally, some challenging issues in multimedia transaction tracking through digital watermarking are discussed.
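The multiple-watermarking idea (one owner mark, one user mark in the same work) can be illustrated with plain additive spread-spectrum embedding and a correlation detector; the ICA-based extraction machinery of the chapter is not reproduced here, and the keys, strength, and signal below are toy assumptions.

```python
import random

def prng_watermark(key, n):
    """Pseudorandom +/-1 sequence derived from a key (owner or user)."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(signal, key, strength=0.1):
    """Additive spread-spectrum embedding of one watermark."""
    w = prng_watermark(key, len(signal))
    return [s + strength * wi for s, wi in zip(signal, w)]

def correlate(signal, key):
    """Normalised correlation detector: a value near `strength` means
    the watermark generated from `key` is present; near 0 means absent."""
    w = prng_watermark(key, len(signal))
    return sum(s * wi for s, wi in zip(signal, w)) / len(signal)

# Embed an owner watermark and a user watermark in the same work,
# as in the chapter's transaction-tracking scenario.
random.seed(0)
work = [random.gauss(0, 1) for _ in range(10000)]
marked = embed(embed(work, "owner"), "user-42")
```

Both marks survive in the same copy because pseudorandom sequences from different keys are nearly orthogonal, so each correlation picks out only its own watermark.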


2020 ◽  
Vol 12 (9) ◽  
pp. 142
Author(s):  
Zhijun Wu ◽  
Bohua Cui

Aiming at the problem of low interconnection efficiency caused by the wide variety of data in SWIM (System-Wide Information Management) and its inconsistent data naming methods, this paper proposes a new TLC (Type-Length-Content) structured hybrid data naming scheme combined with Bloom filters. This solution can meet the uniqueness and durability requirements of SWIM data names, avoid the “suffix loopholes” encountered in prefix-based route aggregation under hierarchical naming, and realize scalable and effective route state aggregation. Simulation results show that the hybrid naming scheme outperforms prefix-based aggregation in the probability of route identification errors. In terms of search time, the scheme improves by 17.8% and 18.2%, respectively, over the commonly used hierarchical and flat naming methods. Compared with the same two naming methods, scalability improves by 19.1% and 18.4%, respectively.
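The Bloom-filter side of the scheme rests on a standard construction: a bit array plus k hash functions gives constant-time membership tests on names, with false positives possible but false negatives impossible. A minimal sketch (the SWIM-style names and the m, k parameters are made up for illustration; the paper's TLC structure is not reproduced):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for membership tests on data names."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _hashes(self, name):
        # Derive k independent hash positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{name}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, name):
        for h in self._hashes(name):
            self.bits[h] = True

    def __contains__(self, name):
        # May yield false positives, never false negatives.
        return all(self.bits[h] for h in self._hashes(name))

bf = BloomFilter()
for name in ("swim/flight/KE123", "swim/weather/metar/RKSI"):
    bf.add(name)
```

Sizing m and k against the expected name count controls the false-positive rate, which is the trade-off behind using Bloom filters for compact route state.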


Author(s):  
Mohammad Reza Ebrahimi Dishabi ◽  
Mohammad Abdollahi Azgomi

Most of the existing privacy-preserving clustering (PPC) algorithms do not consider worst-case privacy guarantees and are based on heuristic notions. In addition, these algorithms do not run efficiently for high-dimensional data. In this paper, to alleviate these challenges, we propose a new PPC algorithm, which is based on the Daubechies-2 wavelet transform (D2WT) and preserves the differential privacy notion. Differential privacy is a strong notion of privacy that provides worst-case guarantees. On the other hand, most of the existing differential-privacy-based PPC algorithms generate data with poor utility. If we apply differential privacy directly to the original raw data, the resulting data will offer lower quality of clustering (QOC) during clustering analysis. Therefore, we use D2WT to preprocess the original data before adding noise. By applying D2WT, the resulting data not only has lower dimensionality than the original data, but can also provide a differential privacy guarantee with high QOC, because less noise needs to be added. The proposed algorithm has been implemented and evaluated over some well-known datasets. We also compare the proposed algorithm with some recently introduced algorithms in terms of utility and privacy.
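The transform-then-perturb pipeline can be sketched in a few lines: one analysis level of the db2 wavelet halves the dimensionality, and Laplace noise is then added to the approximation coefficients. The periodic-extension convention and the noise scale of 0.1 are illustrative assumptions, not the paper's calibrated privacy budget.

```python
import math, random

# Daubechies-2 analysis filters (standard db2/D4 coefficients).
_r3, _r2 = math.sqrt(3), math.sqrt(2)
LO = [(1 + _r3) / (4 * _r2), (3 + _r3) / (4 * _r2),
      (3 - _r3) / (4 * _r2), (1 - _r3) / (4 * _r2)]
HI = [LO[3], -LO[2], LO[1], -LO[0]]

def dwt_db2(x):
    """One level of the D2WT with periodic extension; returns the
    approximation and detail coefficients, each half the input length."""
    n = len(x)
    approx = [sum(LO[k] * x[(2 * i + k) % n] for k in range(4))
              for i in range(n // 2)]
    detail = [sum(HI[k] * x[(2 * i + k) % n] for k in range(4))
              for i in range(n // 2)]
    return approx, detail

def laplace_noise(scale, rng):
    """Sample from Laplace(0, scale) by inverse-CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

# Pipeline sketch: transform, then perturb only the lower-dimensional
# approximation coefficients before releasing them for clustering.
rng = random.Random(7)
data = [math.sin(0.1 * t) for t in range(64)]
approx, detail = dwt_db2(data)
private = [a + laplace_noise(0.1, rng) for a in approx]
```

Because the periodized db2 transform is orthogonal, it preserves signal energy while concentrating it in the approximation band — which is why less noise suffices for the same privacy level after the transform.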


2019 ◽  
Vol 27 (4) ◽  
pp. 435-454 ◽  
Author(s):  
Gary King ◽  
Richard Nielsen

We show that propensity score matching (PSM), an enormously popular method of preprocessing data for causal inference, often accomplishes the opposite of its intended goal—thus increasing imbalance, inefficiency, model dependence, and bias. The weakness of PSM comes from its attempts to approximate a completely randomized experiment, rather than, as with other matching methods, a more efficient fully blocked randomized experiment. PSM is thus uniquely blind to the often large portion of imbalance that can be eliminated by approximating full blocking with other matching methods. Moreover, in data balanced enough to approximate complete randomization, either to begin with or after pruning some observations, PSM approximates random matching which, we show, increases imbalance even relative to the original data. Although these results suggest researchers replace PSM with one of the other available matching methods, propensity scores have other productive uses.
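The PSM procedure the paper critiques can be reduced to a tiny sketch: match each treated unit to the nearest control unit on a single scalar, the propensity score. The greedy 1:1 matcher and the scores below are toy assumptions (real PSM estimates scores, e.g. by logistic regression, and typically applies a caliper); the paper's point is that collapsing all covariates to this one scalar discards the within-score imbalance that blocked matching methods can remove.

```python
def psm_match(treated, control):
    """Greedily match each treated unit to the nearest unused control
    unit by propensity score; units are (id, score) pairs."""
    pairs, used = [], set()
    for tid, ts in treated:
        best, best_d = None, float("inf")
        for cid, cs in control:
            if cid not in used and abs(ts - cs) < best_d:
                best, best_d = cid, abs(ts - cs)
        if best is not None:
            used.add(best)
            pairs.append((tid, best))
    return pairs

# Toy scores: t1 pairs with c3 (|0.80-0.78|), t2 with c2 (|0.55-0.52|).
treated = [("t1", 0.80), ("t2", 0.55)]
control = [("c1", 0.10), ("c2", 0.52), ("c3", 0.78)]
```

Two units with identical scores can still differ arbitrarily in their covariates, which is exactly the "blindness" the abstract describes.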


2020 ◽  
Author(s):  
Myunghyun Noh

In most seismic studies, we prefer an earthquake catalog that covers a larger region and/or a longer period, and we usually combine two or more catalogs to achieve this goal. When combining catalogs, however, care must be taken: their completeness is not identical, so unexpected flaws may be introduced.

We tested the effect of combining inhomogeneous catalogs using the catalog of the Korea Meteorological Administration (KMA). KMA provides a single catalog containing the earthquakes that occurred in and around the whole Korean Peninsula. Like other seismic networks, however, the configuration of the KMA seismic network is not uniform over its target monitoring region, and neither is its earthquake detection capability. The network is denser on land than offshore, and no seismic information is available from North Korea. Based on this, we divided the KMA catalog into three sub-catalogs: SL, NL, and AO. The SL catalog contains the earthquakes that occurred on land in South Korea, the NL catalog those on land in North Korea, and the AO catalog all earthquakes that occurred offshore around the peninsula.

The completeness of a catalog is expressed in terms of m_c, the minimum magnitude above which no earthquakes are missing. We used the Chi-square algorithm of Noh (2017) to estimate m_c. As expected, the m_c of SL is the smallest of the three, while those of NL and AO are comparable. The m_c of the catalog combining SL and AO is larger than those of the individual catalogs before combining, and m_c is largest when all three are combined. If a more complete catalog is needed, the catalog should be divided into smaller ones based on the spatiotemporal detectability of the seismic network; alternatively, several catalogs may be combined to cover a larger region or a longer period at the expense of catalog completeness.
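The effect described above — combining a complete catalog with a less complete one raises the m_c of the result — can be reproduced on synthetic data. The abstract's Chi-square estimator (Noh, 2017) is replaced here by the common maximum-curvature heuristic, and all catalog parameters are made up for illustration.

```python
import math, random
from collections import Counter

def mc_maxc(magnitudes, bin_width=0.1):
    """Completeness magnitude m_c via the maximum-curvature heuristic:
    the modal magnitude bin (a common stand-in; the paper itself uses
    the Chi-square algorithm of Noh, 2017)."""
    bins = Counter(round(m / bin_width) for m in magnitudes)
    return max(bins, key=bins.get) * bin_width

def synth_catalog(n, mc, rng, b=1.0, keep_below=0.1):
    """Synthetic Gutenberg-Richter catalog: complete above mc, with
    most events in the half-unit below mc missed by the network."""
    beta = b * math.log(10)
    out = []
    while len(out) < n:
        mag = mc - 0.5 + rng.expovariate(beta)
        if mag >= mc or rng.random() < keep_below:
            out.append(mag)
    return out

rng = random.Random(3)
dense = synth_catalog(2000, 1.5, rng)    # dense sub-network, low threshold
sparse = synth_catalog(4000, 2.5, rng)   # sparse sub-network, high threshold
combined = dense + sparse                # combining degrades completeness
```

The combined catalog's m_c is governed by its least complete component, which is the abstract's core caution about merging inhomogeneous catalogs.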


Author(s):  
Tomoe Entani

Organizations are interested in exploiting data from other organizations for better analyses. Therefore, the data-related policies of organizations should be sensitive to the data privacy issue, which has been widely discussed recently. The present study focuses on inter-group data usage for relative evaluation. This research is based on data envelopment analysis (DEA), which measures the efficiency of a decision making unit (DMU) relative to the other DMUs in a group. In DEA, establishing an efficient frontier consisting of efficient DMUs is essential: the efficiency value of a DMU is obtained by projecting it onto the efficient frontier, and its efficiency interval is obtained via interval DEA. When the original data of multiple groups are not open to each other, the alternative is to exchange information corresponding to the efficient frontiers, so that the efficiency interval of a DMU can be estimated as if it were in the other groups. Therefore, in this paper, we propose a method to replace the efficient frontier with a weight vector set, from which the original data cannot be reconstructed. Considering the weight vector sets of multiple groups, a DMU has three types of efficiency intervals: in its own group, in each of the other groups, and in the integrated group. These provide rich insights on the DMU from a broad perspective, which encourages inter-group data usage. In this process, we focus on two types of information reduction: one from the efficient frontier to the weight vector set, and the other from a union of the groups to the integrated group.
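The privacy mechanism above — score a DMU against another group's published weight vector set instead of its raw data — can be sketched as follows. The scoring rule (best weighted output/input ratio normalised by the publishing group's best peer ratio) and all numbers are illustrative assumptions, not the paper's exact formulation.

```python
def efficiency(dmu, weight_set):
    """Efficiency interval of `dmu` over a shared weight vector set:
    the worst and best weighted output/input ratio, each normalised
    by the owning group's best peer ratio under the same weights."""
    ratios = []
    for w_in, w_out, peer_best in weight_set:
        inp = sum(w * x for w, x in zip(w_in, dmu["inputs"]))
        out = sum(w * y for w, y in zip(w_out, dmu["outputs"]))
        ratios.append((out / inp) / peer_best)
    return min(ratios), max(ratios)

# A group publishes weight vectors plus its best peer ratio under each,
# instead of its raw data; another group's DMU is then scored against it.
weight_set = [
    ((1.0, 0.0), (1.0,), 2.0),   # (input weights, output weights, best ratio)
    ((0.5, 0.5), (1.0,), 1.6),
]
dmu = {"inputs": (2.0, 2.0), "outputs": (3.0,)}
```

Only aggregate weight information crosses the group boundary, so individual DMUs' inputs and outputs in the other group stay private while still yielding an efficiency interval.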


2019 ◽  
Vol 3 (2) ◽  
pp. 85
Author(s):  
Susastro Susastro ◽  
Novi Indah Riani

Vibration is one of the problems that must be reduced in a vehicle. There are many ways to reduce vibration in vehicles; one of them is adding a dynamic vibration absorber (DVA). A translational DVA is an additional mass attached so that it moves in the translational direction, reducing translational vibration near resonance. So far, little research has addressed the use of translational DVAs to reduce rotational as well as translational vibration. In this study, we investigate the use of an independent dual translational DVA (dDVA) to reduce the translational and rotational vibration of a beam. The system was modeled mathematically, and simulations were carried out to determine the resulting vibration characteristics. In the simulation, one DVA mass is placed at the center of mass of the main system, while the position of the other DVA mass is varied between the center and the end of the beam. The results show that the maximum reduction in translational vibration is 95.51%, which occurs when the absorber is placed at the center of the system, while the maximum reduction in rotational vibration is 56.62%, obtained with arm ratios of 1 and zero.


2021 ◽  
Vol 13 (1) ◽  
pp. 1-20
Author(s):  
Hayder K. Fatlawi ◽  
Attila Kiss

Abstract Most typical data mining techniques are developed by training on batch data, which makes mining a data stream a significant challenge. On the other hand, providing a mechanism to perform data mining operations without revealing the patient’s identity is of increasing importance in the data mining field. In this work, a classification model with differential privacy is proposed for mining a medical data stream using Adaptive Random Forest (ARF). The experimental results of applying the proposed model to four medical datasets show that ARF mostly has a more stable performance than the other six techniques.


2020 ◽  
Vol 13 (10) ◽  
pp. 1779-1792
Author(s):  
Yan Li ◽  
Tingjian Ge ◽  
Cindy Chen

We study a practical problem of predicting the upcoming events in data streams using a novel approach. Treating event time orders as relationship types between event entities, we build a dynamic knowledge graph and use it to predict future event timing. A unique aspect of this knowledge graph embedding approach for prediction is that we enhance conventional knowledge graphs with the notion of "states"---in what we call the ephemeral state nodes---to characterize the state of a data stream over time. We devise a complete set of methods for learning relevant events, for building the event-order graph stream from the original data stream, for embedding and prediction, and for theoretically bounding the complexity. We evaluate our approach with four real world stream datasets and find that our method results in high precision and recall values for event timing prediction, ranging between 0.7 and nearly 1, significantly outperforming baseline approaches. Moreover, due to our choice of efficient translation-based embedding, the overall throughput that the stream system can handle, including continuous graph building, training, and event predictions, is over one thousand to sixty thousand tuples per second even on a personal computer---which is especially important in resource constrained environments, including edge computing.
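The "efficient translation-based embedding" mentioned above scores a triple (head, relation, tail) by how well the relation vector translates the head onto the tail, as in TransE. The toy embeddings below, including the ephemeral state node shown as just one more graph entity, are made-up assumptions for illustration.

```python
def score(h, r, t):
    """TransE-style plausibility: negative L2 distance ||h + r - t||;
    values closer to 0 mean the triple (h, r, t) is more plausible."""
    return -sum((hi + ri - ti) ** 2
                for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy embeddings: the "follows" relation translates event_A onto event_B.
emb = {
    "event_A": (0.1, 0.2),
    "event_B": (0.6, 0.7),
    "state_s1": (0.3, 0.1),   # an ephemeral state node for the stream state
}
follows = (0.5, 0.5)

plausible = score(emb["event_A"], follows, emb["event_B"])
implausible = score(emb["event_A"], follows, emb["state_s1"])
```

Because scoring is a vector addition and a norm, prediction is cheap at stream rates, which is what makes the reported thousands-of-tuples-per-second throughput plausible.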


2018 ◽  
Vol 49 (4) ◽  
pp. 1228-1244 ◽  
Author(s):  
Jinyin Chen ◽  
Xiang Lin ◽  
Qi Xuan ◽  
Yun Xiang
