Detection, location, and source mechanism determination with large noise variations in surface microseismic monitoring

Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. KS197-KS206
Author(s):  
Dmitry Alexandrov ◽  
Leo Eisner ◽  
Umair bin Waheed ◽  
SanLinn I. Kaka ◽  
Stewart Alan Greenhalgh

Microseismic monitoring aims to detect events as weak as possible and to provide reliable locations and source mechanisms for them. Surface monitoring arrays suffer from significant variations in noise level across receiver lines. With a large monitoring array, we detect microseismic events with a stacking technique by maximizing the signal-to-noise ratio (S/N) of the stack. However, receivers with high noise levels do not contribute to improving the S/N of the stack. We have derived a theoretical criterion for selecting the receivers that contribute most to the stack, assuming constant signal strength across the array. This selection criterion, based on the assumption of constant signal amplitude, provides a robust estimate of the noise threshold above which receivers should be discarded or down-weighted because they do not improve the S/N of the stack. We find that limiting the number of receivers used for stacking improves location accuracy and reduces the computational cost of data processing. Although the assumption of a constant signal never holds exactly in real-life seismic applications, the noise level varies across surface receivers over a significantly wider range than the signal amplitude does. These noise variations can also increase the uncertainty of the source-mechanism inversion and should be accounted for. Synthetic and field data examples show that weighted least-squares inversion, with receivers weighted according to their noise levels, produces more accurate source-mechanism estimates than inversion that ignores the noise information.
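The selection rule described above can be sketched in a few lines. Under the constant-signal assumption, an equal-weight stack of n receivers with signal amplitude a and noise standard deviations sigma_i has S/N = n·a / sqrt(sum of sigma_i²), so receivers can be added in order of increasing noise only while the stack S/N keeps improving. This is a hedged illustration of the idea, not the authors' derivation; the function and variable names are ours.

```python
import math

def select_receivers(noise_levels, signal_amp=1.0):
    """Greedily pick the quietest receivers while the stack S/N improves.

    Assumes a constant signal amplitude across the array, so stacking n
    receivers gives S/N = n * a / sqrt(sum of their noise variances).
    """
    order = sorted(range(len(noise_levels)), key=lambda i: noise_levels[i])
    selected, var_sum, best_snr = [], 0.0, 0.0
    for i in order:
        var_sum_new = var_sum + noise_levels[i] ** 2
        snr_new = (len(selected) + 1) * signal_amp / math.sqrt(var_sum_new)
        if snr_new <= best_snr:
            break  # this receiver (and every noisier one) would degrade the stack
        selected.append(i)
        var_sum, best_snr = var_sum_new, snr_new
    return selected, best_snr

# A very noisy receiver is rejected; the three quiet ones are kept.
sel, snr = select_receivers([1.0, 1.0, 1.0, 50.0])
```

The break is justified because receivers are visited in order of increasing noise: once one fails to improve the stack, all remaining ones fail too.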

Biomimetics ◽  
2019 ◽  
Vol 5 (1) ◽  
pp. 1 ◽  
Author(s):  
Michelle Gutiérrez-Muñoz ◽  
Astryd González-Salazar ◽  
Marvin Coto-Jiménez

Speech signals are degraded in real-life environments as a product of background noise and other factors. The processing of such signals for voice recognition and voice analysis systems presents important challenges. One of the conditions that makes adverse quality difficult to handle in those systems is reverberation, produced by sound wave reflections that travel from the source to the microphone along multiple paths. To enhance signals in such adverse conditions, several deep learning-based methods have been proposed and proven to be effective. Recently, recurrent neural networks, especially those with long short-term memory (LSTM), have produced surprising results in tasks related to time-dependent processing of signals, such as speech. One of the most challenging aspects of LSTM networks is the high computational cost of the training procedure, which has limited extended experimentation in several cases. In this work, we evaluate hybrid neural network models that learn different reverberation conditions without any prior information. The results show that some combinations of LSTM and perceptron layers produce good results in comparison to pure LSTM networks with the same number of layers. The evaluation was based on quality measurements of the signal's spectrum, the training time of the networks, and statistical validation of the results. In total, 120 artificial neural networks of eight different types were trained and compared. The results support the claim that hybrid networks represent an important solution for speech signal enhancement, given a reduction in training time on the order of 30% for processes that can normally take several days or weeks, depending on the amount of data. The results also show advantages in efficiency without a significant drop in quality.
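To make concrete why LSTM layers are expensive relative to plain perceptron layers, the sketch below runs one scalar LSTM cell over a short sequence in plain Python: every time step evaluates four gated sub-networks, whereas a perceptron layer evaluates one affine map. This is only the cell recurrence with made-up toy weights, not the trained speech enhancers of the paper; all names and weight values here are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell: input, forget, and output gates
    plus a candidate value, each a function of the current input x and
    the previous hidden state h_prev (weights w[name] = (wx, wh, b))."""
    def gate(name, squash):
        wx, wh, b = w[name]
        return squash(wx * x + wh * h_prev + b)
    i = gate("input", sigmoid)
    f = gate("forget", sigmoid)
    o = gate("output", sigmoid)
    g = gate("cand", math.tanh)
    c = f * c_prev + i * g   # cell state carries the long-term memory
    h = o * math.tanh(c)     # hidden state is the layer's output
    return h, c

# Toy weights; in a real enhancer these are learned per unit and per layer.
w = {k: (0.5, 0.25, 0.0) for k in ("input", "forget", "output", "cand")}
h, c = 0.0, 0.0
for x in (0.2, -0.4, 0.9):   # a short "signal" sequence
    h, c = lstm_step(x, h, c, w)
```

The four gates per step, each with its own weights, are what training must fit through time, which is where the cost the abstract mentions comes from.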


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to their wide applications in many areas. In this study, the k-nearest neighbours approach, combined with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN is evaluated in terms of test error and computational time. Canberra, Lorentzian, and Soergel distance measures are found to lead to the minimum test error, whereas Euclidean, Canberra, and Average of (L1,L∞) lead to low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life datasets. Depending on the nature of the data, ex-kNN showed 2 to 10 times better performance than er-boost and comparable performance with ex-svm in terms of test error. Computationally, ex-kNN is found to be 2 to 5 times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
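The core of the approach described above, combining k-nearest neighbours with the asymmetric least-squares loss, can be sketched as follows: find the k nearest training points, then compute the tau-expectile of their responses, which is the fixed point of an asymmetrically weighted mean. This is a minimal sketch under our own naming and a plain Euclidean metric, not the ex-kNN package itself.

```python
def expectile(values, tau=0.5, iters=100):
    """tau-expectile via the asymmetric least-squares fixed point:
    e = sum(w_i * y_i) / sum(w_i), with w_i = tau if y_i > e else 1 - tau."""
    e = sum(values) / len(values)
    for _ in range(iters):
        w = [tau if y > e else 1.0 - tau for y in values]
        e_new = sum(wi * y for wi, y in zip(w, values)) / sum(w)
        if abs(e_new - e) < 1e-12:
            break
        e = e_new
    return e

def ex_knn_predict(X, y, x_query, k=3, tau=0.5):
    """Predict the tau-expectile of the responses of the k nearest
    neighbours of x_query. Euclidean distance is used here; the study
    also evaluates Canberra, Lorentzian, Soergel, and other measures."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(range(len(X)), key=lambda i: dist(X[i], x_query))[:k]
    return expectile([y[i] for i in nearest], tau)
```

For tau = 0.5 the expectile reduces to the plain mean of the neighbour responses, so ordinary kNN regression is the symmetric special case.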


2021 ◽  
Author(s):  
Shikha Suman ◽  
Ashutosh Karna ◽  
Karina Gibert

Hierarchical clustering is one of the most preferred choices for understanding the underlying structure of a dataset and defining typologies, with multiple applications in real life. Among existing clustering algorithms, the hierarchical family is one of the most popular, as it reveals the inner structure of the dataset and yields the number of clusters as an output, unlike popular methods such as k-means; the granularity of the final clustering can also be adjusted to the goals of the analysis. The number of clusters in a hierarchical method is determined from the resulting dendrogram: experts have criteria for visually inspecting the dendrogram and choosing the number of clusters, but finding automatic criteria that imitate the expert in this task is still an open problem. This dependence on the expert to cut the tree is a limitation in real applications in fields such as Industry 4.0 and additive manufacturing. This paper analyzes several cluster validity indexes in the context of determining a suitable number of clusters in hierarchical clustering. A new Cluster Validity Index (CVI) is proposed that properly captures the implicit criteria used by experts when analyzing dendrograms. The proposal has been applied to a range of datasets and validated against expert ground truth, outperforming the state of the art while significantly reducing the computational cost.
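A common automatic stand-in for the expert's visual cut, useful for fixing ideas even though it is not the CVI proposed in the paper (which is not reproduced here), is to cut the dendrogram at the largest gap between consecutive merge heights; experts tend to cut just below a tall, isolated merge. A minimal sketch, with names of our own choosing:

```python
def clusters_from_gap(merge_heights):
    """Pick the number of clusters from a dendrogram by cutting at the
    largest gap between consecutive merge heights, a common heuristic
    mimicking visual inspection (NOT the paper's proposed CVI).

    merge_heights: the n-1 merge heights, in increasing order, for a
    dataset of n points. Cutting after j merges leaves n - j clusters.
    """
    n = len(merge_heights) + 1
    gaps = [merge_heights[j + 1] - merge_heights[j]
            for j in range(len(merge_heights) - 1)]
    j = max(range(len(gaps)), key=gaps.__getitem__)  # widest gap
    return n - (j + 1)

# Heights 0.1..0.4 then a jump to 5.0: the jump suggests two clusters.
k = clusters_from_gap([0.1, 0.2, 0.3, 0.4, 5.0])
```

Heuristics of this kind fail on dendrograms without a single dominant gap, which is precisely the situation motivating a better-designed validity index.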




Recent applications of conventional iterative coordinate descent (ICD) algorithms to multislice helical CT reconstruction have shown that conventional ICD can greatly improve image quality by increasing resolution as well as reducing noise and some artifacts. However, the high computational cost and long reconstruction times remain a barrier to the use of the conventional algorithm in practical applications. Among the various iterative methods that have been studied, ICD has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a fast model-based iterative reconstruction algorithm using spatially nonhomogeneous ICD (NH-ICD) optimization. NH-ICD speeds up convergence by focusing computation where it is most needed, using a mechanism that adaptively selects voxels for update. First, a voxel selection criterion (VSC) determines the voxels in greatest need of update. Then a voxel selection algorithm (VSA) chooses the order of successive voxel updates based on the need for repeated updates at some locations, while retaining the characteristics required for global convergence. To speed up each voxel update, we also propose a fast 3-D optimization algorithm that uses a quadratic substitute function to upper-bound the local 3-D objective function, so that a closed-form solution can be obtained rather than resorting to a computationally expensive line search. Experimental results show that the proposed method accelerates reconstruction by roughly a factor of three on average for typical 3-D multislice geometries.
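The "focus computation where it is most needed" idea can be illustrated on a toy problem: coordinate descent on a small least-squares objective where, instead of cycling through coordinates homogeneously, each step updates the coordinate whose exact 1-D minimisation moves the most. This is a dense toy sketch in the spirit of NH-ICD, not the paper's VSC/VSA machinery; real CT systems are huge, sparse, and regularized.

```python
def nh_icd(A, b, iters=50):
    """Greedy coordinate descent on f(x) = ||Ax - b||^2: at each step,
    update the coordinate ("voxel") with the largest exact 1-D move,
    d_j = <A_j, r> / ||A_j||^2, where r is the current residual."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    col_sq = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    for _ in range(iters):
        d = [sum(A[i][j] * r[i] for i in range(m)) / col_sq[j]
             for j in range(n)]
        j = max(range(n), key=lambda jj: abs(d[jj]))  # neediest coordinate
        if abs(d[j]) < 1e-12:
            break
        x[j] += d[j]
        for i in range(m):
            r[i] -= A[i][j] * d[j]  # keep the residual consistent
    return x

# Toy check: recover x = (2, 3) from b = A x.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
x_hat = nh_icd(A, [2.0, 3.0, 5.0])
```

Each 1-D update here has a closed form because the objective is quadratic, mirroring the paper's use of a quadratic substitute function to avoid a line search.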


2022 ◽  
Vol 7 (2) ◽  
pp. 2820-2839
Author(s):  
Saurabh L. Raikar ◽  
Dr. Rajesh S. Prabhu Gaonkar
The Jaya algorithm is a highly effective recent metaheuristic technique. This article presents a simple, precise, and fast method to estimate stress-strength reliability for a two-parameter Weibull distribution with a common scale parameter but different shape parameters. The three most widely used estimation methods, namely maximum likelihood estimation, least squares, and weighted least squares, are used, and a comparative analysis of their reliability estimates is presented. Simulation studies with different parameters and sample sizes are carried out to validate the proposed methodology, and the technique is also applied to real-life data to demonstrate its implementation. The results show that the proposed methodology's reliability estimates are close to the actual values and approach them more closely as the sample size increases, for all estimation methods. The Jaya algorithm with maximum likelihood estimation outperforms the other methods in terms of bias and mean squared error.
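The quantity being estimated above is the stress-strength reliability R = P(X > Y), where strength X and stress Y are independent Weibull variables sharing a scale parameter but with different shapes. A plain Monte Carlo check of R, useful as a ground truth when validating any estimator, can be written with the standard library; this is only a simulation sketch, not the paper's Jaya-based estimator.

```python
import random

def stress_strength_R(shape_x, shape_y, scale=1.0, n=100_000, seed=1):
    """Monte Carlo estimate of R = P(X > Y) for independent Weibull
    strength X and stress Y with a common scale parameter.
    random.weibullvariate takes (scale, shape) in that order."""
    rng = random.Random(seed)
    draw = lambda shape: rng.weibullvariate(scale, shape)
    return sum(draw(shape_x) > draw(shape_y) for _ in range(n)) / n

# Sanity check: identical shape parameters give identical laws, so by
# symmetry R should be close to 0.5.
R_equal = stress_strength_R(1.5, 1.5)
```

With n = 100,000 draws the standard error of the estimate is about 0.0016, so the symmetry check is a tight one.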


2020 ◽  
Vol 54 (2) ◽  
pp. 649-677 ◽  
Author(s):  
Abdul-Lateef Haji-Ali ◽  
Fabio Nobile ◽  
Raúl Tempone ◽  
Sören Wolfers

Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single-level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations at reduced computational cost. We derive complexity bounds under certain assumptions on polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for sampling from the optimal distributions, together with an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
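The single-level building block of the method, a weighted least-squares polynomial fit from random samples, can be sketched in one dimension via the normal equations (VᵀWV)c = VᵀWy. This is an illustrative uniform-weight fit in the monomial basis with our own function names, not the paper's multilevel algorithm or its optimal sampling distribution.

```python
import random

def wls_poly_fit(xs, ys, weights, degree):
    """Weighted least-squares polynomial fit via the normal equations
    (V^T W V) c = V^T W y, solved by Gaussian elimination with partial
    pivoting. V is the Vandermonde matrix in the monomial basis."""
    n = degree + 1
    V = [[x ** j for j in range(n)] for x in xs]
    A = [[sum(w * V[i][p] * V[i][q] for i, w in enumerate(weights))
          for q in range(n)] for p in range(n)]
    b = [sum(w * V[i][p] * ys[i] for i, w in enumerate(weights))
         for p in range(n)]
    for k in range(n):                       # forward elimination
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    coeffs = [0.0] * n
    for k in reversed(range(n)):             # back substitution
        coeffs[k] = (b[k] - sum(A[k][c] * coeffs[c]
                                for c in range(k + 1, n))) / A[k][k]
    return coeffs

# Random samples of an exact quadratic are recovered up to round-off.
rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(20)]
ys = [1.0 + 2.0 * x + 3.0 * x * x for x in xs]
coeffs = wls_poly_fit(xs, ys, [1.0] * 20, degree=2)
```

In the multilevel setting, the ys would carry a level-dependent discretization error, and fits at several accuracies would be combined; here they are exact.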


2015 ◽  
Vol 25 (3) ◽  
pp. 483-498 ◽  
Author(s):  
Maciej Smołka ◽  
Robert Schaefer ◽  
Maciej Paszyński ◽  
David Pardo ◽  
Julen Álvarez-Aramberri

Abstract The paper discusses the complex, agent-oriented hierarchic memetic strategy (HMS) dedicated to solving inverse parametric problems. The strategy goes beyond the idea of two-phase global optimization algorithms: the global search, performed by a tree of dependent demes, is dynamically alternated with local steepest-descent searches. The strategy offers exceptionally low computational cost, mainly because the accuracy of the direct solver (the hp-adaptive finite element method) is dynamically adjusted for each inverse search step. The computational cost is further decreased by the strategies employed for inter-processing of solutions and for fitness deterioration. The efficiency of HMS is compared with that of a standard evolutionary technique, as well as with a multi-start strategy, on benchmarks that exhibit typical difficulties of inverse problems. Finally, an application of HMS to a real-life engineering problem, the identification of oil deposits by inverting magnetotelluric measurements, is presented. The applicability of HMS to the inversion of magnetotelluric data is also verified mathematically.


1992 ◽  
Vol 114 (4) ◽  
pp. 530-536 ◽  
Author(s):  
J. C. Klewicki ◽  
R. E. Falco ◽  
J. F. Foss

Time-resolved measurements of the spanwise vorticity component, ωz, are used to investigate the motions in the outer region of turbulent boundary layers. The measurements were taken in very thick zero-pressure-gradient boundary layers (Rθ = 1010, 2870, 4850) using a four-wire probe. Because of the large boundary layer thickness, the wall-normal and spanwise dimensions of the probe at the outer-region measurement locations ranged between 0.7 < Δy/η < 1.2 and 2.1 < Δz/η < 3.9, respectively, where η is the local Kolmogorov length. An analysis of vorticity-based intermittency is presented near y/δ = 0.6 and 0.85 at each of the Reynolds numbers. The average intermittency is presented as a function of detector threshold level and position in the boundary layer. The spanwise vorticity signals were found to yield average intermittency values at least as large as those of previous intermittency studies using “surrogate” signals. The average intermittency results do not indicate a region of threshold independence. An analysis of ωz event durations conditioned on the signal amplitude was also performed. The results indicate that, for decreasing Rθ, regions of single-signed ωz increase in size relative to the boundary layer thickness, but decrease in size when normalized by inner variables.
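The average intermittency referred to above is simply the fraction of the record flagged as turbulent by a threshold detector on the vorticity magnitude; computing it at several thresholds is what reveals the lack of a threshold-independent plateau. A minimal sketch of that detector, with our own names and toy data:

```python
def average_intermittency(omega_z, threshold):
    """Fraction of the record where |omega_z| exceeds the detector
    threshold, i.e. the fraction of samples flagged as 'turbulent'."""
    flags = [abs(w) > threshold for w in omega_z]
    return sum(flags) / len(flags)

# Toy record: quiescent samples interspersed with strong vorticity events.
signal = [0.0, 0.1, -2.0, 3.0, 0.05, -0.2, 4.0, 0.0]
gamma_low = average_intermittency(signal, 0.01)   # permissive threshold
gamma_high = average_intermittency(signal, 1.0)   # strict threshold
```

The monotone drop from gamma_low to gamma_high is the threshold dependence the study examines; a plateau would appear as a range of thresholds over which the value barely changes.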

