Synthetic seismicity distribution in Guerrero–Oaxaca subduction zone, Mexico, and its implications on the role of asperities in Gutenberg–Richter law

2020 ◽  
Vol 13 (12) ◽  
pp. 6361-6381
Author(s):  
Marisol Monterrubio-Velasco ◽  
F. Ramón Zúñiga ◽  
Quetzalcoatl Rodríguez-Pérez ◽  
Otilio Rojas ◽  
Armando Aguilar-Meléndez ◽  
...  

Abstract. Seismicity and magnitude distributions are fundamental for seismic hazard analysis. The Mexican subduction margin along the Pacific Coast is one of the most active seismic zones in the world, which makes it an optimal region for observation and experimentation. Some remarkable seismicity features have been observed in a subvolume of this subduction region, suggesting that the observed simplicity of earthquake sources arises from the rupturing of single asperities. This subregion has been named SUB3 in a recent seismotectonic regionalization of Mexico. In this work, we numerically test this hypothesis using the TREMOL (sTochastic Rupture Earthquake MOdeL) v0.1.0 code. As test cases, we choose four of the most significant recent events (6.5 < Mw < 7.8) that occurred in the Guerrero–Oaxaca region (SUB3) during the period 1988–2018, and whose associated seismic histories are well recorded in the regional catalogs. Synthetic seismicity results show a reasonable fit to the real data, which improves as the available data from the real events increase. These results support the hypothesis that single-asperity ruptures are a distinctive feature controlling seismicity in SUB3. Moreover, a fault aspect ratio sensitivity analysis is carried out to study how the synthetic seismicity varies. Our results indicate that asperity shape is an important modeling parameter controlling the frequency–magnitude distribution of synthetic data. Therefore, TREMOL provides appropriate means to model complex seismicity curves, such as those observed in the SUB3 region, and stands out as a useful tool to shed additional light on the earthquake process.
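A quick way to check whether a synthetic catalog honors the Gutenberg–Richter law is to estimate its b-value. The sketch below applies the maximum-likelihood estimator of Aki (1965), b = log10(e) / (mean(M) − Mc), to a hypothetical magnitude array; the catalog, the completeness magnitude, and all names are illustrative stand-ins, not TREMOL output.

```python
import numpy as np

def gutenberg_richter_b(magnitudes, mc):
    """Maximum-likelihood b-value estimate (Aki, 1965) for events
    at or above the completeness magnitude mc."""
    m = magnitudes[magnitudes >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Hypothetical catalog: magnitudes above mc that are exponentially
# distributed correspond to a Gutenberg-Richter b-value of b_true.
rng = np.random.default_rng(0)
mc, b_true = 4.0, 1.0
synthetic_mags = mc + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)
print(f"estimated b ~ {gutenberg_richter_b(synthetic_mags, mc):.2f}")
```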

2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image and to predict the 3D keypoints of the hand. With most layers shared between the two tasks, computational cost is reduced, which helps achieve real-time performance. A hybrid dataset is constructed to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depths of the two targets and the keypoints are used in a unified optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also maintains real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
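As a rough illustration of the shared-computation idea, the PyTorch sketch below wires a single encoder into both a hand/object segmentation head and a 3D keypoint head; the layer sizes, keypoint count, and class layout are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class JointHandObjectNet(nn.Module):
    """Illustrative two-task network: a shared encoder feeds a
    segmentation head and a hand-keypoint regression head."""
    def __init__(self, n_keypoints=21):
        super().__init__()
        self.backbone = nn.Sequential(          # shared layers, computed once
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 3, 1)     # background / hand / object
        self.kp_head = nn.Sequential(           # regress x, y, z per keypoint
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 3 * n_keypoints),
        )

    def forward(self, depth):
        feat = self.backbone(depth)             # sharing saves computation
        return self.seg_head(feat), self.kp_head(feat)

net = JointHandObjectNet()
seg, kp = net(torch.randn(1, 1, 128, 128))      # dummy depth frame
print(seg.shape, kp.shape)                      # (1, 3, 128, 128), (1, 63)
```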


Geophysics ◽  
1990 ◽  
Vol 55 (9) ◽  
pp. 1166-1182 ◽  
Author(s):  
Irshad R. Mufti

Finite-difference seismic models are commonly set up in 2-D space. Such models must be excited by a line source, which leads to amplitudes different from those of real data, which are commonly generated from a point source. Moreover, there is no provision for any out-of-plane events. These problems can be eliminated by using 3-D finite-difference models. The fundamental strategy in designing efficient 3-D models is to minimize computational work without sacrificing accuracy. This was accomplished by using a (4,2) differencing operator, which ensures the accuracy of much larger operators but requires far fewer numerical operations as well as significantly reduced manipulation of data in computer memory. Such a choice also simplifies the problem of evaluating the wave field near the subsurface boundaries of the model, where large operators cannot be used. We also exploited the fact that, unlike real data, synthetic data are free from ambient noise; consequently, one can retain sufficient resolution in the results by optimizing the frequency content of the source signal. Further computational efficiency was achieved by using the concept of the exploding reflector, which yields zero-offset seismic sections without the need to evaluate the wave field for individual shot locations. These considerations opened up the possibility of carrying out a complete synthetic 3-D survey on a supercomputer to investigate the seismic response of a large-scale structure located in Oklahoma. The analysis of the results, done on a geophysical workstation, provides new insight into the role of interference and diffraction in the interpretation of seismic data.
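For reference, a (4,2) operator is fourth-order accurate in space and second-order in time. The sketch below shows one such time step for the constant-density acoustic wave equation, written in 2D for brevity (the models in the paper are 3D); grid parameters and names are illustrative.

```python
import numpy as np

def step_42(p_prev, p_curr, vel, dt, dx):
    """Advance the 2D acoustic wavefield one step with a (4,2) scheme:
    4th-order Laplacian stencil [-1/12, 4/3, -5/2, 4/3, -1/12] in space,
    2nd-order leapfrog in time. The 3D version adds the same terms in z."""
    c1, c2, c3 = -5.0 / 2.0, 4.0 / 3.0, -1.0 / 12.0
    lap = np.zeros_like(p_curr)
    lap[2:-2, 2:-2] = (
        2 * c1 * p_curr[2:-2, 2:-2]
        + c2 * (p_curr[1:-3, 2:-2] + p_curr[3:-1, 2:-2]
                + p_curr[2:-2, 1:-3] + p_curr[2:-2, 3:-1])
        + c3 * (p_curr[:-4, 2:-2] + p_curr[4:, 2:-2]
                + p_curr[2:-2, :-4] + p_curr[2:-2, 4:])
    ) / dx**2
    return 2 * p_curr - p_prev + (vel * dt) ** 2 * lap

n = 200
p0, p1 = np.zeros((n, n)), np.zeros((n, n))
p1[n // 2, n // 2] = 1.0                    # impulsive point source
p2 = step_42(p0, p1, vel=2000.0, dt=5e-4, dx=5.0)   # CFL number = 0.2
```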


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. M1-M10 ◽  
Author(s):  
Leonardo Azevedo ◽  
Ruben Nunes ◽  
Pedro Correia ◽  
Amílcar Soares ◽  
Luis Guerreiro ◽  
...  

Due to the nature of seismic inversion problems, multiple solutions can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models converge toward the real subsurface model. For this purpose, we evaluated a new methodology that combines the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverts partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It is based on a genetic algorithm in which the model perturbation at each iteration is performed by resorting to stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected into a metric space computed by MDS. This projection makes it possible to distinguish similar from dissimilar models and to assess the convergence of inverted models toward the real impedance models. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples reveal that MDS is a valuable tool for evaluating the convergence of the inverse methodology and the impedance-model variability between iterations of the inversion process. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
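The projection step can be sketched with scikit-learn's MDS on a precomputed matrix of distances between flattened impedance models; the ensemble below is random stand-in data, not an actual inversion output.

```python
import numpy as np
from sklearn.manifold import MDS

# Stand-in ensemble: each row flattens one simulated impedance model; the
# last row plays the role of the known (true) model of the synthetic case.
rng = np.random.default_rng(1)
ensemble = np.vstack([rng.normal(size=(50, 1000)),    # simulated models
                      rng.normal(size=(1, 1000))])    # "true" model

# Pairwise Euclidean distances between models define the metric space.
d = np.linalg.norm(ensemble[:, None, :] - ensemble[None, :, :], axis=-1)

# 2D projection: nearby points are similar models, so convergence shows
# up as simulated models clustering toward the true model over iterations.
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(d)
print(xy.shape)   # (51, 2)
```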


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0260308
Author(s):  
Mauro Castelli ◽  
Luca Manzoni ◽  
Tatiane Espindola ◽  
Aleš Popovič ◽  
Andrea De Lorenzo

Wireless networks are among the fundamental technologies used to connect people. Considering the constant advancements in the field, telecommunication operators must guarantee a high-quality service to retain their customers. To ensure this high-quality service, it is common to establish partnerships with specialized technology companies that deliver software services to monitor the networks and identify faults and their respective solutions. A common barrier faced by these specialized companies is the lack of data to develop and test their products. This paper investigates the use of generative adversarial networks (GANs), which are state-of-the-art generative models, for generating synthetic telecommunication data related to Wi-Fi signal quality. We developed, trained, and compared two of the most widely used GAN architectures: the vanilla GAN and the Wasserstein GAN (WGAN). Both models presented satisfactory results and were able to generate synthetic data similar to the real ones. In particular, the distribution of the synthetic data overlaps the distribution of the real data for all of the considered features. Moreover, the generative models reproduce in the synthetic features the same associations observed among the real features. We chose the WGAN as the final model, but both models are suitable for addressing the problem at hand.
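A minimal WGAN training step for tabular data (with weight clipping, as in the original formulation by Arjovsky et al., 2017) might look like the PyTorch sketch below; the feature count, network sizes, and hyperparameters are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

N_FEATURES, Z_DIM = 8, 16   # illustrative Wi-Fi feature count, latent size
G = nn.Sequential(nn.Linear(Z_DIM, 64), nn.ReLU(), nn.Linear(64, N_FEATURES))
C = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(C.parameters(), lr=5e-5)

def train_step(real_batch, n_critic=5, clip=0.01):
    for _ in range(n_critic):             # several critic updates per G update
        z = torch.randn(real_batch.size(0), Z_DIM)
        loss_c = C(G(z).detach()).mean() - C(real_batch).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in C.parameters():          # clip weights: Lipschitz constraint
            p.data.clamp_(-clip, clip)
    z = torch.randn(real_batch.size(0), Z_DIM)
    loss_g = -C(G(z)).mean()              # generator pushes critic score up
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.randn(32, N_FEATURES))   # dummy "real" feature batch
```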


2018 ◽  
Vol 8 (1) ◽  
pp. 9 ◽  
Author(s):  
Wenzhong Shi ◽  
Wael Ahmed ◽  
Na Li ◽  
Wenzheng Fan ◽  
Haodong Xiang ◽  
...  

A method capable of automatically reconstructing 3D building models with semantic information from unstructured 3D point clouds of indoor scenes is presented in this paper. The method has three main steps: 3D segmentation using a new hybrid algorithm, room layout reconstruction, and wall-surface object reconstruction using an enriched approach. Unlike existing methods, this method detects, clusters, and models complex structures without prior scanner or trajectory information. It also enables the accurate detection of wall-surface "defacements", such as windows, doors, and virtual openings, and, beyond such apertures, the detection of closed objects such as doors. Hence, the whole 3D modelling process of an indoor scene from a backpack laser scanner (BLS) dataset is achieved for the first time. This novel method was validated using both synthetic data and real data acquired by a developed BLS system for indoor scenes. On synthetic datasets our approach achieved a precision of around 94% and a recall of around 97%, while on BLS datasets it achieved a precision of around 95% and a recall of around 89%. The results show this novel method to be robust and accurate for 3D indoor modelling.
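A core primitive behind wall-surface segmentation is fitting planes to points. The sketch below is a bare-bones RANSAC plane fit on fabricated data, offered purely as an illustration; the hybrid segmentation algorithm in the paper is considerably more involved.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, seed=0):
    """Return indices of the largest planar inlier set found by RANSAC,
    the kind of primitive used to pull wall candidates from a point cloud."""
    rng = np.random.default_rng(seed)
    best = np.array([], dtype=int)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.flatnonzero(np.abs((points - p0) @ n) < tol)
        if len(inliers) > len(best):
            best = inliers
    return best

# A noisy vertical "wall" plus clutter; the wall dominates the inlier set.
rng = np.random.default_rng(1)
wall = np.column_stack([np.zeros(800), rng.uniform(0, 5, 800), rng.uniform(0, 3, 800)])
pts = np.vstack([wall + rng.normal(0, 0.005, wall.shape), rng.uniform(0, 5, (200, 3))])
print(len(ransac_plane(pts)), "inliers")
```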


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. G211-G223 ◽  
Author(s):  
Lasse Amundsen ◽  
Lars Løseth ◽  
Rune Mittet ◽  
Svein Ellingsrud ◽  
Bjørn Ursin

This paper gives a unified treatment of electromagnetic (EM) field decomposition into upgoing and downgoing components for conductive and nonconductive media, where the electromagnetic data are measured on a plane in which the electric permittivity, magnetic permeability, and electrical conductivity are known constants with respect to space and time. Above and below the plane of measurement, the medium can be arbitrarily inhomogeneous and anisotropic. In particular, the proposed decomposition theory applies to marine EM, low-frequency data acquired for hydrocarbon mapping, where it is the upgoing components of the recorded field, guided and refracted from the reservoir, that are of interest for interpretation. The direct-source field, the refracted airwave induced by the source, the reflected field from the sea surface, and most magnetotelluric noise travel downward just below the seabed and are therefore considered noise in these measurements. The viability and validity of the decomposition method are demonstrated using modeled and real marine EM data, also termed seabed logging (SBL) data. The synthetic data are simulated in a model that is fairly representative of the geologic area where the real SBL data were collected. The results from the synthetic data study are therefore used to assist in the interpretation of the real data from an area with [Formula: see text] water depth above a known gas province offshore Norway. The effect of the airwave is clearly seen in the measured data. After field decomposition just below the seabed, the upgoing component of the recorded electric field has almost linear phase, indicating that most of the airwave component has been removed.
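The principle behind the separation, combining two recorded field components with the known impedance at the measurement level, can be illustrated with a 1D acoustic analogue; this is only an analogy, since the EM decomposition in the paper combines horizontal electric and magnetic components with medium-dependent operators.

```python
import numpy as np

rho, c = 1000.0, 1500.0          # density and velocity at the receiver level
Z = rho * c                      # acoustic impedance (stand-in for EM case)

t = np.linspace(0.0, 1.0, 1000)
up_true = np.sin(2 * np.pi * 5 * t)            # upgoing event (the "signal")
down_true = 0.5 * np.sin(2 * np.pi * 9 * t)    # downgoing event (the "noise")
p = up_true + down_true                        # recorded pressure
vz = (down_true - up_true) / Z                 # recorded vertical velocity

# A weighted sum/difference of the two recordings separates the fields.
up = 0.5 * (p - Z * vz)
down = 0.5 * (p + Z * vz)
print(np.allclose(up, up_true), np.allclose(down, down_true))   # True True
```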


2020 ◽  
Vol 12 (5) ◽  
pp. 771 ◽  
Author(s):  
Miguel Angel Ortíz-Barrios ◽  
Ian Cleland ◽  
Chris Nugent ◽  
Pablo Pancardo ◽  
Eric Järpe ◽  
...  

Automatic detection and recognition of Activities of Daily Living (ADL) are crucial for providing effective care to frail older adults living alone. A step forward in addressing this challenge is the deployment of smart home sensors capturing the intrinsic nature of ADLs performed by these people. As the real-life scenario is characterized by a comprehensive range of ADLs and smart home layouts, deviations are expected in the number of sensor events per activity (SEPA), a variable often used for training activity recognition models. Such models, however, rely on the availability of suitable and representative data, whose collection is habitually expensive and resource-intensive. Simulation tools are an alternative for tackling these barriers; nonetheless, an ongoing challenge is their ability to generate synthetic data representing the real SEPA. Hence, this paper proposes the use of Poisson regression modelling for transforming simulated data into a better approximation of real SEPA. First, synthetic and real data were compared to verify the equivalence hypothesis. Then, several Poisson regression models were formulated for estimating real SEPA from simulated data. The outcomes revealed that real SEPA can be better approximated (R²_pred = 92.72%) if synthetic data are post-processed through Poisson regression incorporating dummy variables.
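The post-processing step can be sketched with a Poisson GLM in statsmodels, where a categorical activity variable expands into dummy variables via the formula interface; the data below are fabricated stand-ins, not the study's SEPA records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Fabricated data: real SEPA depends on simulated SEPA plus an
# activity-specific offset (variable names are illustrative).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "sim_sepa": rng.poisson(20, n),
    "activity": rng.choice(["cooking", "sleeping", "grooming"], n),
})
offset = df["activity"].map({"cooking": 0.3, "sleeping": -0.2, "grooming": 0.0})
df["real_sepa"] = rng.poisson(np.exp(1.0 + 0.05 * df["sim_sepa"] + offset))

# C(activity) is dummy-coded automatically by the formula interface.
model = smf.glm("real_sepa ~ sim_sepa + C(activity)",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```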


2021 ◽  
Vol 13 (22) ◽  
pp. 4713
Author(s):  
Jean-Emmanuel Deschaud ◽  
David Duque ◽  
Jean Pierre Richa ◽  
Santiago Velasco-Forero ◽  
Beatriz Marcotegui ◽  
...  

Paris-CARLA-3D is a dataset of several dense colored point clouds of outdoor environments built by a mobile LiDAR and camera system. The data are composed of two sets: synthetic data from the open source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One advantage of this dataset is that the same LiDAR and camera platform used to produce the real data was simulated in the open source CARLA simulator. In addition, manual annotation of the classes using the semantic tags of CARLA was performed on the real data, allowing the testing of transfer methods from the synthetic to the real data. The objective of this dataset is to provide a challenging benchmark for evaluating and improving methods on difficult vision tasks for the 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as the experiments carried out to establish a baseline.
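For the semantic segmentation task, per-point baselines are conventionally scored with mean intersection-over-union (mIoU); the generic sketch below shows the metric on fabricated labels, while the exact protocol is the one defined by the dataset's authors.

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean IoU over classes present in prediction or ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                 # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

rng = np.random.default_rng(0)
gt = rng.integers(0, 5, 10000)        # per-point ground-truth labels
pred = np.where(rng.random(10000) < 0.8, gt, rng.integers(0, 5, 10000))
print(f"mIoU = {mean_iou(pred, gt, 5):.3f}")
```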


2019 ◽  
Vol 214 ◽  
pp. 06003 ◽  
Author(s):  
Kamil Deja ◽  
Tomasz Trzciński ◽ 
Łukasz Graczykowski

Simulating the detector response is a key component of every high-energy physics experiment. The methods currently used for this purpose provide high-fidelity results, but this precision comes at the price of a high computational cost. In this work, we introduce our research aiming at fast generation of the possible responses of detector clusters to particle collisions. We present results for the real-life example of the Time Projection Chamber in the ALICE experiment at CERN. The essential component of our solution is a generative model that makes it possible to simulate synthetic data points bearing high similarity to the real data. Leveraging recent advancements in machine learning, we propose to use conditional generative adversarial networks. We present a method to simulate the data samples that could be recorded in the detector, based on the initial information about the particles, and we propose and evaluate several models based on convolutional or recursive networks. The main advantage of the proposed method is a significant speed-up in execution time, reaching up to a factor of 10² with respect to the currently used simulation tool. Nevertheless, this speed-up comes at the price of lower simulation quality. In this work we adapt available methods and show their quantitative and qualitative limitations.
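Conditioning a generator on the initial particle information can be as simple as concatenating those parameters with the latent noise vector; the sketch below illustrates the idea with made-up sizes and conditioning variables, not the actual ALICE TPC configuration.

```python
import torch
import torch.nn as nn

Z_DIM, COND_DIM, RESPONSE_DIM = 32, 6, 256   # illustrative dimensions

class ConditionalGenerator(nn.Module):
    """Generates a flattened detector-cluster response conditioned on
    particle parameters (e.g. momentum and angles, here placeholders)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + COND_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, RESPONSE_DIM),
        )

    def forward(self, z, cond):
        # Concatenating noise with particle parameters makes the output
        # depend on the specific collision being simulated.
        return self.net(torch.cat([z, cond], dim=1))

gen = ConditionalGenerator()
z = torch.randn(4, Z_DIM)                    # latent noise
particles = torch.randn(4, COND_DIM)         # hypothetical particle params
print(gen(z, particles).shape)               # torch.Size([4, 256])
```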


JAMIA Open ◽  
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Khaled El Emam ◽  
Lucy Mosquera ◽  
Elizabeth Jonker ◽  
Harpreet Sood

Abstract Background Concerns about patient privacy have limited access to COVID-19 datasets. Data synthesis is one approach for making such data broadly available to the research community in a privacy-protective manner. Objectives To evaluate the utility of synthetic data by comparing analysis results between real and synthetic data. Methods A gradient boosted classification tree was built to predict death using Ontario's 90 514 COVID-19 case records linked with community comorbidity, demographic, and socioeconomic characteristics. Model accuracy and relationships were evaluated, as well as privacy risks. The same model was developed on a synthesized dataset and compared to the one built from the original data. Results The AUROC and AUPRC for the real data model were 0.945 [95% confidence interval (CI), 0.941–0.948] and 0.34 (95% CI, 0.313–0.368), respectively. The synthetic data model had an AUROC of 0.94 (95% CI, 0.936–0.944) and an AUPRC of 0.313 (95% CI, 0.286–0.342), with confidence interval overlaps of 45.05% and 52.02% when compared with the real data. The most important predictors of death for both the real and synthetic models were, in descending order: age, days since January 1, 2020, type of exposure, and gender. The functional relationships were similar between the two datasets. The attribute disclosure risk was 0.0585, and the membership disclosure risk was low. Conclusions This synthetic dataset could be used as a proxy for the real dataset.
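The comparison methodology, training the same classifier on real and on synthetic data and scoring both on held-out real records, can be sketched as follows with scikit-learn on stand-in data; the "synthetic" set here is a crude perturbation of the real one, not a proper generative synthesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced stand-in data playing the role of linked case records.
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Crude "synthetic" training set: a perturbed copy of the real one.
X_syn = X_tr + np.random.default_rng(0).normal(0, 0.1, X_tr.shape)

for name, X_fit in [("real", X_tr), ("synthetic", X_syn)]:
    clf = GradientBoostingClassifier(random_state=0).fit(X_fit, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    print(f"{name}: AUROC={roc_auc_score(y_te, prob):.3f}  "
          f"AUPRC={average_precision_score(y_te, prob):.3f}")
```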

