A revised image of the instrumental seismicity in the Lodi area (Po Plain, Italy)

Solid Earth ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 2021-2039
Author(s):  
Laura Peruzza ◽  
Alessandra Schibuola ◽  
Maria Adelaide Romano ◽  
Marco Garbin ◽  
Mariangela Guidarelli ◽  
...  

Abstract. We analysed the instrumental seismicity in a sector of the Po Plain (Italy) to define the baseline for seismic monitoring of a new underground gas storage plant that will use the depleted gas reservoir of Cornegliano Laudense, near Lodi. The target area – a square approximately 80 km × 80 km – is commonly considered aseismic. The analysed period, 1951–2019, includes all available instrumental data. We gathered the P- and S-phase readings collected by various agencies for more than 300 events approximately located inside the target area. We processed the earthquakes uniformly, using absolute location algorithms and velocity models adopted by the regional and national monitoring networks. The relocated earthquake dataset depicts an image of weak and deep seismicity for this central sector of the Po Plain, which is quite different from the initial one derived from the existing earthquake catalogues. Within a distance of approximately 30 km from Lodi, earthquakes are extremely rare (on average 0.5 earthquakes per year, assuming a completeness magnitude Mc = 2.7 from the 1980s); only two weak events fall within 15 km of the reservoir in the whole period 1951–2019. The strongest events instrumentally recorded belong to the 1951 Caviaga seismic sequence, which represents the first instrumental recordings for that area. Confirming the hypocentral depths recently proposed by Caciagli et al. (2015), these events are far from the gas reservoir; we suggest a common tectonic stress for the main shock of 1951 and the M4.2 earthquake of 17 December 2020, based on the similarities in depth, location, and focal mechanism. While it is clear that the deep seismicity corresponds to the collision between the Northern Apennines and the Southern Alps, the characterization of the geological structures that generate earthquakes remains uncertain.
Our results provide a preliminary benchmark for the definition of seismogenic zones in the Lodi area, which can be improved with the observational capabilities now available in the surroundings.
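The baseline rate quoted above (0.5 earthquakes per year within 30 km, above Mc = 2.7) reduces to a filter-and-normalise step over the relocated catalogue. A minimal sketch, with an invented toy catalogue rather than the actual dataset:

```python
# Hypothetical relocated catalogue entries: (year, magnitude, epicentral
# distance from Lodi in km). Values are illustrative, not from the paper.
catalog = [
    (1986, 3.1, 22.0),
    (1994, 2.8, 28.5),
    (2003, 2.9, 12.0),
    (2012, 3.4, 25.0),
]

MC = 2.7                 # completeness magnitude assumed from the 1980s on
T_START, T_END = 1980, 2019
RADIUS_KM = 30.0

# Count events above completeness within the radius, then normalise by the
# observation window (inclusive) to get a mean annual rate.
n = sum(1 for year, mag, dist in catalog
        if mag >= MC and dist <= RADIUS_KM and T_START <= year <= T_END)
rate = n / (T_END - T_START + 1)
print(f"{n} events, {rate:.2f} earthquakes per year")
```

Only events above the completeness magnitude enter the count, since smaller earthquakes are not reliably recorded and would bias the rate low.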


2021 ◽  
Author(s):  
Jeremy Pesicek ◽  
Trond Ryberg ◽  
Roger Machacca ◽  
Jaime Raigosa

Earthquake location is a primary function of volcano observatories worldwide, and the resulting catalogs of seismicity are integral to interpretations and forecasts of volcanic activity. Ensuring earthquake location accuracy is therefore of critical importance. However, accurate earthquake locations require accurate velocity models, which are not always available. In addition, difficulties involved in applying traditional velocity modeling methods often mean that earthquake locations at volcanoes are computed using velocity models not specific to the local volcano.

Traditional linearized methods that jointly invert for earthquake locations, velocity structure, and station corrections depend critically on having reasonable starting values for the unknown parameters, which are then iteratively updated to minimize the data misfit. However, these deterministic methods are susceptible to local minima and divergence, issues exacerbated by the sparse seismic networks and/or poor data quality common at volcanoes. In cases where independent prior constraints on local velocity structure are not available, these methods may produce systematic errors in velocity models and hypocenters, especially if the full range of possible starting values is not explored. Furthermore, such solutions depend on subjective choices for model regularization and parameterization.

In contrast, Bayesian methods promise to avoid all these pitfalls. Although these methods have traditionally been difficult to implement due to additional computational burdens, the increasing use and availability of High-Performance Computing resources mean widespread application is no longer prohibitively expensive.
In this presentation, we apply a Bayesian, hierarchical, trans-dimensional Markov chain Monte Carlo method to jointly solve for hypocentral parameters, 1D velocity structure, and station corrections using data from monitoring networks of varying quality at several volcanoes in the U.S. and South America. We compare the results with those from a more traditional deterministic approach and show that the resulting velocity models produce more accurate earthquake locations. Finally, we chart a path forward for more widespread adoption of the Bayesian approach, which may improve catalogs of volcanic seismicity at observatories worldwide.
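The Markov chain Monte Carlo core of such an approach can be illustrated with a deliberately simplified, fixed-dimension Metropolis-Hastings sketch (the actual method is hierarchical and trans-dimensional). Here a single half-space P velocity and a source depth are sampled from synthetic travel times at a line of surface stations; all values are invented for illustration:

```python
import math, random

random.seed(0)

# Toy setup: one event at depth z_true below a line of surface stations in a
# homogeneous half-space with P velocity v_true (all values illustrative).
v_true, z_true = 5.5, 8.0
xs = [2.0, 5.0, 10.0, 15.0, 20.0, 30.0]       # epicentral distances, km
sigma = 0.05                                   # assumed pick error, s
obs = [math.hypot(x, z_true) / v_true + random.gauss(0, sigma) for x in xs]

def log_like(v, z):
    """Gaussian log-likelihood of the travel-time residuals."""
    if v <= 0 or z <= 0:
        return -math.inf
    return -0.5 * sum(((math.hypot(x, z) / v - t) / sigma) ** 2
                      for x, t in zip(xs, obs))

# Metropolis-Hastings over (v, z) with symmetric Gaussian proposals.
v, z = 4.0, 15.0                               # deliberately poor start
ll = log_like(v, z)
samples = []
for i in range(20000):
    v2, z2 = v + random.gauss(0, 0.1), z + random.gauss(0, 0.5)
    ll2 = log_like(v2, z2)
    if math.log(random.random()) < ll2 - ll:   # accept/reject step
        v, z, ll = v2, z2, ll2
    if i > 5000:                               # discard burn-in
        samples.append((v, z))

v_mean = sum(s[0] for s in samples) / len(samples)
z_mean = sum(s[1] for s in samples) / len(samples)
print(f"posterior mean v = {v_mean:.2f} km/s, z = {z_mean:.1f} km")
```

The posterior samples recover the true velocity and depth without a carefully tuned starting model, which is the practical advantage the abstract highlights over linearized inversion.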


2020 ◽  
Author(s):  
Gina-Maria Geffers ◽  
Ian Main ◽  
Mark Naylor

The Gutenberg-Richter (GR) b-value represents the relative proportion of small to large earthquakes in a scale-free population. For tectonic seismicity, this is often close to unity, but some studies have shown the b-value to be elevated (>1) in both volcanic and induced seismicity. However, many of these studies have used relatively small datasets, in both sample size and magnitude range, which easily introduces biases, for instance catalogues that are incomplete even above the completeness magnitude Mc, the threshold above which all events are assumed to be recorded. At high magnitudes, the scale-free behaviour must break down because natural tectonic and volcano-tectonic processes cannot release infinite energy, and the magnitude at which this breakdown occurs is difficult to estimate accurately. In particular, it can be challenging to distinguish between regions of unlimited scale-free behaviour and physical roll-off at larger magnitudes. The latter model is often referred to as the modified Gutenberg-Richter (MGR) distribution.

We use the MGR distribution to describe the breakdown of scale-free behaviour at large magnitudes, introducing a roll-off parameter (θ) to the incremental distribution. Applying a maximum likelihood method to estimate the b-value can violate the implicit assumption that the underlying model is GR. If this is the case, the method will return a biased b-value rather than indicate that it is inappropriate for the underlying model. Using synthetic data and testing on various earthquake catalogues, we show that with little data and low bandwidth it is statistically challenging to test whether a sample is representative of scale-free GR behaviour or is controlled primarily by the finite-size roll-off of the MGR distribution.
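The bias described above can be reproduced with a small synthetic experiment. The Aki maximum-likelihood b-value estimator assumes an unbounded GR distribution, so applying it to magnitudes drawn from a truncated GR distribution (used here as one simple stand-in for finite-size roll-off, not the authors' MGR parameterization) over a narrow bandwidth inflates the estimate. A sketch with illustrative parameters:

```python
import math, random

random.seed(1)

def aki_b(mags, mc):
    """Aki (1965) maximum-likelihood b-value for continuous magnitudes,
    assuming an unbounded Gutenberg-Richter distribution."""
    return math.log10(math.e) / (sum(mags) / len(mags) - mc)

def sample_truncated_gr(n, b, mc, m_max):
    """Inverse-CDF sampling from a GR distribution truncated at m_max."""
    beta = b * math.log(10)
    c = 1 - math.exp(-beta * (m_max - mc))
    return [mc - math.log(1 - random.random() * c) / beta for _ in range(n)]

b_true, mc = 1.0, 2.0
# Wide bandwidth: truncation barely matters, estimate is nearly unbiased.
wide = aki_b(sample_truncated_gr(5000, b_true, mc, 8.0), mc)
# Narrow bandwidth: truncation at m_max = 3.5 biases the estimate high.
narrow = aki_b(sample_truncated_gr(5000, b_true, mc, 3.5), mc)
print(f"b (wide) = {wide:.2f}, b (narrow) = {narrow:.2f}")
```

In both cases the estimator returns a plausible-looking number; nothing in the output signals that the narrow-bandwidth value is an artefact of the roll-off rather than a genuinely elevated b-value, which is exactly the pitfall the abstract describes.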


2007 ◽  
Vol 10 (1) ◽  
pp. 117-128 ◽  
Author(s):  
David G. Delaney ◽  
Corinne D. Sperling ◽  
Christiaan S. Adams ◽  
Brian Leung

2019 ◽  
Vol 38 (3) ◽  
pp. 226-231 ◽  
Author(s):  
Andreas Wuestefeld ◽  
Matt Wilks

The success of a distributed acoustic sensing (DAS) survey depends on strain energy impinging at favorable angles on most sections of the fiber. Although the fiber is constrained to the path of the wellbore, various design parameters can influence the recorded DAS amplitude. We present here a method to model the performance of DAS installations. We use precise raypath modeling in complex velocity models to determine ray incidence angles and show variations between different wrapping angles and detection thresholds. We then propose a way to evaluate the performance of a DAS acquisition design, and to optimize processing, based on the percentage of DAS channels above a chosen amplitude threshold. For microseismic studies, the best wrapping angle of the fiber can be determined, which may be defined as the one covering the target area most homogeneously. For vertical seismic profiling projects, surface shot positions can be evaluated for their predicted recorded energy.
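The channel-coverage metric proposed above can be sketched as follows, using the common approximation that the DAS amplitude of a P wave scales as cos² of the angle between the ray and the fiber axis; the incidence angles and threshold below are invented for illustration, standing in for raytraced values:

```python
import math

def das_response(incidence_deg):
    """Approximate normalised DAS amplitude for a P wave: the fiber senses
    axial strain, which scales roughly as cos^2 of the ray-fiber angle."""
    return math.cos(math.radians(incidence_deg)) ** 2

# Hypothetical ray incidence angles (degrees) at successive fiber channels,
# e.g. from raytracing a microseismic source to a deviated well.
angles = [5, 15, 25, 35, 45, 55, 65, 75, 85]
THRESHOLD = 0.25   # assumed detection threshold on normalised amplitude

responses = [das_response(a) for a in angles]
usable = sum(r >= THRESHOLD for r in responses)
coverage = usable / len(angles)
print(f"{usable}/{len(angles)} channels above threshold "
      f"({100 * coverage:.0f}% coverage)")
```

Repeating this for candidate wrapping angles or shot positions and comparing the coverage percentages is the design-evaluation loop the abstract describes.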


Author(s):  
Jikai Sun ◽  
Fumiaki Nagashima ◽  
Hiroshi Kawase ◽  
Shinichi Matsushima ◽  
Baoyintu

Abstract. Most of the buildings damaged by the mainshock of the 2016 Kumamoto earthquake were concentrated in downtown Mashiki in Kumamoto Prefecture, Japan. We obtained 1D subsurface velocity structures at 535 grid points covering this area based on 57 identified velocity models, used linear and equivalent linear analyses to obtain site-specific ground motions, and generated detailed distribution maps of the peak ground acceleration and velocity in Mashiki. We determined the construction period of every individual building in the target area corresponding to updates to the Japanese building codes. Finally, we estimated the damage probability using a nonlinear response model of wooden structures of different ages. The distribution map of the estimated damage probabilities was similar to the map of the damage ratios from a field survey, and moderate damage was estimated in the northwest, where no damage survey was conducted. We found that both the detailed site amplification and the construction period of wooden houses are important factors for evaluating the seismic risk of wooden structures.
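The final step, mapping ground motion and construction period to a damage probability, is commonly expressed with lognormal fragility curves. A sketch of that idea follows; the medians and dispersions are illustrative placeholders, not the paper's calibrated wooden-structure models:

```python
import math

def damage_probability(pga_g, median_g, beta):
    """Lognormal fragility curve: P(damage | PGA), via the normal CDF."""
    x = (math.log(pga_g) - math.log(median_g)) / beta
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical fragility parameters by construction period, reflecting
# successive strengthenings of the Japanese building code.
fragility = {
    "pre-1981": (0.6, 0.5),     # (median capacity in g, log-std)
    "1981-2000": (0.9, 0.5),
    "post-2000": (1.2, 0.5),
}

pga = 0.8  # simulated site-specific PGA in g at one grid point
probs = {period: damage_probability(pga, m, b)
         for period, (m, b) in fragility.items()}
for period, p in probs.items():
    print(f"{period}: P(damage) = {p:.2f}")
```

Evaluating such curves at every grid point with the site-specific ground motion, and with the curve chosen by each building's construction period, yields a damage-probability map of the kind compared with the field survey.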


2017 ◽  
Vol 60 (2) ◽  
Author(s):  
Simona Carannante ◽  
Ezio D'Alema ◽  
Sara Lovati ◽  
Marco Massa ◽  
Paolo Augliera ◽  
...  

Geophysics ◽  
2011 ◽  
Vol 76 (5) ◽  
pp. WB119-WB126 ◽  
Author(s):  
Elive Menyoli ◽  
Shengwen Jin ◽  
Shiyong Xu ◽  
Stuart Graber

Marine wide-azimuth data in the Gulf of Mexico, reverse time migration (RTM), and anisotropic velocity models have led to significant improvement in subsalt imaging. However, imaging of some steeply dipping subsalt targets, such as three-way closures against salt, is still difficult. This can be attributed to poor illumination and noise contamination from various shot records. We apply the visibility analysis method, which quantitatively determines which shot records contribute the most energy to a specific subsalt prospect area. As a result, we selectively migrate only those shot records, thereby reducing noise contamination from low-energy-contributing shot records and improving signal continuity and trap definition in the target area. Like conventional illumination analysis, the computation takes into account the overburden velocity distribution, acquisition geometry, target reflectivity, and dip angle. We used 2D and 3D synthetic data examples to test the concepts and applicability of the method. A Gulf of Mexico case study using wide-azimuth data demonstrated its use in an industry-scale project. It is shown that for the particular 60°–65° subsalt target of interest, only 30% of the wide-azimuth shot records are sufficient for the imaging. By reducing noise, the image results show significant improvement in the subsalt area compared to the full-shot-record RTM volume.
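Once visibility analysis has assigned each shot record an energy contribution at the target, the selection step reduces to ranking shots and keeping the top fraction. A minimal sketch with placeholder energies (the real values would come from the visibility computation, not be hand-assigned):

```python
# Hypothetical modeled energy contribution of each shot record at the
# subsalt target, as produced by visibility analysis (placeholder values).
shot_energy = {
    "shot_001": 0.9, "shot_002": 0.1, "shot_003": 0.7,
    "shot_004": 0.05, "shot_005": 0.6, "shot_006": 0.2,
    "shot_007": 0.8, "shot_008": 0.15, "shot_009": 0.3,
    "shot_010": 0.4,
}

FRACTION = 0.3  # migrate only the top 30% of shots, as in the case study

# Rank shots by modeled energy and keep the strongest contributors; only
# these records are passed to RTM, suppressing noise from the rest.
ranked = sorted(shot_energy, key=shot_energy.get, reverse=True)
selected = ranked[: max(1, round(FRACTION * len(ranked)))]
print("shots to migrate:", sorted(selected))
```

The design choice here is simply a threshold on ranked contributions; in practice the cutoff fraction would be tuned against image quality in the target area.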


Author(s):  
Nguyen Viet Ky ◽  
Tran Thi Phi Oanh ◽  
Ho Chi Thong ◽  
Nguyen Dinh Tu

Pollution of underground water, especially the groundwater of the Pleistocene layers, has been recognized by many researchers. These records are often based on the water quality results of different monitoring networks: the National Monitoring Network, the monitoring network of the Department of Natural Resources and Environment of Ho Chi Minh City, and the monitoring network of the Saigon Water Supply Company. The records show that the underground water of the Pleistocene layers contains metals such as copper, lead (Pb), zinc, arsenic, cadmium, manganese (Mn), aluminum (Al), nickel, and mercury, although the content of many of these metals has not reached pollution limits. In this study, the authors used the monitoring results of the National Monitoring Network for the period 2000–2016 and focused on Al, Mn and Pb in the water of the Pleistocene aquifers, which have already exceeded the allowable standards at some monitoring sites. The results show that the content of Mn and Al in the Pleistocene aquifers varied significantly between 2009 and 2013, while Pb increased sharply from 2013 to 2016. The causes of the Al and Mn pollution are mainly the geological and hydro-geological conditions and the impacts of heavy groundwater exploitation in the Pleistocene aquifers.

