Development of a Global Spatio-Temporal Seismicity Model and Its Application to the Vrancea Seismic Zone, Romania

2021
Author(s):
Nastasja Anais Scholz

This study investigates the temporal behaviour of major earthquakes in the Vrancea Seismic Zone (VSZ) in Romania. I used the Romplus catalogue, a compilation of several sources that spans the period from 984 AD to 2005 and contains data of varying quality. The catalogue contains only Vrancean earthquakes and comprises more than 8000 events. Events of quality 'A', 'B' and 'C' were used for modeling; qualities 'D' and '=' were found to be too unreliable. Using the b-value, I concluded that 3.5 is the appropriate cut-off magnitude for earthquakes after 1980 at depths of 60 km and greater. In the process, I detected an increase in the b-value of about 0.2 units after 1986; the reason for this increase could not be determined. Plotting the Gutenberg-Richter relation for several time and depth intervals showed that, at depths greater than 60 km, there are too many M7 earthquakes compared to small shocks. The shape of the Gutenberg-Richter relation is similar to that expected from the characteristic earthquake model (Schwartz and Coppersmith, 1984; Wesnousky, 1994).

A strike of 53 degrees was found, and the earthquake coordinates were rotated accordingly. The resulting view of the slab showed the confined volume in which the earthquakes occur, as well as the 'aseismic part' of the slab between 40 km and 60 km depth. The seismicity appears to reach a depth of 180 km. Only the earthquakes in the slab, below a depth of 60 km, show clustering behaviour; furthermore, the M7 earthquakes all occurred in the slab. A depth limit of 60 km was therefore introduced for modeling. To identify aftershocks in the catalogue, the temporal behaviour of the Vrancea earthquakes was examined. The mean magnitude increases after each major earthquake, indicating an aftershock process; this was confirmed by the rate of occurrence, which increased after the 1990 earthquakes. The rate of occurrence is too low for the first 580 days after 1980, possibly due to insufficient earthquake detection in that period. Since all the damaging M7 earthquakes occurred in the slab, shallow earthquakes had to be considered separately: earthquakes above and below the 60 km depth limit were treated as separate data sets. For the shallow earthquakes, there was a sharp increase in the apparent b-value below the cut-off magnitude of 3.5; after reaching a value of 2.4, the b-value falls steeply. This was attributed to biases in the magnitude calculation, and I used the rounded value of 3.5 as the cut-off magnitude for the shallow earthquakes as well.

Having determined the magnitude cut-off and the depth and time limits, modeling could begin. The model yields two important parameters: the proportion of aftershocks and the time to the next earthquake. Using the Maximum Likelihood Method, the best fit was found for a data set starting in 1980 and consisting of earthquakes with a cut-off magnitude of 3.5 and depths of 60 km and greater. According to the model, this data set consists of 13 ± 5% aftershocks and has an inter-event time for new earthquakes of 13 ± 1 days. Using several cut-off magnitudes, it was found that the calculated inter-event time for these earthquakes is consistent with the Gutenberg-Richter law. In contrast, the predicted inter-event time for M7 earthquakes does not match the one found in the catalogue: while the Maximum Likelihood Method yields a recurrence time of 814 years, the data show a recurrence time of only 23 years. The model also fits the data set of the 1990 aftershocks very well, giving an aftershock proportion of 58 ± 15%. The data set for the 1986 earthquake did not lead to good results, probably due to missing aftershocks shortly after the main shock. Comparing the model and the data with a pure Poisson model, I could see that earthquakes tend to cluster in the first days after a major event; several days later, their behaviour changes and becomes similar to that proposed by the seismic gap model. Looking at the ratio between the probabilities of the model of Smith and Christophersen and of the Poisson model, clustering was found in the first 24 hours after the main shock, followed by decreased seismicity that reverts to Poissonian behaviour after 100 days. I therefore concluded that aftershock behaviour is only relevant during the first 24 hours following a major earthquake. After 24 hours, seismic hazard drops below that expected from the Poisson model for the following 100 days, until seismicity becomes Poissonian again. Additionally, I suggest that the 1990 earthquake and its aftershocks should be considered a 'model earthquake' for future events, as it appears representative of earthquake behaviour in the VSZ.
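
The cut-off magnitude and b-value estimates above rest on the standard Gutenberg-Richter frequency-magnitude treatment. As a minimal illustration (not the thesis' own code), the sketch below computes a b-value above a cut-off magnitude with Aki's maximum-likelihood estimator and its b/sqrt(N) standard error; the function name and the synthetic catalogue are assumptions for demonstration.

```python
import numpy as np

def b_value_aki(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for magnitudes at or above the
    cut-off m_c, with Utsu's correction for magnitudes binned to width dm."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    return b, b / np.sqrt(m.size)   # estimate and its asymptotic standard error

# Hypothetical usage: synthetic magnitudes roughly following a Gutenberg-Richter
# distribution, standing in for post-1980 events deeper than 60 km with the
# cut-off M_c = 3.5 identified above.
rng = np.random.default_rng(0)
catalog_mags = np.round(3.5 + rng.exponential(scale=np.log10(np.e), size=2000), 1)
b, b_err = b_value_aki(catalog_mags, m_c=3.5)
print(f"b = {b:.2f} +/- {b_err:.2f}")
```

Restricting the same estimator to different time windows is what reveals shifts such as the roughly 0.2-unit increase after 1986 reported above.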


2021
Author(s):  
Marie Beisemann

Several psychometric tests generate count data, e.g. the number of ideas in divergent thinking tasks. The most prominent count data IRT model, the Rasch Poisson Counts Model (RPCM), assumes constant discriminations across items as well as the equidispersion assumption of the Poisson distribution (i.e., E(X) = Var(X)), considerably limiting modeling flexibility. Violations of these assumptions are associated with impaired ability, reliability, and standard error estimates. Models have been proposed to relax one or the other assumption. The Two-Parameter Poisson Counts Model (2PPCM) allows varying discriminations but retains the equidispersion assumption. The Conway-Maxwell-Poisson Counts Model (CMPCM) allows for modeling equi- but also over- and underdispersion (more or less variance than implied by the mean under the Poisson distribution) but assumes constant discriminations. The present work introduces the Two-Parameter Conway-Maxwell-Poisson (2PCMP) model, which generalizes the RPCM, the 2PPCM, and the CMPCM (all contained as special cases) to allow for varying discriminations and dispersions within one model. A marginal maximum likelihood method based on a fixed-quadrature Expectation-Maximization (EM) algorithm is derived. Standard errors as well as two methods for latent ability estimation are provided. An implementation of the 2PCMP model in R and C++ is provided. Two simulation studies examine the model's statistical properties and compare the 2PCMP model to established methods. Data from divergent thinking tasks are re-analyzed with the 2PCMP model to illustrate the model's flexibility and ability to test assumptions of special cases.
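
For readers unfamiliar with the Conway-Maxwell-Poisson family the abstract builds on, the sketch below evaluates its log-pmf; the dispersion parameter nu recovers the Poisson at nu = 1, gives overdispersion for nu < 1 and underdispersion for nu > 1. The truncation limit and the 2PCMP-style log-link mentioned in the comments are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def cmp_logpmf(x, lam, nu, k_max=200):
    """Log-pmf of the Conway-Maxwell-Poisson distribution,
    P(X = x) = lam**x / ((x!)**nu * Z(lam, nu)),
    with the infinite-series normalizer Z truncated at k_max terms."""
    k = np.arange(k_max + 1)
    log_z = logsumexp(k * np.log(lam) - nu * gammaln(k + 1))
    return x * np.log(lam) - nu * gammaln(x + 1) - log_z

# nu = 1 recovers the Poisson distribution; nu < 1 gives overdispersion,
# nu > 1 underdispersion.  In a 2PCMP-style IRT setting one would model
# log(lam) for person j on item i through an item-specific discrimination
# and easiness (an assumed link here, not the paper's exact specification).
print(np.exp(cmp_logpmf(np.arange(5), lam=2.0, nu=1.0)))   # matches Poisson(2) pmf
```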


2020
Vol 15 (S359)
pp. 173-174
Author(s):
A. Cortesi
L. Coccato
M. L. Buzzo
K. Menéndez-Delmestre
T. Goncalves
...  

We present the latest data release of the Planetary Nebulae Spectrograph Survey (PNS) of ten lenticular galaxies and two spiral galaxies. With this data set we are able to recover the galaxies' kinematics out to several effective radii. We use a maximum likelihood method to decompose the disk and spheroid kinematics, and we compare them with the kinematics of spiral and elliptical galaxies. We build the Tully-Fisher (TF) relation for these galaxies and compare it with data from the literature and simulations. We find that the disks of lenticular galaxies are hotter than the disks of spiral galaxies at low redshifts, but are still dominated by rotation velocity. The mechanism responsible for the formation of these lenticular galaxies is neither major mergers nor gentle quenching driven by stripping or Active Galactic Nuclei (AGN) feedback.
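
A minimal sketch of the kind of likelihood-based disk/spheroid decomposition described above, assuming each planetary nebula's line-of-sight velocity is drawn from a mixture of a rotating, dynamically cold disk and a non-rotating, dynamically hot spheroid; the component forms, parameter names, and synthetic data are illustrative assumptions, not the survey's actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_like(params, v_los, phi):
    """Negative log-likelihood of a two-component (disk + spheroid) model for
    planetary-nebula line-of-sight velocities; phi is the position angle of
    each PN relative to the galaxy's major axis (radians)."""
    f_disk, v_rot, sigma_disk, sigma_sph = params
    if not (0.0 < f_disk < 1.0) or sigma_disk <= 0.0 or sigma_sph <= 0.0:
        return np.inf
    # Disk: net rotation, so the mean LOS velocity varies as cos(phi); low dispersion.
    disk = f_disk * norm.pdf(v_los, loc=v_rot * np.cos(phi), scale=sigma_disk)
    # Spheroid: no net rotation; high dispersion.
    spheroid = (1.0 - f_disk) * norm.pdf(v_los, loc=0.0, scale=sigma_sph)
    return -np.sum(np.log(disk + spheroid))

# Hypothetical usage with synthetic velocities (km/s):
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 2.0 * np.pi, size=300)
is_disk = rng.random(300) < 0.6
v_obs = np.where(is_disk,
                 rng.normal(150.0 * np.cos(phi), 40.0, size=300),
                 rng.normal(0.0, 120.0, size=300))
fit = minimize(neg_log_like, x0=[0.5, 100.0, 60.0, 100.0],
               args=(v_obs, phi), method="Nelder-Mead")
print(fit.x)   # [disk fraction, rotation velocity, disk sigma, spheroid sigma]
```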


2021
pp. 1-11
Author(s):
Velichka Traneva
Stoyan Tranev

Analysis of variance (ANOVA) is an important method in data analysis, developed by Fisher. There are situations in which the data are imprecise. In order to analyze such data, the aim of this paper is to introduce for the first time an intuitionistic fuzzy two-factor ANOVA (2-D IFANOVA) without replication, as an extension of the classical ANOVA and the one-way IFANOVA, for the case where the data are intuitionistic fuzzy rather than real numbers. The proposed approach employs the apparatus of intuitionistic fuzzy sets (IFSs) and index matrices (IMs). The paper also analyzes a unique data set of daily ticket sales for a year in a multiplex of Cinema City Bulgaria, part of the Cineworld PLC Group, applying both the two-factor ANOVA and the proposed 2-D IFANOVA to study the influence of the “season” and “ticket price” factors. A comparative analysis of the results obtained after applying ANOVA and 2-D IFANOVA to the real data set is also presented.
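
For reference, the sketch below implements the classical two-factor ANOVA without replication that the 2-D IFANOVA extends; the intuitionistic-fuzzy arithmetic over index matrices is not reproduced, and the factor levels and sales figures are made-up stand-ins for the “season” and “ticket price” factors.

```python
import numpy as np
from scipy import stats

def two_way_anova_no_replication(table):
    """Classical two-factor ANOVA without replication.
    Rows = levels of factor A (e.g. season), columns = levels of factor B (e.g. price band)."""
    y = np.asarray(table, dtype=float)
    r, c = y.shape
    grand = y.mean()
    ss_a = c * np.sum((y.mean(axis=1) - grand) ** 2)   # factor A (rows)
    ss_b = r * np.sum((y.mean(axis=0) - grand) ** 2)   # factor B (columns)
    ss_tot = np.sum((y - grand) ** 2)
    ss_err = ss_tot - ss_a - ss_b                      # residual (absorbs interaction)
    df_a, df_b, df_err = r - 1, c - 1, (r - 1) * (c - 1)
    f_a = (ss_a / df_a) / (ss_err / df_err)
    f_b = (ss_b / df_b) / (ss_err / df_err)
    return (f_a, stats.f.sf(f_a, df_a, df_err)), (f_b, stats.f.sf(f_b, df_b, df_err))

# Hypothetical 4 seasons x 3 price bands table of mean daily ticket sales:
sales = [[410, 380, 300],
         [520, 470, 390],
         [610, 560, 450],
         [450, 420, 340]]
print(two_way_anova_no_replication(sales))   # (F, p) for each factor
```

With no replication the residual mean square absorbs any interaction, which is why this classical test assumes the interaction is negligible.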


Mathematics
2021
Vol 9 (15)
pp. 1815
Author(s):
Diego I. Gallardo
Mário de Castro
Héctor W. Gómez

A cure rate model under the competing risks setup is proposed. For the number of competing causes related to the occurrence of the event of interest, we posit the one-parameter Bell distribution, which accommodates overdispersed counts. The model is parameterized in the cure rate, which is linked to covariates. Parameter estimation is based on the maximum likelihood method. Estimates are computed via the EM algorithm. In order to compare different models, a selection criterion for non-nested models is implemented. Results from simulation studies indicate that the estimation method and the model selection criterion have a good performance. A dataset on melanoma is analyzed using the proposed model as well as some models from the literature.
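
As a sketch of the competing-causes construction, the code below uses an assumed parameterization of the one-parameter Bell distribution, P(N = n) = θ^n e^(1−e^θ) B_n / n!, whose probability generating function yields the cure fraction P(N = 0) and the improper population survival function. The latent-time survival model and all numbers are illustrative, and the paper's covariate link and EM-based estimation are not reproduced.

```python
import math
import numpy as np

def bell_number(n):
    """Bell number B_n via the Bell-triangle recurrence."""
    row = [1]
    for _ in range(n):
        new_row = [row[-1]]
        for val in row:
            new_row.append(new_row[-1] + val)
        row = new_row
    return row[0]

def bell_pmf(n, theta):
    """pmf of the one-parameter Bell distribution (assumed parameterization):
    P(N = n) = theta**n * exp(1 - exp(theta)) * B_n / n!."""
    return theta**n * math.exp(1.0 - math.exp(theta)) * bell_number(n) / math.factorial(n)

def cure_fraction(theta):
    """Cure probability P(N = 0): a subject with zero latent causes never fails."""
    return math.exp(1.0 - math.exp(theta))

def population_survival(t, theta, latent_surv):
    """Improper population survival S_pop(t) = G_N(S(t)), with G_N the Bell pgf
    exp(exp(theta * s) - exp(theta)) and S(t) the latent event-time survival."""
    s = latent_surv(t)
    return np.exp(np.exp(theta * s) - np.exp(theta))

# Hypothetical usage: exponential latent times and an arbitrary theta.
theta = 1.0
print(cure_fraction(theta))                     # about 0.18
print(population_survival(np.array([0.0, 1.0, 5.0]), theta,
                          lambda t: np.exp(-0.3 * t)))
```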


2020
Vol 72 (1)
Author(s):  
Ryuho Kataoka

Statistical distributions are investigated for magnetic storms, sudden commencements (SCs), and substorms to identify the possible amplitudes of one-in-100-year and one-in-1000-year events from a limited data set of less than 100 years. The lists of magnetic storms and SCs are provided by the Kakioka Magnetic Observatory, while the lists of substorms are obtained from SuperMAG. It is found that the majority of events essentially follow the log-normal distribution, as expected for the random output of a complex system. However, it is uncertain whether large-amplitude events follow the same log-normal distributions; they appear instead to follow power-law distributions. Based on the statistical distributions, the probable amplitudes of the 100-year (1000-year) events can be estimated for magnetic storms, SCs, and substorms as approximately 750 nT (1100 nT), 230 nT (450 nT), and 5000 nT (6200 nT), respectively. The possible origin of these statistical distributions is also discussed with reference to other space weather phenomena such as solar flares, coronal mass ejections, and solar energetic particles.
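
A minimal sketch of the return-level idea: fit a log-normal to observed amplitudes, convert the return period into a per-event exceedance probability using the observed yearly event rate, and invert the fitted distribution. This deliberately ignores the power-law tail behaviour discussed above for the largest events, and the data, rates, and function names are synthetic assumptions.

```python
import numpy as np
from scipy import stats

def return_level(amplitudes_nT, years_observed, return_period_years):
    """Amplitude exceeded on average once per `return_period_years`, assuming
    i.i.d. log-normal event amplitudes and a constant yearly event rate."""
    amps = np.asarray(amplitudes_nT, dtype=float)
    rate = amps.size / years_observed                     # events per year
    p_exceed = 1.0 / (return_period_years * rate)         # required per-event exceedance prob.
    shape, loc, scale = stats.lognorm.fit(amps, floc=0)
    return stats.lognorm.isf(p_exceed, shape, loc=loc, scale=scale)

# Hypothetical usage with synthetic storm amplitudes over ~90 years of records:
rng = np.random.default_rng(2)
storms = rng.lognormal(mean=4.5, sigma=0.7, size=600)     # fake amplitudes in nT
print(return_level(storms, years_observed=90, return_period_years=100))
```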


2012
Vol 57 (1)
Author(s):
SEYED EHSAN SAFFAR
ROBIAH ADNAN
WILLIAM GREENE

A Poisson model is typically assumed for count data. In many cases the dependent variable contains a large number of zeros, and as a result its mean and variance are no longer equal: the variance becomes much larger than the mean, which is called over-dispersion. The Poisson model is therefore no longer suitable for such data, and a hurdle Poisson regression model is suggested to overcome the over-dispersion problem. Furthermore, the response variable in such cases is censored for some values. In this paper, a censored hurdle Poisson regression model is introduced for count data with many zeros. The model relates a response variable to one or more explanatory variables. The estimation of the regression parameters using the maximum likelihood method is discussed, and the goodness-of-fit of the regression model is examined. We study the effects of right censoring on the estimated parameters and their standard errors via an example.
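
A rough sketch of a right-censored hurdle Poisson log-likelihood of the kind described: a logit part for the zero hurdle and a zero-truncated Poisson part for positive counts, with censored observations contributing a tail probability. Using the same design matrix for both parts, the censoring scheme, and the synthetic data are simplifying assumptions rather than the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit
from scipy.stats import poisson

def neg_loglik(params, X, y, censored):
    """Negative log-likelihood of a right-censored hurdle Poisson regression.
    For simplicity the same design matrix X feeds both the logit part
    (probability of a non-zero count) and the log-linear Poisson part."""
    k = X.shape[1]
    gamma, beta = params[:k], params[k:]
    p_pos = expit(X @ gamma)                  # P(Y > 0)
    lam = np.exp(X @ beta)                    # Poisson mean of the positive part
    log_trunc = -np.log1p(-np.exp(-lam))      # -log(1 - e^{-lam}): zero-truncation
    ll = np.where(
        y == 0,
        np.log1p(-p_pos),                     # structural/sampling zero
        np.where(
            censored,
            # Only Y >= y is known: P(Y>0) * P(Pois >= y) / (1 - e^{-lam})
            np.log(p_pos) + np.log(poisson.sf(y - 1, lam)) + log_trunc,
            # Fully observed positive count: zero-truncated Poisson pmf
            np.log(p_pos) - lam + y * np.log(lam) - gammaln(y + 1) + log_trunc,
        ),
    )
    return -np.sum(ll)

# Hypothetical usage with synthetic zero-inflated counts, right-censored at 5:
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(2.0, size=n) * (rng.random(n) > 0.4)
cens_point = 5
censored = y >= cens_point
y = np.minimum(y, cens_point)
res = minimize(neg_loglik, x0=np.zeros(2 * X.shape[1]),
               args=(X, y, censored), method="BFGS")
print(res.x)   # [logit coefficients, Poisson coefficients]
```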


2015
Vol 2015
pp. 1-8
Author(s):
K. S. Sultan
A. S. Al-Moisheer

We discuss the two-component mixture of the inverse Weibull and lognormal distributions (MIWLND) as a lifetime model. First, we discuss the properties of the proposed model, including the reliability and hazard functions. Next, we discuss the estimation of the model parameters by the maximum likelihood method (MLE). We also derive expressions for the elements of the Fisher information matrix. We then demonstrate the usefulness of the proposed model by fitting it to a real data set. Finally, we draw some concluding remarks.
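
A minimal numerical-MLE sketch for a two-component inverse Weibull plus lognormal mixture, using scipy's `invweibull` (Fréchet-type) and `lognorm` parameterizations, which may differ from the paper's; the starting values and synthetic lifetimes are assumptions for demonstration.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood of a two-component mixture of an inverse Weibull
    (scipy's `invweibull`) and a lognormal distribution."""
    p, c, s_iw, sigma, s_ln = params
    if not (0.0 < p < 1.0) or min(c, s_iw, sigma, s_ln) <= 0.0:
        return np.inf
    pdf = (p * stats.invweibull.pdf(x, c, scale=s_iw)
           + (1.0 - p) * stats.lognorm.pdf(x, sigma, scale=s_ln))
    return -np.sum(np.log(pdf))

# Hypothetical usage with synthetic lifetimes:
rng = np.random.default_rng(4)
data = np.concatenate([
    stats.invweibull.rvs(2.5, scale=3.0, size=120, random_state=rng),
    stats.lognorm.rvs(0.5, scale=8.0, size=180, random_state=rng),
])
fit = minimize(neg_loglik, x0=[0.5, 2.0, 2.0, 0.7, 5.0],
               args=(data,), method="Nelder-Mead")
print(fit.x)   # [mixing weight, IW shape, IW scale, LN sigma, LN scale]
```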


1980
Vol 70 (1)
pp. 223-241
Author(s):
Larry Gedney
Steve Estes
Nirendra Biswas

Since a series of moderate earthquakes near Fairbanks, Alaska, in 1967, the “Fairbanks seismic zone” has maintained a consistently high level of seismicity interspersed with sporadic earthquake swarms. Five swarms occurring since 1970 demonstrate that tightly compacted centers of activity have tended to migrate away from the epicentral area of the 1967 earthquakes. Comparative b-coefficients of the first four swarms indicate that they occurred under different relative stress conditions than the last episode, which exhibited a higher b-value and was, in fact, a main shock of magnitude 4.6 with a rapidly decaying aftershock sequence. This last recorded sequence, in February 1979, was an extension to greater depths along a lineal seismic zone whose first recorded activation occurred during a swarm two years earlier. Focal mechanism solutions indicate a north-south orientation of the greatest principal stress axis, σ1, in the area. A dislocation process related to crustal spreading between strands of a right-lateral fault, similar to that inferred for southern California, is suggested.
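
As a rough illustration of comparing b-coefficients between swarms, the sketch below computes Aki's maximum-likelihood b-value with its b/√N standard error for two magnitude samples and checks whether the difference exceeds the combined uncertainty; the magnitudes and the completeness level are invented for demonstration.

```python
import numpy as np

def b_with_error(mags, m_min, dm=0.1):
    """Aki (1965) b-value and its standard error b / sqrt(N) for one swarm,
    with Utsu's correction for magnitudes binned to width dm."""
    m = np.asarray(mags, dtype=float)
    b = np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
    return b, b / np.sqrt(m.size)

# Hypothetical magnitudes (above a common completeness level) for two swarms:
swarm_a = [1.6, 1.8, 1.7, 2.0, 2.3, 1.9, 2.1, 1.7, 2.6, 1.8]
swarm_b = [1.7, 2.4, 1.9, 3.1, 2.2, 2.8, 1.8, 2.5, 2.0, 3.4]
(b_a, e_a), (b_b, e_b) = b_with_error(swarm_a, 1.6), b_with_error(swarm_b, 1.6)
# Crude comparison: the difference is notable if it exceeds the combined error.
print(b_a, b_b, abs(b_a - b_b) > np.hypot(e_a, e_b))
```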


1983
Vol 73 (3)
pp. 813-829
Author(s):
P. Yi-Fa Huang
N. N. Biswas

This paper describes the characteristics of the Rampart seismic zone by means of the aftershock sequence of the Rampart earthquake (ML = 6.8), which occurred in central Alaska on 29 October 1968. The magnitudes of the aftershocks ranged from about 1.6 to 4.4, which yielded a b value of 0.96 ± 0.09. The locations of the aftershocks outline a NNE-SSW trending aftershock zone about 50 km long, which coincides with the offset of the Kaltag fault from the Victoria Creek fault. The rupture zone dips steeply (≈80°) to the west and extends from the surface to a depth of about 10 km. Fault plane solutions for a group of selected aftershocks, which occurred over a period of 22 days after the main shock, show the simultaneous occurrence of strike-slip and normal faulting. A comparison of the trends in seismicity between the neighboring areas shows that the Rampart seismic zone lies outside the area of underthrusting of the lithospheric plate in southcentral and central Alaska. The seismic zone outlined by the aftershock sequence appears to represent the formation of an intraplate fracture caused by regional northwest compression.

