Retrospective short-term forecasting experiment in Italy based on the occurrence of strong (fore) shocks

Author(s):  
P Gasperini ◽  
E Biondini ◽  
B Lolli ◽  
A Petruccelli ◽  
G Vannucci

Summary In a recent work we computed the relative frequencies with which strong shocks (4.0 ≤ Mw < 5.0), widely felt by the population, were followed in the same area by potentially destructive main shocks (Mw ≥ 5.0) in Italy. Assuming the stationarity of seismic release properties, such frequencies can tentatively be used to estimate the probabilities of potentially destructive shocks after the occurrence of future strong shocks. This allows us to set up an alarm-based forecasting hypothesis related to the occurrence of strong foreshocks. This hypothesis is tested retrospectively on the data of a homogenized seismic catalogue of the Italian area against a purely random hypothesis that simply forecasts the target main shocks proportionally to the space-time fraction occupied by the alarms. We compute the latter fraction in two ways: (a) as the ratio between the average time covered by the alarms in each area and the total duration of the forecasting experiment (60 years), and (b) as the same ratio but weighted by the past frequency of occurrence of earthquakes in each area. In both cases the overall retrospective performance of our forecasting algorithm is definitely better than the random case. Considering an alarm duration of three months, the algorithm retrospectively forecasts more than 70 per cent of all shocks with Mw ≥ 5.5 that occurred in Italy from 1960 to 2019, with a total space-time fraction covered by the alarms of the order of 2 per cent. For the same space-time coverage, the algorithm is also able to retrospectively forecast more than 40 per cent of the first main shocks with Mw ≥ 5.5 of the seismic sequences that occurred in the same time interval. Given the good reliability of our results, the forecasting algorithm is set up and ready to be tested prospectively as well, in parallel with other ongoing procedures operating on the Italian territory.
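A minimal sketch of the two baseline computations described in (a) and (b) above, assuming alarm coverage is tracked per area in days (our illustration, not the authors' code):

```python
def alarm_fraction(alarm_days_per_area, total_days, weights=None):
    """Space-time fraction occupied by alarms.

    (a) weights=None: ratio of the average alarm time per area to the
        total experiment duration (60 years in the abstract).
    (b) weights given: the same ratio, weighted by each area's past
        frequency of earthquake occurrence (weights should sum to 1).
    """
    if weights is None:  # case (a): plain average over areas
        mean_days = sum(alarm_days_per_area) / len(alarm_days_per_area)
        return mean_days / total_days
    # case (b): frequency-weighted average
    return sum(w * d for w, d in zip(weights, alarm_days_per_area)) / total_days


# Example with 3 hypothetical areas over a 60-year experiment:
days_60y = 60 * 365.25
print(alarm_fraction([400.0, 150.0, 700.0], days_60y))             # case (a)
print(alarm_fraction([400.0, 150.0, 700.0], days_60y,
                     weights=[0.5, 0.2, 0.3]))                     # case (b)
```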

2021 ◽  
Author(s):  
Luca Carbone ◽  
Rita de Nardis ◽  
Giusy Lavecchia ◽  
Laura Peruzza ◽  
Enrico Priolo ◽  
...  

During the seismic sequence that followed the devastating L'Aquila 2009 earthquake, on 27 May 2009 the OGS (Istituto Nazionale di Oceanografia e di Geofisica Sperimentale) and the GeosisLab (Laboratorio di Geodinamica e Sismogenesi, Chieti-Pescara University) installed a temporary seismometric network around the Sulmona Basin, a high-seismic-risk area of central Italy located just SE of the epicentral area. This part of the central Apennines is generally characterized by low-level seismicity organized in low-energy clusters, but it experienced destructive earthquakes in both historical and early instrumental times (Fucino 1915, XI MCS; Majella 1706, X-XI MCS; Barrea 1984, VIII MCS).

From 27 May 2009 to 22 November 2011, the temporary network provided a huge amount of continuous seismic recordings and a seismic catalogue covering the first seven months of network operation (-1.5 ≤ ML ≤ 3.7, with a completeness magnitude of 1.1) and a spatial area that stretches from the Sulmona Basin to the Marsica-Sora area. Aiming to enhance the detection of microearthquakes reported in this catalogue, we applied the matched-filter technique (MFT) to continuous waveforms properly integrated with data from permanent stations belonging to the national seismic network. Specifically, we used the open-source seismological package PyMPA to detect microseismicity from the cross-correlation of continuous data and templates. As templates we used only the best relocated events of the available seismic catalogue. Starting from 366 well-located earthquakes, we obtained a new seismic catalogue of 6084 new events (-2 < ML < 4), lowering the completeness magnitude to 0.2. To these new seismic locations we applied a declustering method to separate background seismicity from clustered seismicity in the area. All the seismicity shows a bimodal behaviour in terms of the distribution of nearest-neighbour distance/time, with two statistically distinct earthquake populations. We focused our attention on two of these clusters (C1 and C2), which together represent 60% of the catalogue. They consist of 2619 and 995 events, respectively, with magnitudes -2.0 < ML < 3.6 and -0.5 < ML < 3.2, which occurred in the Marsica-Sora area. C1 shows the typical characteristics of a seismic swarm, without a clear mainshock but with 8 more energetic events (3.0 ≤ ML ≤ 3.5); its temporal evolution is very articulated, with a total duration of one month, several bursts of seismicity, and a characteristic time extension of approximately one week. C2 instead has a different space-time evolution and consists of several swarm-like seismic sequences that are more discontinuous than C1. These swarms are described in greater detail to investigate the influence of overpressurized fluids and their space-time distribution.
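For readers unfamiliar with the matched-filter technique, the core operation is a normalized cross-correlation of a template event against the continuous data, with detections declared where the correlation exceeds a multiple of its median absolute deviation. The following numpy sketch is a generic illustration only; the study used PyMPA, whose actual interface and network-stacking logic differ:

```python
import numpy as np

def matched_filter_detect(continuous, template, n_mad=8.0):
    """Slide a waveform template along continuous data; return sample
    indices where the normalized cross-correlation exceeds n_mad times
    the median absolute deviation (MAD) of the correlation trace."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(continuous) - n + 1)
    for k in range(cc.size):
        win = continuous[k:k + n]
        s = win.std()
        cc[k] = np.dot(t, win - win.mean()) / s if s > 0 else 0.0
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.flatnonzero(cc >= n_mad * mad), cc
```

In practice the correlation is computed per station and channel and stacked across the network before thresholding; a threshold of several times the MAD is a common, though not universal, choice in matched-filter studies.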


1994 ◽  
Vol 161 ◽  
pp. 365-366
Author(s):  
M.L. Hazen

The Harvard College Observatory plate collection is unique in the world in several respects. First, it is the largest, with a total of approximately half a million plates. Second, because Harvard set up stations very early in the southern hemisphere, the collection covers the entire sky. In fact, the southern hemisphere coverage is slightly better than the northern. Third, the collection is unique in the length of the time interval over which the plates were taken. The first plates were acquired in the northern hemisphere in 1885, and in the southern hemisphere in 1891. There is a substantial gap in the late 1950s and 1960s, but patrol plates were then taken up to 1989. Fourth, the collection contains a very large number of images of a given object. For B = 15 mag or brighter, from several hundred to a thousand or more images can be found; for B = 17.5 mag, one can locate from a few to several hundred images.


2019 ◽  
Author(s):  
Vitaly Kuyukov

The holographic space-time interval leads to the concept of dualism between loops and strings.


1989 ◽  
Vol 54 (7) ◽  
pp. 1785-1794 ◽  
Author(s):  
Vlastimil Kubáň ◽  
Josef Komárek ◽  
Zbyněk Zdráhal

A FIA-FAAS apparatus containing six-channel sorption equipment with five 3 × 26 mm microcolumns packed with Spheron Oxin 1 000, Ostsorb Oxin and Ostsorb DTTA was set up. Combined with sorption from 0.002M acetate buffer at pH 4.2 and desorption with 2M HCl, copper can be determined at concentrations up to 100, 150 and 200 μg l⁻¹, respectively. For sample and eluent flow rates of 5.0 and 4.0 ml min⁻¹, respectively, and a sample injection time of 5 min, the limit of copper determination is LQ = 0.3 μg l⁻¹, the repeatability sr is better than 2% and the recovery is R = 100 ± 2%. The enrichment factor is of the order of 10² and is a linear function of the time (volume) of sample injection up to 5 min and of the sample injection flow rate up to 11 ml min⁻¹ for Spheron Oxin 1 000 and Ostsorb DTTA. For sorption times of 60 and 300 s, the sampling frequency is 70 and 35 samples/h, respectively. The parameters of the FIA-FAAS determination (acetylene-air flame) are comparable to or better than those achieved by ETA AAS. The method was applied to the determination of traces of copper in high-purity water.
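As a rough consistency check of the reported enrichment factor (the eluate volume below is our assumption; only the flow rate and injection time are given in the abstract), preconcentration on the microcolumn gives approximately

$$\mathrm{EF}\approx\frac{V_{\text{sample}}}{V_{\text{eluate}}}=\frac{5.0\ \text{ml min}^{-1}\times 5\ \text{min}}{\sim 0.25\ \text{ml}}\approx 10^{2},$$

consistent with the stated order of 10² and with the linear dependence of EF on injection time and flow rate.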


2021 ◽  
Vol 58 (1) ◽  
pp. 42-67 ◽  
Author(s):  
Mads Stehr ◽  
Anders Rønn-Nielsen

Abstract We consider a space-time random field on ${{\mathbb{R}^d} \times {\mathbb{R}}}$ given as an integral of a kernel function with respect to a Lévy basis with a convolution equivalent Lévy measure. The field obeys causality in time and is thereby not continuous along the time axis. For a large class of such random fields we study the tail behaviour of certain functionals of the field. It turns out that the tail is asymptotically equivalent to the right tail of the underlying Lévy measure. Particular examples are the asymptotic probability that there is a time point and a rotation of a spatial object with fixed radius, in which the field exceeds the level x, and that there is a time interval and a rotation of a spatial object with fixed radius, in which the average of the field exceeds the level x.
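For context (our paraphrase, in standard notation): a Lévy measure $\nu$ is convolution equivalent with index $\alpha \ge 0$ when its right tail satisfies

$$\lim_{x\to\infty}\frac{\nu\big((x-y,\infty)\big)}{\nu\big((x,\infty)\big)}=e^{\alpha y}\quad\text{for all }y\in\mathbb{R},$$

together with an integrability condition on the convolution of the tail. The tail equivalence summarized above then takes the form $\mathbb{P}(\text{functional} > x)\sim C\,\nu\big((x,\infty)\big)$ as $x\to\infty$, for a constant $C$ depending on the functional.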


1988 ◽  
Vol 10 (1) ◽  
pp. 37-42 ◽  
Author(s):  
M. J. Wheeler ◽  
Linzi Waiters

The Kemtek 1000 Sample Processor has been evaluated for precision, accuracy, speed and reliability. Precision was better than 1.0% at all volumes tested and accuracy was within ±5%. A 100-tube assay could be set up within 15 min when patient specimens plus two reagents were sampled using a two-probe system. Carry-over could be reduced to <0.01% by using a sufficient number of wash steps, the number required being related to the assay requirements. Evidence was found for adsorption of protein to the probe tubing, but inaccuracies due to this could be reduced by introducing wash steps between samples. Problems over 12 months have been minor and quickly resolved. The authors were pleased with the way the processor performed and their staff have confidence in leaving it to set up their assays.


2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs for each step in the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge to constrain the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter.

Runoff simulations are affected by equifinality and numerous other uncertainty sources, which challenge the assumption that the more expert knowledge is used, the better the results obtained will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations forced by five rainfall datasets of increasing accuracy to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and radar quantitative estimations of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps. Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and on field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed.

The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources, but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model should have in order to accurately reproduce the expected DRPs must be agreed on in advance.
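As an illustration of the variance-decomposition step (a hedged sketch: the file and column names are hypothetical, and the paper's exact ANOVA design may differ), the statsmodels package can attribute the spread in a skill score to the four factors named above:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical table: one row per modelling-chain run, with a skill score
# (e.g. Nash-Sutcliffe efficiency) and the four factors of the study.
df = pd.read_csv("chain_scores.csv")  # columns: forcing, drp_map, setup, allocation, score

# Linear model in the four categorical factors; a Type-II ANOVA then
# partitions the variance of the score among them.
model = ols("score ~ C(forcing) + C(drp_map) + C(setup) + C(allocation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```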


2021 ◽  
Vol 18 (3) ◽  
pp. 271-289
Author(s):  
Evgeniia Bulycheva ◽  
Sergey Yanchenko

Harmonic contributions of the utility and the customer may vary significantly due to network switchings and changing operational modes. In order to correctly determine the impacts on grid voltage distortion, the frequency-dependent impedance characteristic of the studied network should be accurately measured in real time. This condition can be fulfilled by designing a stimuli generator that measures the grid impedance as the response to an injected interference and produces time-frequency plots of harmonic contributions during the considered time interval. In this paper, a prototype of a stimuli generator based on a programmable voltage source inverter is developed and tested. The use of a ternary pulse sequence allows fast wide-band impedance measurements that meet the requirements of real-time assessment of harmonic contributions. The accuracy of the respective analysis, involving impedance determination and calculation of harmonic contributions, is validated experimentally using reference characteristics of a laboratory test set-up with varying grid impedance.
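The underlying measurement principle is simple: while the stimuli generator injects a wide-band disturbance, the grid impedance at each frequency is the ratio of the measured voltage and current responses. A minimal numpy sketch (our illustration, assuming the injected component has already been separated from the background):

```python
import numpy as np

def grid_impedance(v, i, fs, f_max=2500.0):
    """Frequency-dependent grid impedance Z(f) = V(f) / I(f) from voltage
    and current sampled at fs [Hz] during injection of a wide-band (e.g.
    ternary) pulse sequence. Returns frequencies and complex impedance."""
    n = len(v)
    w = np.hanning(n)                      # taper to limit spectral leakage
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    V, I = np.fft.rfft(v * w), np.fft.rfft(i * w)
    ok = (f > 0) & (f <= f_max) & (np.abs(I) > 1e-9)  # skip near-zero current bins
    return f[ok], V[ok] / I[ok]
```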


Author(s):  
Ali Najim Abdullah ◽  
Ahmed Majeed Ghadhban ◽  
Hayder Salim Hameed ◽  
Husham Idan Hussein

This paper proposes a steady-state set-up of the Static Var Compensator (SVC) and the Thyristor Controlled Series Capacitor (TCSC) for enhancing the overall damping performance and increasing the critical clearing time (CCT) of a power network. The critical clearing time is determined by increasing the fault time interval until the system loses stability. Increasing the CCT can contribute to the reliability of the protection system and reduce the protection system rating and cost. Maximum enhancement of system stability is sought by optimizing the location, sizing and control modes of the SVC and TCSC. Models and a methodology for placing and designing the shunt FACTS device SVC (injected reactive power Q) and the series FACTS device TCSC (chosen capacitive region) are examined in a 6-bus system. Performance factors are defined to validate the SVC and TCSC under different conditions. It is shown that the SVC performs better than the TCSC.
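The CCT search described above (lengthen the fault until stability is lost) is naturally a bisection over the clearing time. A minimal sketch, where is_stable stands in for a transient-stability simulation and is our placeholder, not part of the paper:

```python
def critical_clearing_time(is_stable, t_lo=0.0, t_hi=1.0, tol=1e-3):
    """Bisection for the critical clearing time: the longest fault
    duration (in seconds) for which the network remains stable."""
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if is_stable(t_mid):
            t_lo = t_mid   # still stable: the CCT is at least t_mid
        else:
            t_hi = t_mid   # unstable: the CCT lies below t_mid
    return t_lo
```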


Author(s):  
Masood Dehghani

Introduction: The only treatment option for end-stage liver disease is liver transplantation. Afzalipour Hospital in Kerman, Iran, is the third largest liver transplantation center in Iran. In this study, the outcomes of this center over the past 5 years have been studied. Methods: In this cross-sectional study, the pre- and post-transplantation clinical, demographic and outcome data of all patients who received a liver transplant at Afzalipour Hospital during the past 5 years were collected and reviewed. SPSS software ver. 16 was used to analyze the data. Results: Forty-three patients received liver transplantation during this time interval. The 3-year survival rate of patients was 77%. The most common cause of death was primary nonfunction of the graft after transplantation. The most common complication was acute rejection (15%), all cases of which were successfully treated with corticosteroids. Conclusion: Given the increasing number of cases of acute and chronic liver failure in the community, and since the definitive treatment of these cases is liver transplantation, there is a need to develop liver transplant centers in the future. A quantitative and qualitative study of the activity of liver transplant centers in Iran is necessary to set up successful centers.

