Near-Real-Time Loss Estimates for Future Italian Earthquakes Based on the M6.9 Irpinia Example

Geosciences ◽  
2020 ◽  
Vol 10 (5) ◽  
pp. 165
Author(s):  
Max Wyss ◽  
Philippe Rosset

The numbers of fatalities and injured were calculated using the computer code QLARM and its data set, assuming that information about the 1980 Irpinia earthquake had become available in near-real-time. The casualties calculated for a point source, an approximate line source and a well-defined line source would have become available about 30 min, 60 min and years after the main shock, respectively. The first estimate would have been satisfactory, indicating the seriousness of the disaster; the subsequent estimate after 60 min would have defined the human losses accurately; and the ultimate estimate was the most accurate. In 2009, QLARM issued a correct estimate of the number of fatalities within 22 min of the M6.3 L’Aquila main shock. Together, these results show that the numbers of casualties and injuries in large and major earthquakes in Italy can be estimated correctly in less than an hour using QLARM.

Author(s):  
D Spallarossa ◽  
M Cattaneo ◽  
D Scafidi ◽  
M Michele ◽  
L Chiaraluce ◽  
...  

Summary The 2016–17 central Italy earthquake sequence began with the first mainshock near the town of Amatrice on August 24 (MW 6.0), and was followed by two subsequent large events near Visso on October 26 (MW 5.9) and Norcia on October 30 (MW 6.5), plus a cluster of four events with MW > 5.0 within a few hours on January 18, 2017. The affected area had been monitored before the sequence started by the permanent Italian National Seismic Network (RSNC), and was enhanced during the sequence by temporary stations deployed by the National Institute of Geophysics and Volcanology and the British Geological Survey. By the middle of September, there was a dense network of 155 stations, with a mean separation in the epicentral area of 6–10 km, comparable to the most likely earthquake depth range in the region. This network configuration was kept stable for an entire year, producing 2.5 TB of continuous waveform recordings. Here we describe how these data were used to develop a large and comprehensive earthquake catalogue using the Complete Automatic Seismic Processor (CASP) procedure. This procedure detected more than 450,000 events in the year following the first mainshock, and determined their phase arrival times through an advanced picker engine (RSNI-Picker2), producing a set of about 7 million P- and 10 million S-wave arrival times. These were then used to locate the events using a non-linear location (NLL) algorithm, a 1D velocity model calibrated for the area, and station corrections, and then to compute their local magnitudes (ML). The procedure was validated by comparing the derived phase picks and earthquake parameters with a handpicked reference catalogue (hereinafter referred to as ‘RefCat’). The automated procedure takes less than 12 hours on an Intel Core-i7 workstation to analyse the primary waveform data and to detect and locate 3000 events on the most seismically active day of the sequence.
This proves the concept that the CASP algorithm can effectively provide real-time data for input into daily operational earthquake forecasts. The results show significant improvements over RefCat, which was obtained for the same period using manual phase picks. The number of detected and located events is higher (450,000 versus 84,401), the magnitude of completeness is lower (ML 0.6 versus 1.4), and the number of phase picks is greater, with an average of 72 picked arrivals for an ML = 1.4 event compared with 30 phases for RefCat using manual phase picking. These propagate into formal uncertainties of ±0.9 km in epicentral location and ±1.5 km in depth for the vast majority of events in the enhanced catalogue. Together, these provide a significant improvement in the resolution of fine structures such as local planar structures and clusters, and in particular the identification of shallow events occurring in parts of the crust previously thought to be inactive. The lower completeness magnitude provides a rich data set for developing and testing techniques for analysing the evolution of seismic sequences, including real-time operational monitoring of the b-value, time-dependent hazard evaluation and aftershock forecasting.
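The real-time b-value monitoring mentioned above is commonly based on the maximum-likelihood estimator of Aki (1965) with Utsu's half-bin correction for binned magnitudes. A minimal sketch, with a hypothetical catalogue slice and the completeness magnitude ML 0.6 reported for the enhanced catalogue:

```python
import math

def b_value_mle(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965) with Utsu's
    half-bin correction for magnitudes binned at width dm."""
    # keep only events at or above the completeness magnitude
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Hypothetical magnitudes; a real computation would use a
# sliding window over the catalogue
mags = [0.6, 0.8, 1.0, 1.2, 1.4, 1.6]
b = b_value_mle(mags, mc=0.6)
```

The estimator depends only on the mean magnitude above Mc, which is why a lower completeness magnitude (more small events) stabilises time-dependent b-value estimates.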


2021 ◽  
Vol 9 (1) ◽  
pp. 232596712097399
Author(s):  
Markus Geßlein ◽  
Johannes Rüther ◽  
Michael Millrose ◽  
Hermann Josef Bail ◽  
Robin Martin ◽  
...  

Background: Hand and wrist injuries are a common but underestimated issue in taekwondo. Detailed data on injury risk, patterns, and mechanisms are missing. Purpose: To evaluate (1) the fight time exposure-adjusted injury incidence rate (IIR) and clinical incidence and (2) injury site, type, sport-specific mechanism, and time loss in taekwondo. Study Design: Descriptive epidemiology study. Methods: Athletes from a single national Olympic taekwondo training center were investigated prospectively for hand and wrist injuries during training and competition over 5 years. The Orchard Sports Injury Classification System Version 10 was used to classify injury type, and analysis of the anatomic injury site was performed. The mechanism of injury was classified as due to either striking or blocking techniques. Results: From a total of 107 athletes, 79 athletes (73.8%) with a total exposure time of 8495 hours were included in the final data set. During the study period, 75 injuries of the hand and wrist region were recorded despite the athletes using protective hand gear. The IIR was 13.9 (95% CI, 10.5-17.5) and was significantly higher during competition. The clinical incidence as an indicator for risk of injury was 60.7% (95% CI, 50.9-70.5). Finger rays were the most affected location (68%), and fractures (43%) and joint ligament injuries (35%) were the most common types of injury. Significantly more injuries were found on the dominant hand side (P < .001). Comparison of injury mechanisms demonstrated significantly more injuries at the finger rays deriving from blocking techniques (P = .0104). The mean time loss for all hand and wrist injuries was 15.7 ± 13.5 days (range, 3-45 days) and was highest for distal radial fractures, with a mean of 39.7 ± 4.8 days (range, 32-45 days).
Conclusion: There was a significantly higher IIR for acute hand and wrist injuries in elite taekwondo athletes during competition, which resulted in considerable time loss, especially when fractures or dislocations occurred. Significantly more injuries to the finger rays were found during blocking despite the use of protective hand gear. Improvement of tactical skills and blocking techniques during training and improved protective gear appear to be essential for injury prevention.
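Exposure-adjusted incidence rates of this kind are conventionally expressed as injuries per 1000 hours of exposure, with an approximate Poisson confidence interval. A sketch of the calculation with hypothetical numbers (not the study's actual competition/training exposure split):

```python
import math

def injury_incidence_rate(n_injuries, exposure_hours):
    """Injuries per 1000 exposure hours with an approximate
    95% confidence interval from the Poisson standard error."""
    rate = n_injuries / exposure_hours * 1000.0
    half_width = 1.96 * math.sqrt(n_injuries) / exposure_hours * 1000.0
    return rate, (rate - half_width, rate + half_width)

# Hypothetical: 12 injuries over 1000 hours of fight exposure
rate, (lo, hi) = injury_incidence_rate(12, 1000.0)
```

The interval width scales with the square root of the injury count, which is why rare-injury studies report wide confidence intervals like the 10.5-17.5 range above.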


2021 ◽  
pp. 1-11
Author(s):  
Tingting Zhao ◽  
Xiaoli Yi ◽  
Zhiyong Zeng ◽  
Tao Feng

YTNR (Yunnan Tongbiguan Nature Reserve) is located in the westernmost part of China’s tropical regions and is the only area in China with the tropical biota of the Irrawaddy River system. The reserve has abundant tropical flora and fauna resources. To realize real-time detection of wild animals in this area, this paper proposes an improved YOLO (You Only Look Once) network. The original YOLO model achieves high detection accuracy, but its complex structure prevents fast detection on a CPU platform. Therefore, the lightweight MobileNet network is introduced to replace the backbone feature-extraction network in YOLO, enabling real-time detection on the CPU. Because wild-animal image data are difficult to collect, the research team deployed 50 high-definition cameras in the study area and conducted continuous observations for more than 1,000 hours. In the end, this research uses 1,410 images of wildlife collected in the field and 1,577 wildlife images from the internet, combined with manual annotation by domain experts, to construct a research data set. Transfer learning is also introduced to address the problems of insufficient training data and difficulty fitting the network. The experimental results show that our model, trained on a set of 2,419 animal images, achieves a mean average precision of 93.6% and 3.8 frames per second (FPS) on the CPU. Compared with YOLO, the mean average precision is increased by 7.7% and the FPS by 3.
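The mean average precision (mAP) metric reported above is the mean over classes of the per-class average precision (AP), the area under the interpolated precision-recall curve of confidence-ranked detections. A minimal sketch of AP for a single class, with hypothetical detections:

```python
def average_precision(detections, n_ground_truth):
    """AP for one class. detections: (confidence, is_true_positive)
    pairs; AP is the area under the interpolated P-R curve."""
    dets = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    points = []  # (recall, precision) after each ranked detection
    for _, is_tp in dets:
        if is_tp:
            tp += 1
        else:
            fp += 1
        points.append((tp / n_ground_truth, tp / (tp + fp)))
    ap, prev_recall = 0.0, 0.0
    for i, (recall, _) in enumerate(points):
        # interpolate precision as the max at recall >= current
        p_interp = max(p for _, p in points[i:])
        ap += (recall - prev_recall) * p_interp
        prev_recall = recall
    return ap

# Hypothetical: 3 ground-truth animals, 4 ranked detections
ap = average_precision([(0.9, True), (0.8, False), (0.7, True), (0.6, True)], 3)
```

mAP is then simply the mean of this quantity over all animal classes in the data set.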


2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Jian-ye Yuan ◽  
Xin-yuan Nan ◽  
Cheng-rong Li ◽  
Le-le Sun

Given the urgency of garbage classification, a 23-layer convolutional neural network (CNN) model is designed in this paper, with an emphasis on real-time garbage classification, to address the low accuracy of garbage classification and recycling and the difficulty of manual recycling. Firstly, depthwise separable convolution was used to reduce the parameters of the model. Then, an attention mechanism was used to improve the accuracy of the garbage classification model. Finally, model fine-tuning was used to further improve its performance. We compared the model with classic image classification models, including AlexNet, VGG16, and ResNet18, and lightweight classification models, including MobileNetV2 and ShuffleNetV2, and found that our model, GAF_dense, has a higher accuracy rate and fewer parameters and FLOPs. To further check its performance, we tested the model on the CIFAR-10 data set and found its accuracy rates are 0.018 and 0.03 higher than those of ResNet18 and ShuffleNetV2, respectively. On the ImageNet data set, its accuracy rates are 0.225 and 0.146 higher than those of ResNet18 and ShuffleNetV2, respectively. Therefore, the garbage classification model proposed in this paper is suitable for garbage classification and other classification tasks that protect the ecological environment, and can be applied in areas such as environmental science, children’s education, and environmental protection.
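The parameter reduction from depthwise separable convolution can be seen directly from the weight counts: a standard k x k convolution couples every input channel to every output channel, while the separable version splits this into a per-channel depthwise filter plus a 1 x 1 pointwise mixing step. A quick sketch of the arithmetic (layer sizes hypothetical):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then a
    1 x 1 pointwise convolution to mix channels."""
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3 x 3 kernel, 128 input and output channels
standard = conv_params(3, 128, 128)                   # 147456
separable = depthwise_separable_params(3, 128, 128)   # 17536
reduction = standard / separable                      # ~8.4x fewer weights
```

For a 3 x 3 kernel the saving approaches a factor of 9 as the channel count grows, which is the main source of the "fewer parameters and FLOPs" claim.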


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1421
Author(s):  
Chih-Chiang Wei ◽  
Chen-Chia Hsu

This study developed a real-time rainfall forecasting system that can predict rainfall in a particular area a few hours before a typhoon’s arrival. The reflectivity of nine elevation angles obtained from the volume coverage pattern 21 Doppler radar scanning strategy and ground-weather data of a specific area were used for accurate rainfall prediction. During rainfall prediction and analysis, rainfall retrievals were first performed to select the optimal radar scanning elevation angle for rainfall prediction at the current time. Subsequently, forecasting models were established using a single reflectivity and all elevation angles (10 prediction submodels in total) to jointly predict real-time rainfall and determine the optimal predicted values. This study was conducted in southeastern Taiwan and included three onshore weather stations (Chenggong, Taitung, and Dawu) and one offshore weather station (Lanyu). Radar reflectivities were collected from the Hualien weather surveillance radar. Data for a total of 14 typhoons that affected the study area in 2008–2017 were collected. The gated recurrent unit (GRU) neural network was used to establish the forecasting model, and extreme gradient boosting and multiple linear regression were used as the benchmarks. Typhoons Nepartak, Meranti, and Megi were selected for simulation. The results revealed that the input data set merging weather-station data with radar reflectivity at the optimal elevation angle yielded the best short-term rainfall forecasts. Moreover, the GRU neural network can obtain accurate predictions 1, 3, and 6 h before typhoon occurrence.
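The GRU architecture used above maintains a hidden state updated through reset and update gates (Cho et al., 2014). A minimal one-unit sketch of a single GRU step, with hypothetical scalar weights standing in for the learned matrices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One scalar GRU step: update gate z decides how much of
    the candidate state replaces the old state; reset gate r
    controls how much history enters the candidate."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])   # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])   # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h) + w["bh"])
    return (1.0 - z) * h + z * h_cand

# Hypothetical weights; a trained model learns these per hidden unit
w = {"wz": 0.5, "uz": 0.5, "bz": 0.0,
     "wr": 0.5, "ur": 0.5, "br": 0.0,
     "wh": 1.0, "uh": 1.0, "bh": 0.0}
h = 0.0
for x in [0.2, 0.4, 0.6]:   # e.g. successive reflectivity inputs
    h = gru_step(x, h, w)
```

In the forecasting system, sequences of radar reflectivity and station observations would be fed through many such units, with a final layer mapping the hidden state to predicted rainfall.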


2021 ◽  
Author(s):  
Ahmed Al-Sabaa ◽  
Hany Gamal ◽  
Salaheldin Elkatatny

Abstract The formation porosity of drilled rock is an important parameter that determines the formation storage capacity. The common industrial technique for acquiring rock porosity is the downhole logging tool. Usually, logging while drilling or wireline porosity logging provides a complete porosity log for the section of interest; however, operational constraints, in addition to job cost, might preclude the logging job. The objective of this study is to provide an intelligent model to predict porosity from drilling parameters. An artificial neural network (ANN), a tool of artificial intelligence (AI), was employed in this study to build the porosity prediction model based on drilling parameters such as weight on bit (WOB), drill-string rotation speed (RS), drilling torque (T), standpipe pressure (SPP), and mud pumping rate (Q). The novel contribution of this study is a rock porosity model for complex-lithology formations that uses drilling parameters in real-time. The model was built using 2,700 data points from Well (A) with a 74:26 training-to-testing ratio. Many sensitivity analyses were performed to optimize the ANN model. The model was validated using an unseen data set (1,000 data points) from Well (B), which is located in the same field and drilled across the same complex lithology. The results showed high model performance for the training, testing, and validation processes. The overall accuracy of the model was determined in terms of the correlation coefficient (R) and average absolute percentage error (AAPE). Overall, R was higher than 0.91 and AAPE was less than 6.1% for model building and validation. Predicting rock porosity while drilling in real-time will save logging costs and, in addition, provide a guide for formation storage capacity and interpretation analysis.
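The two accuracy metrics quoted above have standard definitions: R is the Pearson correlation between measured and predicted values, and AAPE is the mean absolute relative error expressed in percent. A sketch with hypothetical porosity values:

```python
import math

def correlation_coefficient(actual, predicted):
    """Pearson R between measured and predicted values."""
    n = len(actual)
    ma = sum(actual) / n
    mp = sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (sa * sp)

def aape(actual, predicted):
    """Average absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

# Hypothetical porosity values (pore-volume fraction)
actual = [0.10, 0.15, 0.20, 0.25]
predicted = [0.11, 0.14, 0.21, 0.24]
r = correlation_coefficient(actual, predicted)
err = aape(actual, predicted)
```

Reporting both is common in this literature because R measures trend agreement while AAPE measures pointwise magnitude error; a model can score well on one and poorly on the other.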


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajit Nair ◽  
Santosh Vishwakarma ◽  
Mukesh Soni ◽  
Tejas Patel ◽  
Shubham Joshi

Purpose The novel coronavirus (COVID-19), which first appeared in December 2019 in the city of Wuhan, China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily lives, public health and the global economy. Positive cases must be identified as soon as possible to avoid further dissemination of the disease and to provide swift care to affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. The latest results from radiology imaging techniques indicate that these images provide valuable details on the COVID-19 virus. Advanced artificial intelligence (AI) technologies combined with radiological imagery can help diagnose this condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a new model for automatic detection of COVID-19 from raw chest X-ray images is presented. The proposed model, DarkCovidNet, is designed to provide accurate diagnostics for binary classification (COVID vs. no findings) and multi-class classification (COVID vs. no findings vs. pneumonia). The implemented model achieved an average precision of 98.46% and 91.352% for binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model, the classifier used in the "you only look once" (YOLO) real-time object detection method, was used as the basis of this research. A total of 17 convolutional layers were implemented, with different filters on each layer. This platform can be used by radiologists to verify their initial screening and can also be used for screening patients through the cloud. Design/methodology/approach This study builds on the CNN-based Darknet-19 model, which acts as the platform for a real-time object detection system; its architecture is designed to detect objects in real time.
This study developed the DarkCovidNet model based on the Darknet architecture, with fewer layers and filters. Before discussing the DarkCovidNet model, consider the Darknet architecture and its functionality: it typically consists of 19 convolution layers and 5 max-pooling layers. Findings The work discussed in this paper is used to diagnose various radiology images and to develop a model that can accurately predict or classify the disease. The data set used in this work comprises COVID-19 and non-COVID-19 images taken from various sources. The deep learning model DarkCovidNet was applied to this data set and showed significant performance in both binary and multi-class classification. The model achieved an average accuracy of 98.97% for the binary detection of COVID-19, whereas the multi-class model achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia. Research limitations/implications One significant limitation of this work is that a limited number of chest X-ray images was used, while the number of COVID-19 patients is increasing rapidly. In the future, the model will be implemented on a larger data set generated from local hospitals, and its performance will be re-evaluated. Originality/value Deep learning technology has made significant changes in the field of AI by generating good results, especially in pattern recognition. A conventional CNN structure includes a convolution layer that extracts features from the input using the filters it applies, a pooling layer that reduces the feature-map size for computational efficiency and a fully connected neural-network layer.
A CNN model is created by combining one or more of these layers, and its internal parameters are adjusted to accomplish a particular task, such as classification or object recognition.
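In a Darknet-19-style stack of this kind, each stride-2 max-pooling stage halves the side of the feature map while the padded convolutions preserve it. A quick sketch of that spatial arithmetic, with a hypothetical 224 x 224 input (the actual input resolution is not stated here):

```python
def feature_map_sizes(input_size, n_pools):
    """Side length of the feature map after each 2 x 2,
    stride-2 max-pooling stage; the convolutions between
    pools are assumed padded to preserve spatial size."""
    sizes = [input_size]
    for _ in range(n_pools):
        sizes.append(sizes[-1] // 2)
    return sizes

# Hypothetical 224 x 224 chest X-ray through 5 pooling stages
sizes = feature_map_sizes(224, 5)   # [224, 112, 56, 28, 14, 7]
```

This is why five pooling layers suffice to reduce a typical input image to a small grid on which the final classification layers operate.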


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose Current popular image processing technologies based on convolutional neural networks have large computation and storage costs and low accuracy for tiny-defect detection, which conflicts with the high real-time performance and accuracy, and the limited computing and storage resources, required by industrial applications. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to solve these problems. Design/methodology/approach On the one hand, this study performs multi-dimensional compression of the YOLOv4 feature-extraction network to simplify the model, and improves the model’s feature-extraction ability through knowledge distillation. On the other hand, a prediction scale with a finer receptive field is added to optimize the model structure, which improves detection performance for tiny defects. Findings The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, and on a steel-ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method greatly improves recognition efficiency and accuracy while reducing the size and computation consumption of the model. Originality/value This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface-defect detection; it is suited to industrial scenarios with limited storage and computing resources and meets requirements for real-time performance and precision.
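The knowledge distillation mentioned above typically follows Hinton's formulation: the compressed student network is trained on a blend of the hard cross-entropy with the true label and a soft cross-entropy against the teacher's temperature-softened output distribution. A minimal sketch (logits, temperature and blend weight are hypothetical, and the abstract does not specify the exact loss used):

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx,
                      temperature=4.0, alpha=0.5):
    """Blend hard cross-entropy with the true label and soft
    cross-entropy against the teacher's softened distribution."""
    hard = -math.log(softmax(student_logits)[true_idx])
    t_soft = softmax(teacher_logits, temperature)
    s_soft = softmax(student_logits, temperature)
    soft = -sum(t * math.log(s) for t, s in zip(t_soft, s_soft))
    # T^2 keeps the soft-term gradient scale comparable
    return alpha * hard + (1 - alpha) * temperature ** 2 * soft

# Hypothetical logits for a 3-class defect problem
loss = distillation_loss([2.0, 0.5, 0.1], [3.0, 1.0, 0.2], true_idx=0)
```

The soft term is what transfers the teacher's "dark knowledge" about inter-class similarity to the smaller student, letting the compressed model retain feature-extraction ability.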


2021 ◽  
Author(s):  
Temirlan Zhekenov ◽  
Artem Nechaev ◽  
Kamilla Chettykbayeva ◽  
Alexey Zinovyev ◽  
German Sardarov ◽  
...  

SUMMARY Researchers base their analysis on basic drilling parameters obtained during mud logging and demonstrate impressive results. However, due to the data-quality limitations often present during drilling, those solutions tend to lose their stability and high predictivity. In this work, the concept of hybrid modeling is introduced, which makes it possible to integrate analytical correlations with machine learning algorithms to obtain stable solutions that are consistent from one data set to another.
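One common form of such hybrid modeling is a residual scheme: an analytical correlation supplies a physics-based baseline, and a (deliberately simple) learned model corrects its systematic error. A sketch of the idea with a hypothetical correlation and data; the actual correlations and algorithms used in the study are not specified here:

```python
def fit_residual_line(xs, residuals):
    """Least-squares line through (x, residual) pairs: the
    learned part, kept to a one-feature linear model."""
    n = len(xs)
    mx = sum(xs) / n
    mr = sum(residuals) / n
    slope = sum((x - mx) * (r - mr) for x, r in zip(xs, residuals)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, mr - slope * mx

def hybrid_model(analytical, xs, ys):
    """Analytical baseline plus a learned residual correction."""
    residuals = [y - analytical(x) for x, y in zip(xs, ys)]
    slope, intercept = fit_residual_line(xs, residuals)
    return lambda x: analytical(x) + slope * x + intercept

# Hypothetical stand-in correlation and measurements that sit
# a constant 0.5 above it (true relation: 2x + 0.5)
analytical = lambda x: 2.0 * x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.5, 4.5, 6.5, 8.5]
predict = hybrid_model(analytical, xs, ys)
```

Because the learned component only models the residual, the hybrid degrades gracefully on noisy or out-of-distribution data: it falls back toward the analytical baseline rather than extrapolating wildly, which is one route to the cross-data-set stability the summary claims.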


1989 ◽  
Vol 79 (2) ◽  
pp. 493-499
Author(s):  
Stuart A. Sipkin

Abstract The teleseismic long-period waveforms recorded by the Global Digital Seismograph Network from the two largest Superstition Hills earthquakes are inverted using an algorithm based on optimal filter theory. These solutions differ slightly from those published in the Preliminary Determination of Epicenters Monthly Listing because a somewhat different, improved data set was used in the inversions and a time-dependent moment-tensor algorithm was used to investigate the complexity of the main shock. The foreshock (origin time 01:54:14.5, mb 5.7, Ms 6.2) had a scalar moment of 2.3 × 1025 dyne-cm, a depth of 8 km, and a mechanism of strike 217°, dip 79°, rake 4°. The main shock (origin time 13:15:56.4, mb 6.0, Ms 6.6) was a complex event, consisting of at least two subevents, with a combined scalar moment of 1.0 × 1026 dyne-cm, a depth of 10 km, and a mechanism of strike 303°, dip 89°, rake −180°.
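Scalar moments like those above are conventionally converted to moment magnitude via the Hanks and Kanamori (1979) relation, Mw = (2/3)(log10 M0 − 16.1) with M0 in dyne-cm; for the reported moments this yields values close to the listed surface-wave magnitudes:

```python
import math

def moment_magnitude(m0_dyne_cm):
    """Hanks & Kanamori (1979) moment magnitude; M0 in dyne-cm."""
    return (2.0 / 3.0) * (math.log10(m0_dyne_cm) - 16.1)

mw_foreshock = moment_magnitude(2.3e25)   # ~6.17, vs. Ms 6.2
mw_mainshock = moment_magnitude(1.0e26)   # 6.60, vs. Ms 6.6
```

The close agreement between Mw and Ms here is expected for moderate shallow events, where surface-wave magnitude has not yet saturated.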

