Probabilistic assessment of tephra fall hazards in Japan using a tephra fall distribution database

2020 ◽  
Author(s):  
Shimpei Uesawa ◽  
Kiyoshi Toshida ◽  
Shingo Takeuchi ◽  
Daisuke Miura

Abstract Tephra falls can disrupt critical infrastructure, including transportation and electricity networks. Probabilistic assessments of tephra fall hazards have been performed using computational techniques, but it is also important to integrate long-term, regional geological records. To assess tephra fall load hazards in Japan, we re-digitized an existing database of 551 tephra distribution maps. We used the re-digitized datasets to produce hazard curves for a range of tephra loads for various localities. We calculated annual exceedance probabilities (AEPs) and constructed hazard curves from the most complete part of the geological record. We used records of tephra fall events with a Volcanic Explosivity Index (VEI) of 4–7 (based on survivor functions) that occurred over the last 150 ka, as the database contains a very high percentage (around 90%) of VEI 4–7 events for this period. We fitted the data for this period using a Poisson distribution function. Hazard curves were constructed for the tephra fall load at 47 prefectural offices throughout Japan, and four broad regions were defined (NE–W, NE–E, W, and SW Japan). AEPs were relatively high, exceeding 1 × 10⁻⁴ for loads greater than 0 kg/m² on the eastern (down-wind) side of the volcanic front in the NE–E region. In much of the W and SW regions, maximum loads were heavier, but AEPs were lower (<10⁻⁴). Tephras from large (VEI ≥ 6) events are the predominant hazard in every region. A parametric analysis was applied to investigate regional variability using AEP diagrams and slope shape parameters via curve fitting with exponential and double-exponential decay functions. Two major differences were recognized between the hazard curves from borehole data and those from the digitized tephra database. The first is a significant underestimation of AEP for frequent events using the tephra database, by one to two orders of magnitude. This is explained in terms of the lack of records for smaller tephra fall events in the database. The second is an overestimation of the heaviest tephra load events, which differ by a factor of two to three. This difference might be due to the tephra fall distribution contour interpolation methodology used to generate the original database. The hazard curve for Tokyo developed in this study differs from those that have been generated previously using computational techniques. For the Tokyo region, the probabilities and tephra loads produced by computational methods are at least one order of magnitude greater than those generated during the present study. These discrepancies are inferred to have been caused by initial parameter settings in the computational simulations, including their incorporation of large-scale eruptions of up to VEI = 7 for all large stratovolcanoes, regardless of their eruptive histories. To improve the precision of the digital database, we plan to incorporate recent (since 2003) tephra distributions, revise questionable isopach maps, and develop an improved interpolation method for digitizing tephra fall distributions.
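As a rough illustration of the hazard-curve calculation described above, the following sketch derives annual exceedance probabilities from a catalog of tephra loads at a single site under a Poisson occurrence assumption; the catalog, record length, and load thresholds are hypothetical and only stand in for the database records used in the study.

```python
import numpy as np

def aep_curve(loads_kg_m2, record_length_yr, thresholds):
    """Annual exceedance probabilities (AEPs) for a set of tephra load thresholds.

    Assumes tephra falls exceeding each threshold arrive as a stationary
    Poisson process, so the probability of at least one exceedance in a
    year is 1 - exp(-annual rate).
    """
    loads = np.asarray(loads_kg_m2, dtype=float)
    aeps = []
    for threshold in thresholds:
        n_exceed = np.sum(loads >= threshold)    # events reaching this load in the record
        rate = n_exceed / record_length_yr       # mean annual rate of such events
        aeps.append(1.0 - np.exp(-rate))         # Poisson annual exceedance probability
    return np.array(aeps)

# Hypothetical catalog: loads (kg/m^2) recorded at one site over the last 150 kyr
catalog = [0.5, 2.0, 8.0, 30.0, 120.0]
print(aep_curve(catalog, record_length_yr=150_000, thresholds=[0.1, 1.0, 10.0, 100.0]))
```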

2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Abstract Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second- and third-generation whole-genome sequencing technologies. However, genotyping these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to specific types of variants and are generally prone to various errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation, one that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, uses changes in k-mer counts to predict the genotype of structural variants. We show that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants, but also has accuracy comparable to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
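The abstract describes genotyping from changes in k-mer counts; the toy sketch below illustrates that general idea (counting variant-informative k-mers in reads and calling a genotype from the depth-normalized count). It is not Nebula's actual algorithm, and the k-mer length, thresholds, and function names are assumptions.

```python
from collections import Counter

def count_kmers(reads, informative_kmers, k=32):
    """Count occurrences of variant-informative k-mers in raw sequencing reads."""
    targets = set(informative_kmers)
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if kmer in targets:
                counts[kmer] += 1
    return counts

def call_genotype(kmer_count, mean_coverage):
    """Toy genotyping rule: compare the k-mer count, normalized by sequencing
    depth, with the counts expected for 0, 1 or 2 copies of the variant allele."""
    ratio = kmer_count / mean_coverage
    if ratio < 0.25:
        return "0/0"   # k-mer essentially absent -> homozygous reference
    elif ratio < 0.75:
        return "0/1"   # roughly half depth -> heterozygous
    return "1/1"       # full depth -> homozygous alternate
```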


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 1009
Author(s):  
Ilaria De Santis ◽  
Michele Zanoni ◽  
Chiara Arienti ◽  
Alessandro Bevilacqua ◽  
Anna Tesei

Subcellular spatial location is an essential descriptor of a molecule's biological function. Super-resolution microscopy techniques now enable the quantification of subcellular object distributions in fluorescence images, but they rely on instrumentation, tools, and expertise that are not standard in most laboratories. We propose a method that resolves the location of subcellular structures by reinforcing each pixel position with information from its surroundings. Although designed for entry-level laboratory equipment with common resolving power, our method is independent of imaging device resolution and can therefore also benefit super-resolution microscopy. The approach generates density distribution maps (DDMs) that are informative of both the absolute location of objects and their relative displacement, thereby reducing location uncertainty and increasing the accuracy of signal mapping. This work demonstrates the capability of DDMs to: (a) improve the informativeness of spatial distributions; (b) empower the analysis of subcellular molecule distributions; and (c) extend their applicability beyond mere spatial object mapping. Finally, the possibility of enhancing, or even disclosing, latent distributions can speed up routine, large-scale, and follow-up experiments, besides benefiting all spatial distribution studies independently of the image acquisition resolution. DDMaker, a software tool with a user-friendly graphical user interface (GUI), is also provided to support users in creating DDMs.
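The abstract does not spell out how a density distribution map is computed; one minimal reading, sketched below, is a local density estimate that spreads each detected object over its pixel neighborhood. The Gaussian kernel width and normalization are assumptions, not DDMaker's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_distribution_map(object_mask, sigma_px=5.0):
    """Turn a binary map of detected subcellular objects into a smooth density
    map by reinforcing each pixel with information from its neighborhood."""
    density = gaussian_filter(object_mask.astype(float), sigma=sigma_px)
    if density.max() > 0:
        density /= density.max()      # normalize to [0, 1] for comparison across cells
    return density

# Usage: object_mask is a 2D boolean array marking segmented fluorescent spots
mask = np.zeros((256, 256), dtype=bool)
mask[100, 120] = mask[105, 122] = mask[200, 50] = True
ddm = density_distribution_map(mask, sigma_px=8.0)
```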


Energies ◽  
2019 ◽  
Vol 12 (18) ◽  
pp. 3586 ◽  
Author(s):  
Sizhou Sun ◽  
Jingqi Fu ◽  
Ang Li

Given the large-scale exploitation and utilization of wind power, the highly stochastic and intermittent nature of wind speed has driven researchers to develop more reliable and precise wind power forecasting (WPF) models. To obtain better forecasting accuracy, this study proposes a novel compound WPF strategy based on the optimal integration of four base forecasting engines. In the forecasting process, density-based spatial clustering of applications with noise (DBSCAN) is first employed to identify meaningful information and discard abnormal wind power data. To eliminate the adverse influence of missing data on forecasting accuracy, the Lagrange interpolation method is used to estimate corrected values for the missing points. Then, a two-stage decomposition (TSD) method combining ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT) is used to preprocess the wind power data. In the decomposition process, the wind power data are decomposed into intrinsic mode functions (IMFs) and one residual (Res) by EEMD, and the highest-frequency component, IMF1, is further decomposed by WT. After the input matrix is determined by a partial autocorrelation function (PACF) and normalized into [0, 1], the decomposed components are used as the input variables of the four base forecasting engines, namely least squares support vector machine (LSSVM), wavelet neural network (WNN), extreme learning machine (ELM), and autoregressive integrated moving average (ARIMA), to perform multistep WPF. To avoid local optima and improve forecasting performance, the parameters of LSSVM, ELM, and WNN are tuned by the backtracking search algorithm (BSA). BSA is also employed to optimize the weighting coefficients applied to the individual forecasts produced by the four base engines in order to generate an ensemble forecast. Finally, case studies for a wind farm in China are carried out to assess the proposed forecasting strategy.
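For the final ensemble step, the study tunes the weighting coefficients with BSA; the sketch below illustrates the same weighted-combination idea using a generic constrained least-squares optimizer as a stand-in, with illustrative forecast data.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_ensemble_weights(base_forecasts, target):
    """Find non-negative weights (summing to 1) that minimize the MSE of the
    weighted combination of base forecasts on a validation window.

    base_forecasts: array of shape (n_models, n_samples)
    target:         array of shape (n_samples,)
    """
    n_models = base_forecasts.shape[0]

    def mse(w):
        combined = w @ base_forecasts
        return np.mean((combined - target) ** 2)

    constraints = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}
    bounds = [(0.0, 1.0)] * n_models
    w0 = np.full(n_models, 1.0 / n_models)
    return minimize(mse, w0, bounds=bounds, constraints=constraints).x

# Illustrative forecasts from LSSVM, WNN, ELM and ARIMA on a validation window
preds = np.array([[1.1, 2.0, 2.9], [0.9, 2.1, 3.2], [1.0, 1.8, 3.0], [1.2, 2.2, 2.8]])
actual = np.array([1.0, 2.0, 3.0])
weights = optimize_ensemble_weights(preds, actual)
```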


2016 ◽  
Vol 46 (12) ◽  
pp. 3751-3775 ◽  
Author(s):  
Olivier Arzel ◽  
Alain Colin de Verdière

Abstract The turbulent diapycnal mixing in the ocean is currently obtained from microstructure and finestructure measurements, dye experiments, and inverse models. This study presents a new method that infers the diapycnal mixing from low-resolution numerical calculations of the World Ocean whose temperatures and salinities are restored to the climatology. In contrast to robust general circulation ocean models, diapycnal diffusion is not prescribed but inferred. At steady state, the buoyancy equation shows an equilibrium between the large-scale diapycnal advection and the restoring terms that take the place of the divergence of eddy buoyancy fluxes. The geography of the diapycnal flow reveals a strong regional variability of water mass transformations. Positive values of the diapycnal flow indicate erosion of a deep-water mass and negative values indicate creation. When the diapycnal flow is upward, a diffusion law can be fitted in the vertical and the diapycnal eddy diffusivity is obtained throughout the water column. The basin averages of diapycnal diffusivities are small in the upper 1500 m [O(10⁻⁵) m² s⁻¹] and increase downward to bottom values of about 2.5 × 10⁻⁴ m² s⁻¹ in all ocean basins, with the exception of the Southern Ocean (50°–30°S), where they reach 12 × 10⁻⁴ m² s⁻¹. This study confirms the small diffusivity in the thermocline and the robustness of the higher canonical Munk value in the abyssal ocean. It indicates that the upward dianeutral transport in the Atlantic mostly takes place in the abyss and the upper ocean, supporting the quasi-adiabatic character of the middepth overturning.
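As a schematic of the diffusivity inference, the sketch below assumes a one-dimensional steady advective-diffusive balance, w ∂b/∂z = ∂/∂z(κ ∂b/∂z), with zero diffusive flux at the bottom, and recovers κ(z) by integrating upward; this is a simplified stand-in for the paper's procedure, not its actual code.

```python
import numpy as np

def infer_diffusivity(z, b, w):
    """Schematic 1D estimate of the diapycnal eddy diffusivity kappa(z).

    Assumes the steady balance  w * db/dz = d/dz(kappa * db/dz)  with zero
    diffusive flux at the bottom, so that integrating upward from the bottom gives
    kappa(z) * db/dz(z) = integral from z_bottom to z of (w * db/dz) dz'.

    z: depths ordered from bottom (z[0]) to top; b: buoyancy; w: diapycnal velocity.
    """
    dbdz = np.gradient(b, z)                       # vertical buoyancy gradient
    tendency = w * dbdz                            # advective buoyancy tendency
    # cumulative trapezoidal integral of the tendency, starting at the bottom
    cumulative = np.concatenate(
        ([0.0], np.cumsum(0.5 * (tendency[1:] + tendency[:-1]) * np.diff(z)))
    )
    with np.errstate(divide="ignore", invalid="ignore"):
        kappa = np.where(np.abs(dbdz) > 1e-12, cumulative / dbdz, np.nan)
    return kappa
```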


Author(s):  
David Mendonça ◽  
William A. Wallace ◽  
Barbara Cutler ◽  
James Brooks

Abstract Large-scale disasters can produce profound disruptions in the fabric of interdependent critical infrastructure systems such as water, telecommunications and electric power. The work of post-disaster infrastructure restoration typically requires information sharing and close collaboration across these sectors; yet – due to a number of factors – the means to investigate decision-making phenomena associated with these activities are limited. This paper motivates and describes the design and implementation of a computer-based synthetic environment for investigating collaborative information seeking in the performance of a (simulated) infrastructure restoration task. The main contributions of this work are twofold. First, it develops a set of theoretically grounded measures of collaborative information seeking processes and embeds them within a computer-based system. Second, it suggests how these data may be organized and modeled to yield insights into information seeking processes in the performance of a complex, collaborative task. The paper concludes with a discussion of implications of this work for practice and for future research.


2021 ◽  
Vol 15 (3) ◽  
pp. 1-31
Author(s):  
Haida Zhang ◽  
Zengfeng Huang ◽  
Xuemin Lin ◽  
Zhe Lin ◽  
Wenjie Zhang ◽  
...  

Driven by many real applications, we study the problem of seeded graph matching. Given two graphs G1 and G2 and a small set S of pre-matched node pairs (u, v), where u is a node of G1 and v is a node of G2, the problem is to identify a matching between G1 and G2, growing from S, such that each pair in the matching corresponds to the same underlying entity. Recent studies on efficient and effective seeded graph matching have drawn a great deal of attention, and many popular methods are largely based on exploring the similarity between local structures to identify matching pairs. While these recent techniques work provably well on random graphs, their accuracy is low over many real networks. In this work, we propose to utilize higher-order neighborhood information to improve the matching accuracy and efficiency. As a result, we propose a new framework for seeded graph matching that employs Personalized PageRank (PPR) to quantify the matching score of each node pair. To further boost the matching accuracy, we propose a novel postponing strategy, which postpones the selection of pairs that have competitors with similar matching scores. We show that the postponing strategy significantly improves the matching accuracy. To improve the scalability of matching large graphs, we also propose efficient approximation techniques based on algorithms for computing PPR heavy hitters. Our comprehensive experimental studies on large-scale real datasets demonstrate that, compared with state-of-the-art approaches, our framework not only increases both precision and recall by a significant margin but also achieves speedups of more than an order of magnitude.
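The sketch below illustrates the PPR-based scoring idea in its simplest form: run Personalized PageRank from the seed nodes of each graph and score cross-graph node pairs from the resulting vectors. It is only a toy variant; the paper's actual scoring, postponing strategy, and heavy-hitter approximations are more involved.

```python
import numpy as np

def personalized_pagerank(adj, seeds, alpha=0.15, iters=50):
    """Power-iteration Personalized PageRank with restart to the seed nodes.

    adj: (n, n) numpy adjacency matrix; seeds: list of node indices.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix; isolated nodes keep all-zero rows
    P = np.divide(adj, deg, out=np.zeros((n, n)), where=deg > 0)
    restart = np.zeros(n)
    restart[seeds] = 1.0 / len(seeds)
    ppr = restart.copy()
    for _ in range(iters):
        ppr = alpha * restart + (1 - alpha) * (P.T @ ppr)
    return ppr

def match_scores(adj1, adj2, seed_pairs):
    """Score cross-graph node pairs by combining the PPR values computed from
    the corresponding seed nodes in each graph (toy scoring rule)."""
    ppr1 = personalized_pagerank(adj1, [u for u, _ in seed_pairs])
    ppr2 = personalized_pagerank(adj2, [v for _, v in seed_pairs])
    return np.outer(ppr1, ppr2)   # entry (u, v) = candidate matching score
```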


Author(s):  
F. Ma ◽  
J. H. Hwang

Abstract In analyzing a nonclassically damped linear system, one common procedure is to neglect the damping terms that are nonclassical and retain the classical ones. This approach is termed the method of approximate decoupling. For large-scale systems, the computational effort of approximate decoupling is at least an order of magnitude smaller than that of the method of complex modes. In this paper, the error introduced by approximate decoupling is evaluated. A tight error bound, which can be computed with relative ease, is given for this method of approximate solution. The role that modal coupling plays in the control of error is clarified. If the normalized damping matrix is strongly diagonally dominant, it is shown that adequate frequency separation is not necessary to ensure small errors.
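A minimal sketch of approximate decoupling for a symmetric system (M, C, K): transform the damping matrix into undamped modal coordinates and discard its off-diagonal (nonclassical) entries so each modal equation can be solved independently. Matrix names and normalization follow common structural-dynamics conventions and are assumptions here, not the paper's notation.

```python
import numpy as np
from scipy.linalg import eigh

def approximately_decouple(M, C, K):
    """Approximate decoupling of a nonclassically damped linear system.

    Solves the undamped eigenproblem K @ phi = lam * M @ phi, transforms the
    damping matrix into modal coordinates, and discards the off-diagonal
    (nonclassical) entries so each modal equation can be solved on its own.
    """
    eigvals, Phi = eigh(K, M)                     # mass-normalized mode shapes
    C_modal = Phi.T @ C @ Phi                     # full (coupled) modal damping matrix
    C_classical = np.diag(np.diag(C_modal))       # approximate decoupling step
    omega = np.sqrt(eigvals)                      # undamped natural frequencies
    zeta = np.diag(C_classical) / (2.0 * omega)   # modal damping ratios
    neglected = C_modal - C_classical             # nonclassical terms dropped by the method
    return omega, zeta, neglected
```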


2021 ◽  
Author(s):  
Kazuki Murata ◽  
Shinji Sassa ◽  
Tomohiro Takagawa ◽  
Toshikazu Ebisuzaki ◽  
Shigenori Maruyama

Abstract We first propose and examine a method for digitizing analog records of submarine topography, focusing on the seafloor survey records available in the literature, to facilitate detailed analysis of submarine landslides and landslide-induced tsunamis. Second, we apply this digitization method to the seafloor topographic changes recorded before and after the 1923 Great Kanto earthquake and tsunami and evaluate its effectiveness. Third, we discuss the coseismic large-scale seafloor deformation in Sagami Bay and at the mouth of Tokyo Bay, Japan. The results confirmed that the latitude/longitude and water depth values recorded by the lead-sounding method can be approximately extracted from the depth coordinates obtained by triangulation survey, by overlaying the currently available GIS map data without geometric correction such as an affine transformation. Further, applying the proposed method to the 1923 Great Kanto earthquake, we obtained mesh data of depth changes in the sea area using interpolation based on the inverse distance weighted (IDW) average method. Finally, we analyzed and compared the submarine topography before and after the 1923 tsunami event with the current seabed topography. Consequently, we found that these large-scale depth changes correspond to the valley lines running down the submarine topography of Sagami Bay and the Tokyo Bay mouth.
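The abstract names IDW averaging for gridding the depth changes; a generic sketch of that interpolation is shown below, with the power parameter and array layout as illustrative assumptions.

```python
import numpy as np

def idw_interpolate(xy_obs, values, xy_grid, power=2.0, eps=1e-12):
    """Inverse distance weighted (IDW) interpolation of scattered depth changes
    onto mesh points.

    xy_obs:  (n, 2) observed sounding positions
    values:  (n,)   depth changes at those positions
    xy_grid: (m, 2) mesh points to interpolate onto
    """
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)  # (m, n) distances
    w = 1.0 / (d ** power + eps)             # inverse-distance weights
    return (w @ values) / w.sum(axis=1)      # weighted average at each mesh point
```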


Author(s):  
Paolo Bergamo ◽  
Conny Hammer ◽  
Donat Fäh

ABSTRACT We address the relation between seismic local amplification and the topographical and geological indicators describing site morphology. We focus on parameters that can be derived from layers of diffuse information (e.g., digital elevation models, geological maps) and do not require in situ surveys; we term these parameters “indirect” proxies, as opposed to “direct” indicators (e.g., f0, VS30) derived from field measurements. We first compiled an extensive database of indirect parameters covering 142 and 637 instrumented sites in Switzerland and Japan, respectively; we collected topographical indicators at various spatial extents and focused on features shared by the geological descriptions of the two countries. We paired this proxy database with a companion dataset of site amplification factors at 10 frequencies within 0.5–20 Hz, empirically measured at the same Swiss and Japanese stations. We then assessed the robustness of the correlation between individual site-condition indicators and local response by means of statistical analyses; we also compared the proxy-amplification relations at Swiss versus Japanese sites. Finally, we tested the prediction of site amplification by feeding ensembles of indirect parameters to a neural network (NN) structure. The main results are: (1) indirect indicators show higher correlation with site amplification in the low-frequency range (0.5–3.33 Hz); (2) topographical parameters relate to local response primarily not because of topographical amplification effects but because topographical features correspond to properties of the subsurface, and hence to stratigraphic amplification; (3) large-scale topographical indicators relate to low-frequency response, and smaller-scale indicators to higher-frequency response; (4) site amplification versus indirect proxy relations show more marked regional variability than those for direct indicators; and (5) the NN-based prediction of site response performs best in the 1.67–5 Hz band, with both geological and topographical proxies provided as input; topographical indicators alone perform better than geological parameters.
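The paper's NN architecture and training setup are not given here; the sketch below is a generic stand-in showing how ensembles of indirect proxies could be mapped to amplification factors at several frequencies with a small feed-forward regressor, using placeholder data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: one row per station, columns = indirect proxies (topographical
# indicators at several spatial extents, encoded geological classes, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))     # proxy matrix (illustrative values only)
Y = rng.normal(size=(600, 10))     # amplification factors at the 10 target frequencies

# Small feed-forward regressor mapping proxies to multi-frequency amplification
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, Y)
predicted = model.predict(X[:5])   # amplification estimates for five sites
```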


2019 ◽  
Author(s):  
Harry Minas

Abstract Objective: There has been increased attention in recent years to mental health, quality of life, stress and academic performance among university students, and to the possible influence of learning styles. Brief, reliable questionnaires are useful in large-scale multivariate research designs, such as the largely survey-based research on the well-being and academic performance of university students. The objective of this study was to examine the psychometric properties of a briefer version of the 39-item Adelaide Diagnostic Learning Inventory. Results: In two survey samples of medical and physiotherapy students, a 21-item version, the Adelaide Diagnostic Learning Inventory – Brief (ADLIB), was shown to have the same factor structure as the parent instrument, and the factor structure of the brief instrument was found to generalise across students of medicine and physiotherapy. Sub-scale reliability estimates were of the same order of magnitude as those of the parent instrument. Sub-scale inter-correlations, inter-factor congruence coefficients, and correlations between ADLIB sub-scale scores and several external measures provide support for the construct and criterion validity of the instrument.

