Hydro-mechanical effects of seismic events in crystalline rock barriers

2021 · Vol 1 · pp. 179-180
Author(s): Dominik Kern, Fabien Magri, Victor I. Malkovsky, Thomas Nagel

Abstract. Under ideal conditions, owing to its extremely low matrix permeability, crystalline rock can constitute a suitable hydrogeological barrier. Mechanically, its high strength and stiffness provide advantages during repository construction and for long-term stability. However, crystalline rock usually occurs in a fractured form, which can drastically alter hydro-mechanical (HM) barrier functions due to increased permeability and decreased strength. Seismic events have the potential to alter these HM properties by activating faults, increasing their transmissibility, creating new fractures or altering network connectivity (De Rubeis et al., 2010). It is therefore of high importance to build computational models that allow assessment of the HM effects of seismic events on a Deep Geologic Repository (DGR) in crystalline rock, as illustrated in Fig. 1. For this purpose, we consider a DGR for high-level waste in crystalline rock (Proterozoic and Archaean gneiss complexes) in Russia (Yeniseysky site) that is located close to a potentially seismically active area (Jobmann, 2016). Here, we present a coupled HM simulation, using OpenGeoSys (Kolditz et al., 2012), of a large-scale, three-dimensional finite-element model of the Yeniseysky site to assess the consequences of seismically induced stress-field changes on the local stress field and on fluid flow. This research also provides an outlook on current model development geared towards a more detailed assessment of seismically induced hydro-mechanical processes in porous and fractured rocks.
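As background for the coupling, the linear poroelastic (Biot-type) balance equations that typically underlie such coupled HM models are sketched below; this is the standard textbook formulation, not necessarily the exact constitutive setup of the OpenGeoSys model described above:

\nabla \cdot \left( \boldsymbol{\sigma}' - \alpha\, p\, \mathbf{I} \right) + \rho \mathbf{g} = \mathbf{0},
\qquad
S \frac{\partial p}{\partial t} + \alpha \frac{\partial \varepsilon_v}{\partial t} - \nabla \cdot \left[ \frac{k}{\mu_f} \left( \nabla p - \rho_f \mathbf{g} \right) \right] = q

Here \boldsymbol{\sigma}' is the effective stress tensor, \alpha the Biot coefficient, p the pore pressure, S a storage coefficient, \varepsilon_v the volumetric strain, k the (possibly fracture-enhanced) permeability, \mu_f the fluid viscosity, and q a source term; seismically increased fracture transmissibility enters through k.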

2011 · Vol 24 (1) · pp. 45-58
Author(s): Jiří Žák, Igor Soejono, Vojtěch Janoušek, Zdeněk Venera

Abstract. At Pitt Point, on the east coast of Graham Land (Antarctic Peninsula), Early to Middle Jurassic (Toarcian–Aalenian) rhyolite dykes form two coevally emplaced NNE–SSW- and E–W-trending sets. The nearly perpendicular dyke sets define a large-scale chocolate-tablet structure, implying biaxial principal extension in the WNW–ESE and N–S directions. Along the nearby north-eastern slope of Mount Reece, the WNW–ESE set locally dominates, suggesting variations in the direction and amount of extension. Magnetic fabric in the dykes, revealed using the anisotropy of magnetic susceptibility (AMS) method, indicates dip-parallel to dip-oblique (?upward) magma flow. The dykes are interpreted as representing sub-volcanic feeder zones above a felsic magma source. The dyke emplacement was synchronous with the initial stages of the Weddell Sea opening during Gondwana break-up, but it remains unclear whether it was driven by the regional stress field, by a local stress field above a larger plutonic body, or by an interaction of both.


2011 · Vol 133 (6)
Author(s): Si Y. Lee, Steve J. Hensel, Chris De Bock

The engineering design of the disposal of high-level waste (HLW) packages in a geologic repository requires a thermal analysis to forecast the temperature history of the packages. Calculated temperatures are used to demonstrate compliance with criteria for waste acceptance into the geologic disposal gallery system and as input to assess the transient thermal characteristics of the vitrified HLW package. The objective of the work was to evaluate the thermal performance of the supercontainer containing the vitrified HLW in a non-backfilled and unventilated underground disposal gallery. To achieve this objective, transient computational models for a geologic vitrified HLW package were developed using a computational fluid dynamics method, and calculations for the HLW disposal gallery of the current Belgian geological repository reference design were performed. An initial simplified two-dimensional model was used to conduct parametric sensitivity studies to better understand the geologic system's thermal response. The effects of decay heat, the number of codisposed supercontainers, domain size, humidity, thermal conductivity, and thermal emissivity were studied. A more accurate three-dimensional model was then developed by considering the conduction–convection cooling mechanism coupled with radiation; the effect of the number of supercontainers (3, 4, and 8) was studied in more detail, as well as a bounding case with zero heat flux at both ends. The modeling methodology and the results of the sensitivity studies are presented.
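To make the underlying energy balance concrete, below is a minimal lumped-capacitance sketch in Python of a single heat-generating package cooling by natural convection and radiation to the gallery wall. All parameter values are hypothetical placeholders, not the Belgian reference-design data, and the actual CFD models resolve the full conduction, convection, and radiation fields rather than a single lumped node.

# Lumped-capacitance sketch: decaying heat source, convective and
# radiative losses to a fixed gallery-wall temperature. Illustrative only.
import numpy as np

Q0 = 1500.0        # initial decay heat per package [W] (assumed)
half_life = 30.0   # effective decay half-life [yr] (assumed)
m_cp = 5.0e7       # thermal mass of the package [J/K] (assumed)
h = 3.0            # natural-convection coefficient [W/m^2 K] (assumed)
A = 15.0           # package surface area [m^2] (assumed)
eps = 0.8          # surface emissivity (assumed)
sigma = 5.67e-8    # Stefan-Boltzmann constant [W/m^2 K^4]
T_wall = 298.0     # gallery wall temperature [K] (assumed constant)

dt = 3600.0 * 24.0                  # one-day time step [s]
years = 50.0
T = 298.0                           # initial package temperature [K]
lam = np.log(2.0) / (half_life * 365.0 * 24.0 * 3600.0)

for i in range(int(years * 365)):
    Q = Q0 * np.exp(-lam * i * dt)                  # decaying heat source
    q_conv = h * A * (T - T_wall)                   # convective loss
    q_rad = eps * sigma * A * (T**4 - T_wall**4)    # radiative loss
    T += dt * (Q - q_conv - q_rad) / m_cp           # explicit Euler update

print(f"Package temperature after {years:.0f} years: {T - 273.15:.1f} C")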


Geosciences · 2020 · Vol 10 (1) · pp. 30
Author(s): Scott W. Tyler

The study of the hydrology of arid regions expanded greatly at the end of the 20th century as humans sought to reduce groundwater pollution from landfills, waste dumps and other forms of land disposal. Arid regions were historically viewed as wastelands where little or no water percolated to the underlying water table; the discovery of large-scale contamination beneath arid disposal sites such as the Hanford nuclear complex in eastern Washington jumpstarted an industry in studying the hydrology of arid vadose zones and their transport behavior. These studies showed that, in spite of hyperaridity in many areas, precipitation often did infiltrate to deep water. The efforts at Yucca Mountain, Nevada to design a high-level nuclear repository stand out as one of the largest such studies, and one that fundamentally changed our understanding not only of water flow in fractured rocks, but also of the range of our uncertainty about hydrologic processes in arid regions. In this review and commentary, we present some of the initial concepts of flow at Yucca Mountain and the evolution of research to quantify those concepts. In light of the continued stockpiling of high-level waste and the renewed interest in opening Yucca Mountain for high-level waste, we then focus on the significant surprises and unanswered questions that remained after the end of the characterization and licensing period; questions that continue to demonstrate the challenges of a geologic repository and our uncertainty about critical processes for the long-term, safe storage or disposal of some of our most toxic waste products.


2013 · Vol 10 (11) · pp. 13191-13229
Author(s): L. Gudmundsson, S. I. Seneviratne

Abstract. Large-scale variations of terrestrial water storages and fluxes are key aspects of the Earth system, as they control ecosystem processes, feed back on weather and climate, and form the basis for water resources management. However, relevant observations are limited, and the process models used for estimation are highly uncertain. These models rely on approximations of terrestrial processes as well as on location-specific parameters (e.g. soil types, topography) to translate atmospheric forcing (e.g. precipitation, net radiation) into terrestrial water variables (e.g. soil moisture, river flow). To date it is unclear which processes and parameters should be included to model terrestrial water systems on regional to global scales. Using a data-driven approach, we show that skillful estimates of monthly water dynamics in Europe can be derived from information on atmospheric drivers alone and that the inclusion of land parameters does not improve the estimates. The results highlight that substantial parts of terrestrial water dynamics are controlled by atmospheric forcing, which dominates over land parameters. This is not reflected in current model developments, which strive to incorporate an increasing number of small-scale processes and related parameters. Our results thus point to major potential for theory and model development, with important implications for water resources modelling, seasonal forecasting and climate change projections.
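The experimental logic can be illustrated with a short sketch on synthetic data: estimate a terrestrial water variable from atmospheric forcing alone, then check whether adding a static land parameter improves cross-validated skill. The estimator, predictors, and data below are illustrative stand-ins, not the paper's actual setup.

# Data-driven skill comparison: forcing-only vs. forcing + land parameter.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
precip = rng.gamma(2.0, 40.0, n)        # monthly precipitation [mm] (synthetic)
net_rad = rng.normal(100.0, 30.0, n)    # net radiation [W/m^2] (synthetic)
soil = rng.integers(0, 5, n)            # static soil class (synthetic)
runoff = 0.5 * precip - 0.2 * net_rad + rng.normal(0.0, 10.0, n)  # synthetic target

X_atm = np.column_stack([precip, net_rad])
X_all = np.column_stack([precip, net_rad, soil])

for name, X in [("atmospheric forcing only", X_atm),
                ("forcing + land parameter", X_all)]:
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    r2 = cross_val_score(model, X, runoff, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")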


Author(s): Hong Zhang, Kumar Anupam, Athanasios Skarpas, Cor Kasbergen, Sandra Erkens

In the Netherlands, more than 80% of highways are surfaced with porous asphalt (PA) mixes. The benefits of using PA mixes include, among others, noise reduction and improved skid resistance. However, pavements with PA mixes are known to have shorter lifetimes and higher maintenance costs than traditional dense asphalt mixes. Raveling is one of the most prominent distresses that occur on PA mix pavements. To analyze the raveling distress of a PA mix pavement, the stress and strain fields at the component level are required. Computational models based on finite element methods (FEM), discrete element methods (DEM), or both can be used to compute these local stress and strain fields, but they require the development of large FEM meshes and large-scale computational facilities. As an alternative, the homogenization technique provides a way to calculate the stress and strain fields at the component level without the need for much computational power. This study proposes a new approach to analyze the raveling distress of a PA mix pavement using the homogenization technique. To demonstrate the application of the proposed approach, a real field-like example is presented, in which the Mori–Tanaka model is used as the homogenization technique and the commonly available pavement analysis tool 3D-MOVE is used to compute the response of the analyzed pavement. In general, it was concluded that the homogenization technique can be a reliable and effective way to analyze the raveling distress of a PA mix pavement.
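To illustrate the kind of closed-form estimate the Mori–Tanaka scheme provides, here is a minimal sketch for the simplest case of a two-phase composite with spherical inclusions (Benveniste form). The paper's actual PA-mix homogenization involves non-spherical aggregates and a viscoelastic mastic; the moduli below are placeholder values.

# Mori-Tanaka effective moduli for spherical inclusions in an isotropic matrix.
def mori_tanaka_spherical(K_m, G_m, K_i, G_i, c):
    """Matrix (K_m, G_m), inclusions (K_i, G_i), inclusion volume fraction c."""
    # Eshelby-based reference moduli of the matrix for spherical inclusions
    K_star = 4.0 * G_m / 3.0
    G_star = G_m * (9.0 * K_m + 8.0 * G_m) / (6.0 * (K_m + 2.0 * G_m))
    K_eff = K_m + c * (K_i - K_m) * (K_m + K_star) / (
        K_m + K_star + (1.0 - c) * (K_i - K_m))
    G_eff = G_m + c * (G_i - G_m) * (G_m + G_star) / (
        G_m + G_star + (1.0 - c) * (G_i - G_m))
    return K_eff, G_eff

# Example: stiff mineral inclusions in a soft bituminous matrix (assumed values, GPa)
K_eff, G_eff = mori_tanaka_spherical(K_m=3.0, G_m=1.0, K_i=50.0, G_i=30.0, c=0.6)
print(f"Effective K = {K_eff:.2f} GPa, effective G = {G_eff:.2f} GPa")

The formula reduces to the matrix moduli at c = 0 and to the inclusion moduli at c = 1, which is a quick sanity check on any implementation.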


2020 · Vol 27
Author(s): Zaheer Ullah Khan, Dechang Pi

Background: S-sulfenylation (S-sulphenylation, or sulfenic acid formation) of proteins is a special kind of post-translational modification that plays an important role in various physiological and pathological processes such as cytokine signaling, transcriptional regulation, and apoptosis. Given this significance, and to complement existing wet-lab methods, several computational models have been developed for predicting sulfenylation cysteine (SC) sites. However, the performance of these models has not been satisfactory, due to inefficient feature schemes, severe class imbalance, and the lack of an intelligent learning engine. Objective: Our motivation in this study is to establish a strong and novel computational predictor for discriminating sulfenylation from non-sulfenylation sites. Methods: We report an innovative bioinformatics feature encoding tool, named DeepSSPred, in which the encoded features are obtained via an n-segmented hybrid feature scheme; the resampling technique called synthetic minority oversampling (SMOTE) is then employed to cope with the severe imbalance between SC sites (minority class) and non-SC sites (majority class). A state-of-the-art 2D convolutional neural network (2D-CNN) was employed, with a rigorous 10-fold jackknife cross-validation technique for model validation. Results: The proposed framework, with its strong discrete presentation of the feature space, its machine learning engine, and an unbiased presentation of the underlying training data, yielded an excellent model that outperforms all existing established studies. The proposed approach is 6% higher in terms of MCC than the previous best method, which failed to provide sufficient details on an independent dataset. Compared with the second-best method, the model obtained increases of 7.5% in accuracy, 1.22% in Sn, 12.91% in Sp and 13.12% in MCC on the training data, and of 12.13% in accuracy, 27.25% in Sn, 2.25% in Sp and 30.37% in MCC on an independent dataset. These empirical analyses show the superlative performance of the proposed model over both the training and independent datasets in comparison with existing studies. Conclusion: In this research, we have developed a novel sequence-based automated predictor for SC sites, called DeepSSPred. Empirical simulations with a training dataset and an independent validation dataset have revealed the efficacy of the proposed model. The good performance of DeepSSPred is due to several factors, such as the novel discriminative feature encoding schemes, the SMOTE technique, and the careful construction of the prediction model through a tuned 2D-CNN classifier. We believe that this work provides insight into the further prediction of S-sulfenylation characteristics and functionalities, and we hope that the developed predictor will be of significant help for the large-scale discrimination of unknown SC sites in particular and for the design of new pharmaceutical drugs in general.
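A schematic sketch of the two pipeline stages named above, SMOTE rebalancing followed by a small 2D CNN, is given below. The feature dimensions, architecture and hyperparameters are hypothetical placeholders, not DeepSSPred's actual n-segmented hybrid encoding or tuned network.

# SMOTE oversampling of the minority class, then a 2D CNN classifier.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20 * 20))       # flattened per-site feature maps (synthetic)
y = (rng.random(1000) < 0.1).astype(int)   # ~10% positives: severe imbalance

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)  # balance the classes
X_bal = X_bal.reshape(-1, 20, 20, 1)                     # back to 2D feature maps

model = keras.Sequential([
    keras.layers.Input(shape=(20, 20, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),   # SC-site probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_bal, y_bal, epochs=3, batch_size=64, validation_split=0.2)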


Cancers · 2021 · Vol 13 (9) · pp. 2111
Author(s): Bo-Wei Zhao, Zhu-Hong You, Lun Hu, Zhen-Hao Guo, Lei Wang, ...

Identification of drug–target interactions (DTIs) is a significant step in the drug discovery or repositioning process. Compared with time-consuming and labor-intensive in vivo experimental methods, computational models can provide high-quality DTI candidates in an instant. In this study, we propose a novel method called LGDTI to predict DTIs based on large-scale graph representation learning. LGDTI captures both the local and the global structural information of the graph: the first-order neighbor information of nodes is aggregated by a graph convolutional network (GCN), while the high-order neighbor information of nodes is learned by the graph embedding method DeepWalk. Finally, the two kinds of features are fed into a random forest classifier to train and predict potential DTIs. The results show that our method obtained an area under the receiver operating characteristic curve (AUROC) of 0.9455 and an area under the precision–recall curve (AUPR) of 0.9491 under 5-fold cross-validation, and we compare the presented method with existing state-of-the-art methods. These results imply that LGDTI can efficiently and robustly capture undiscovered DTIs. The proposed model is expected to bring new inspiration and provide novel perspectives to relevant researchers.
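The two-view idea can be sketched in a few lines: a one-hop, GCN-style neighbor aggregation supplies the local view, DeepWalk-style random walks plus skip-gram embeddings supply the global view, and concatenated pair features go to a random forest. This is a conceptual sketch, not the authors' implementation; the graph, walk settings, and dimensions are illustrative.

# Local + global node features for link prediction on a stand-in graph.
import numpy as np
import networkx as nx
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(100, 0.05, seed=0)   # stand-in for a DTI graph
X = rng.normal(size=(100, 16))                # initial node attributes (synthetic)

# Local view: one mean-aggregation step over (A + I), i.e. GCN-style smoothing
A = nx.to_numpy_array(G) + np.eye(100)
X_local = (A / A.sum(axis=1, keepdims=True)) @ X

# Global view: DeepWalk = truncated random walks + skip-gram embeddings
walks = [[str(n)] for n in G.nodes]           # one walk seeded at every node
for walk in walks:
    for _ in range(10):
        nbrs = list(G.neighbors(int(walk[-1])))
        if not nbrs:
            break
        walk.append(str(rng.choice(nbrs)))
w2v = Word2Vec(walks, vector_size=16, window=3, min_count=0, sg=1, seed=0)
X_global = np.array([w2v.wv[str(n)] for n in G.nodes])

# Pair features: concatenate both views of the two endpoints
feats = np.hstack([X_local, X_global])
pos = list(G.edges)[:50]
neg = [(rng.integers(100), rng.integers(100)) for _ in range(50)]  # crude negatives
Xp = np.array([np.hstack([feats[u], feats[v]]) for u, v in pos + neg])
yp = np.array([1] * len(pos) + [0] * len(neg))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xp, yp)
print("Training accuracy:", clf.score(Xp, yp))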


Author(s): Mythili K., Manish Narwaria

Quality assessment of audiovisual (AV) signals is important from the perspective of system design, optimization, and management of a modern multimedia communication system. However, automatic prediction of AV quality via computational models remains challenging. In this context, machine learning (ML) appears to be an attractive alternative to traditional approaches, especially when such assessment needs to be made in a no-reference (i.e., the original signal is unavailable) fashion. While the development of ML-based quality predictors is desirable, we argue that proper assessment and validation of such predictors is also crucial before they can be deployed in practice. To this end, we raise some fundamental questions about the current approach to ML-based model development for AV quality assessment, and for signal processing in multimedia communication in general. We also identify specific limitations of the current validation strategy that have implications for the analysis and comparison of ML-based quality predictors. These include a lack of consideration of: (a) data uncertainty, (b) domain knowledge, (c) the explicit learning ability of the trained model, and (d) the interpretability of the resultant model. The primary goal of this article is therefore to shed some light on these factors. Our analysis and proposed recommendations are of particular importance in light of the significant interest in ML methods for multimedia signal processing (specifically where human-labeled data are used) and the lack of discussion of these issues in the existing literature.

