Gobi Altai, Khangai and Khentii Mountains mapped by a mixed-method cartographic approach for comparative geophysical analysis

2021 · Vol 26 (52) · pp. 62-79
Author(s):  
Polina Lemenkova

Geologic and geophysical mapping has so far been limited to traditional single-method GIS-based approaches. A new approach combining the integrated analysis of geological, gravity, topographic and geomorphological data is presented for the regional characterization of the geophysical setting of Mongolia: the Gobi Altai, Khangai and Khentii Mountains with their surrounding areas. Nine new maps have been produced from high-resolution datasets: GEBCO, a gravity raster, USGS geological data and the SRTM-90 DEM geomorphological grid. The methodology combines three tools for cartographic data visualization: (i) Generic Mapping Tools (GMT); (ii) the R programming language ('raster' and 'tmap' libraries); and (iii) QGIS. The results demonstrate strong agreement between the estimated values in the gravity and topography grids, the distribution of geological units and provinces across the country, and the geomorphological landforms associated with the Altai, Khangai and Khentii mountain ranges. The highest gravity-anomaly values correspond to the ranges of the Altai and Khangai Mountains (up to 80 mGal), and high values correspond to the Khentii Mountains (20–60 mGal). By contrast, the basins of the Uvs Nuur and Khyargas Nuur show negative values (below -80 mGal). The NE- to NNE-oriented faulting and rift basins are clearly visible in the geophysical grids and geologic maps. The geomorphometric analysis, performed on the SRTM-90 DEM using R scripting, produced (1) slope, (2) aspect, (3) hillshade and (4) elevation models of Mongolia, supported by histograms of data distribution and frequency. The study contributes to cartographic methods and to the regional geological study of Mongolia.
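
As an illustration of the geomorphometric step described above, the following R sketch (not the author's original script; the DEM file name is a placeholder) derives slope, aspect and hillshade from an SRTM grid with the 'raster' package and maps the result with 'tmap':

    # Hedged sketch: geomorphometric derivatives from an SRTM DEM.
    # "srtm_mongolia.tif" is a hypothetical file name.
    library(raster)
    library(tmap)

    dem    <- raster("srtm_mongolia.tif")                   # SRTM-90 elevation grid
    slope  <- terrain(dem, opt = "slope",  unit = "radians")
    aspect <- terrain(dem, opt = "aspect", unit = "radians")
    hs     <- hillShade(slope, aspect, angle = 45, direction = 315)

    # Quick-look map: hillshade with a semi-transparent elevation overlay
    tm_shape(hs) + tm_raster(palette = "Greys", style = "cont", legend.show = FALSE) +
      tm_shape(dem) + tm_raster(alpha = 0.4, style = "cont", title = "Elevation (m)")

    # Histograms of data distribution and frequency
    hist(values(dem), main = "Elevation", xlab = "m")
    hist(values(slope) * 180 / pi, main = "Slope", xlab = "degrees")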

2020
Author(s):  
Yangtai Liu
Xiang Wang
Baolin Liu
Qingli Dong

Microrisk Lab was designed as interactive modeling freeware for parameter estimation and model simulation in predictive microbiology. The tool was developed with the R programming language and the 'Shinyapps.io' server, and designed with a fully responsive interface for internet-connected devices. A total of 36 peer-reviewed models were integrated in Microrisk Lab for parameter estimation (including primary models of bacterial growth/inactivation under static and non-isothermal conditions, secondary models of specific growth rate, and competition models of two-flora growth) and model simulation (including integrated models of deterministic or stochastic bacterial growth/inactivation under static and non-isothermal conditions). Each modeling section provides numerical and graphical results with comprehensive statistical indicators depending on the dataset and/or parameter setting. In this research, six case studies were reproduced in Microrisk Lab and compared in parallel to DMFit, GInaFiT, IPMP 2013/GraphPad Prism, Bioinactivation FE, and @Risk, respectively. The estimated and simulated results demonstrated that the performance of Microrisk Lab was statistically equivalent to that of the other existing modeling systems in most cases. Microrisk Lab offers a uniform user experience for microbial predictive modeling through its friendly interface, high integration, and interconnectivity. It may become a useful tool for microbial parameter determination and behavior simulation. Non-commercial users can freely access this application at https://microrisklab.shinyapps.io/english/.
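
For readers unfamiliar with the underlying framework, the sketch below shows the kind of Shiny workflow such a tool builds on. It is not Microrisk Lab's code; the column names and the simple log-linear inactivation model are placeholder assumptions:

    # Minimal Shiny sketch (assumed CSV columns: time, logN): upload a file,
    # fit a log-linear inactivation model, and plot the fit.
    library(shiny)

    ui <- fluidPage(
      fileInput("csv", "Time / log10(N) data (.csv)"),
      plotOutput("fit")
    )

    server <- function(input, output) {
      output$fit <- renderPlot({
        req(input$csv)
        d <- read.csv(input$csv$datapath)
        m <- lm(logN ~ time, data = d)          # first-order (log-linear) inactivation
        plot(d$time, d$logN, xlab = "Time", ylab = "log10 N")
        abline(m, col = "red")
      })
    }

    shinyApp(ui, server)                         # deployable to shinyapps.io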


Author(s):  
Ramin Nabizadeh
Mostafa Hadei

Introduction: The wide range of studies on air pollution requires accurate and reliable datasets. However, for many reasons, the measured concentrations may be incomplete or biased. An easy-to-use and reproducible exposure assessment method is therefore needed by researchers. In this article, we describe and present a series of codes written in the R programming language for handling, validating and averaging PM10, PM2.5, and O3 datasets. Findings: These codes can be used in any type of air pollution study that requires PM and ozone concentrations representative of real concentrations. We used and combined criteria from several guidelines proposed by the US EPA and the APHEKOM project to obtain an acceptable methodology. Separate .csv files for PM10, PM2.5 and O3 should be prepared as input files. After a file is imported into R, first, negative and zero concentration values are removed from the entire dataset. Then, only monitors that report at least 75% of the hourly concentrations are retained. Finally, 24-h averages and the daily maxima of 8-h moving averages are calculated for PM and ozone, respectively. As output, the codes create two different datasets. One contains the hourly concentrations of the pollutant of interest (PM10, PM2.5, or O3) at valid stations and their city-level average. The other contains the final city-level 24-h averages for PM10 and PM2.5, or the final city-level daily maximum 8-h averages for O3. Conclusion: These validated codes use a reliable methodology and eliminate the possibility of erroneous data handling and averaging. The codes are free to use without limitation, requiring only citation of this article.
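
A condensed sketch of these steps is given below. It is not the authors' published code; the file names and column names (date, hour, station, conc) are assumptions, and the 75% completeness rule is applied per station-day for brevity:

    library(dplyr)
    library(zoo)                                           # rollapply() for 8-h moving averages

    pm <- read.csv("pm25_hourly.csv")                      # hypothetical hourly file

    valid <- pm %>%
      filter(conc > 0) %>%                                 # drop zero and negative values
      group_by(station, date) %>%
      filter(n() >= 18) %>%                                # keep days with >= 75% of 24 hourly values
      ungroup()

    daily_pm <- valid %>%                                  # 24-h station means, then city mean
      group_by(station, date) %>%
      summarise(mean24 = mean(conc), .groups = "drop") %>%
      group_by(date) %>%
      summarise(city24 = mean(mean24), .groups = "drop")

    o3 <- read.csv("o3_hourly.csv") %>% filter(conc > 0)   # hypothetical hourly file
    daily_o3 <- o3 %>%                                     # daily maximum of 8-h moving averages
      group_by(station, date) %>%
      arrange(hour, .by_group = TRUE) %>%
      summarise(max8h = max(rollapply(conc, 8, mean, partial = TRUE)), .groups = "drop")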


2021 · Vol 13 (1) · pp. 15
Author(s):  
Junior Pastor Pérez-Molina
Carola Scholz
Roy Pérez-Salazar
Carolina Alfaro-Chinchilla
Ana Abarca Méndez
...  

Introduction: The implementation of wastewater treatment systems such as constructed wetlands has attracted growing interest over the last decade due to their low cost and high effectiveness in treating industrial and residential wastewater. Objective: To evaluate the spatial variation of physicochemical parameters in a sub-surface-flow constructed wetland system planted with Pennisetum alopecuroides (Pennisetum) and a Control (unplanted), and to provide an analysis of the spatial dynamics of physicochemical parameters using the R programming language. Methods: Each cell (Pennisetum and Control) had 12 piezometers, organized in three columns and four rows with separation distances of 3.25 m and 4.35 m, respectively. Turbidity, biochemical oxygen demand (BOD), chemical oxygen demand (COD), total Kjeldahl nitrogen (TKN), ammoniacal nitrogen (N-NH4), organic nitrogen (N-org.) and phosphorus (P-PO4-3) were measured in water at the inflow and outflow of both the Control and Pennisetum cells (n = 8). Additionally, the oxidation-reduction potential (ORP), dissolved oxygen (DO), conductivity, pH and water temperature were measured in the piezometers (n = 167). Results: No statistically significant differences between cells were found for TKN, N-NH4, conductivity, turbidity, BOD, and COD, but both the Control and Pennisetum cells showed a significant reduction in these parameters (P < 0.05). Overall, TKN and N-NH4 removal ranged from 65.8 to 84.1% and from 67.5 to 90.8%, respectively, and the decreases in turbidity, conductivity, BOD, and COD were 95.1-95.4%, 15-22.4%, 65.2-77.9% and 57.4-60.3%, respectively. Both cells showed an increasing ORP gradient along the water-flow direction, whereas conductivity showed the opposite trend (p < 0.05). However, DO, pH and temperature showed no consistent trend along the flow direction in either cell. Conclusions: The Pennisetum cell removed pollutants efficiently but performed similarly to the Control cell, so it remains unclear whether it is a superior option. The spatial variation analysis did not reveal any obstruction of flow along the constructed wetlands, but some preferential flow paths could be distinguished. An open-source R repository is provided.
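
The sketch below illustrates the kind of inflow/outflow comparison reported above. It is not the authors' repository code; the file name and column names (cell, position, BOD) are assumptions:

    library(dplyr)

    wq <- read.csv("wetland_bod.csv")            # hypothetical data: cell, position ("in"/"out"), BOD

    # Inflow vs. outflow comparison within each cell (Wilcoxon rank-sum test)
    wq %>%
      group_by(cell) %>%
      summarise(p_value = wilcox.test(BOD[position == "in"],
                                      BOD[position == "out"])$p.value)

    # Mean BOD at inflow and outflow, per cell, to express percent removal
    wq %>%
      group_by(cell, position) %>%
      summarise(mean_BOD = mean(BOD), .groups = "drop")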


2020 · Vol 11
Author(s):  
Maria-Theodora Pandi
Peter J. van der Spek
Maria Koromina
George P. Patrinos

Text mining in biomedical literature is an emerging field that has already found applications in many research areas, including genetics, personalized medicine, and pharmacogenomics. In this study, we describe a novel text-mining approach for the extraction of pharmacogenomics associations. The code used toward this end was implemented in the R programming language, either through custom scripts, where needed, or through functions from existing libraries. Articles (abstracts or full texts) corresponding to a specified query were extracted from PubMed, while concept annotations were derived from PubTator Central. Terms denoting a Mutation or a Gene, as well as Chemical terms corresponding to drug compounds, were normalized, and the sentences containing these terms were filtered and preprocessed to create appropriate training sets. Finally, after training and adequate hyperparameter tuning, four text classifiers were created and evaluated (FastText, linear-kernel SVMs, XGBoost, and Lasso and Elastic-Net regularized generalized linear models) with regard to their performance in identifying pharmacogenomics associations. Although further improvements are essential before this text-mining approach can be properly implemented in clinical practice, our study stands as a comprehensive, simplified, and up-to-date approach for the identification and assessment of research articles enriched in clinically relevant pharmacogenomics relationships. Furthermore, this work highlights a series of challenges concerning the effective application of text mining in biomedical literature, whose resolution could substantially contribute to the further development of this field.
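
Two of these steps are sketched below: retrieving PubMed records with the 'rentrez' package and fitting an Elastic-Net classifier with 'glmnet'. This is not the authors' pipeline; the query string is an example, and the random sparse matrix and labels merely stand in for the annotated document-term matrix built by the preprocessing described above:

    library(rentrez)
    library(glmnet)
    library(Matrix)

    # 1) Retrieve abstracts that match a query (example query)
    hits <- entrez_search(db = "pubmed",
                          term = "pharmacogenomics AND CYP2D6", retmax = 50)
    abstracts <- entrez_fetch(db = "pubmed", id = hits$ids,
                              rettype = "abstract", retmode = "text")

    # 2) Toy classifier: placeholder features and labels stand in for the
    #    real sentence-level document-term matrix and annotations
    x <- rsparsematrix(nrow = 100, ncol = 500, density = 0.05)
    y <- rbinom(100, 1, 0.5)
    fit <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)       # Elastic-Net
    head(predict(fit, newx = x, s = "lambda.min", type = "class"))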


Author(s):  
Roger S. Bivand

Twenty years have passed since Bivand and Gebhardt (J Geogr Syst 2(3):307-317, 2000, doi:10.1007/PL00011460) indicated that there was a good match between the then nascent open-source R programming language and environment and the needs of researchers analysing spatial data. Recalling the development of classes for spatial data presented in book form in Bivand et al. (Applied spatial data analysis with R, Springer, New York, 2008; 2nd edn., 2013), it is important to present the progress now occurring in the representation of spatial data, and the possible consequences for spatial data handling and the statistical analysis of spatial data. Beyond this, it is imperative to discuss the relationships between R-spatial software and the larger open-source geospatial software community on whose work R packages crucially depend.
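
As a small illustration of that dependence (not taken from the article), the current 'sf' package reports the versions of the open-source geospatial libraries it links against and reads vector data through GDAL:

    library(sf)

    sf_extSoftVersion()        # GEOS, GDAL, PROJ versions the package is built against

    # Reading a vector layer goes through GDAL; here the example dataset bundled with sf
    nc <- st_read(system.file("shape/nc.shp", package = "sf"))
    plot(st_geometry(nc))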

