point data
Recently Published Documents

Total documents: 877 (five years: 194)
H-index: 42 (five years: 5)

Author(s):  
Dmitry Kolomenskiy ◽  
Ryo Onishi ◽  
Hitoshi Uehara

Abstract A wavelet-based method for compression of three-dimensional simulation data is presented and its software framework is described. It uses wavelet decomposition and subsequent range coding with quantization suitable for floating-point data. The effectiveness of this method is demonstrated by applying it to example numerical tests, ranging from idealized configurations to realistic global-scale simulations. The novelty of this study is in its focus on assessing the impact of compression on post-processing and restart of numerical simulations.
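
A minimal sketch of the general idea, assuming PyWavelets is available: the field is wavelet-decomposed, the coefficients are uniformly quantized, and the integer stream is entropy-coded. The paper's range coder is replaced here by zlib purely for illustration, and the function names and parameters are illustrative, not the authors' framework.

```python
# Sketch: wavelet compression of a 3-D floating-point field with uniform
# quantization; zlib stands in for the range coder described in the paper.
import numpy as np
import pywt
import zlib

def compress_field(field, wavelet="db2", level=2, quant_step=1e-3):
    """Wavelet-decompose a 3-D array, quantize coefficients, entropy-code them."""
    coeffs = pywt.wavedecn(field, wavelet=wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    q = np.round(arr / quant_step).astype(np.int32)   # uniform quantization
    payload = zlib.compress(q.tobytes())              # stand-in for range coding
    return payload, slices, q.shape

def decompress_field(payload, slices, shape, wavelet="db2", quant_step=1e-3):
    q = np.frombuffer(zlib.decompress(payload), dtype=np.int32).reshape(shape)
    arr = q.astype(np.float64) * quant_step           # dequantize
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet=wavelet)

field = np.random.rand(64, 64, 64)
blob, slices, shape = compress_field(field)
restored = decompress_field(blob, slices, shape)
restored = restored[tuple(slice(s) for s in field.shape)]  # trim possible padding
print(field.nbytes, len(blob), np.abs(field - restored).max())
```

The quantization step controls the trade-off the paper studies: a coarser step gives stronger compression at the cost of larger errors in post-processing and restarted runs.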


2022 ◽  
Author(s):  
Pradnya NP Ghoderao ◽  
Duraisami Dhamodharan ◽  
Hun-Soo Byun

Cloud point data for two- and three-component poly(tridecyl methacrylate) [P(TDMA)] mixtures in supercritical CO2 and dimethyl ether (DME) have been obtained experimentally with a variable-volume view cell at high pressure.


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yanjie Li ◽  
He Mao

The rise of big data in the field of education provides an opportunity to support college students’ growth and development. Establishing a big-data-based personalized student management mode in universities will shift personalized student management from an empirical mode to a scientific one, from passive response to active early warning, and from reliance on point data to holistic data, thereby improving the efficiency and quality of personalized student management. In this paper, using recent ideas and techniques in deep learning such as self-supervised learning and multitask learning, we propose F-BERT, an open-source educational big data pretrained language model based on the BERT architecture. F-BERT can effectively and automatically extract knowledge from educational big data and memorize it in the model without task-specific changes to the model structure, so that it can be applied directly to various downstream tasks in the educational big data domain. The experiments demonstrate that vanilla F-BERT outperformed the two vanilla BERT-based baseline models by 0.06 and 0.03 percent in accuracy, respectively.
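
A hedged sketch of the downstream use the abstract describes: fine-tuning a generic BERT checkpoint for a text-classification task with Hugging Face transformers. The checkpoint name, texts, and labels are placeholders; the F-BERT weights themselves are not shown here.

```python
# Sketch: applying a BERT-style pretrained model to a downstream
# classification task without changing the model architecture.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "bert-base-chinese" is a stand-in for an educational-domain checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=2)

texts = ["example student record text", "another educational record"]   # placeholder data
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)   # forward pass returns loss and logits
outputs.loss.backward()                   # one optimizer step would follow in training
print(outputs.logits.shape)
```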


MAUSAM ◽  
2021 ◽  
Vol 43 (1) ◽  
pp. 21-28
Author(s):  
P. L. KULKARNI ◽  
D. R. TALWALKAR ◽  
SATHY NAIR ◽  
S. G. NARKHEDKAR ◽  
S. RAJAMANI

In the present study, the kinematic divergence computed from ECMWF grid point data at 850 hPa is enhanced by using the relationship between OLR and divergence. This enhanced divergence is used to compute the velocity potential, from which the divergent part of the wind is obtained. To obtain the rotational part of the wind, we computed the vorticity from the wind data, then the stream function, and derived the rotational part of the wind from the stream function. The total wind is the combination of the divergent part obtained from the modified velocity potential (using OLR data) and the rotational part from the unmodified stream function. This total wind field is used as the initial guess for univariate objective analysis by an optimum interpolation scheme, so that the initial guess field contains a more realistic divergent part of the wind. Consequently, the analysed field will also contain the divergent part of the wind.
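
A minimal sketch of the decomposition step on a doubly periodic Cartesian grid: divergence and vorticity are inverted to the velocity potential and stream function by a Poisson solve, and the divergent and rotational wind parts are recovered from them. The study works on the sphere, where a spherical-harmonic solver would be used instead; this FFT-based version is only illustrative.

```python
# Sketch: split a wind field into divergent and rotational parts via
# velocity potential (chi) and stream function (psi).
import numpy as np

def poisson_fft(rhs, dx, dy):
    """Solve del^2 f = rhs with periodic boundaries via FFT."""
    ny, nx = rhs.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dy)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    k2[0, 0] = 1.0                       # avoid division by zero for the mean mode
    f_hat = np.fft.fft2(rhs) / k2
    f_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(f_hat))

def split_wind(u, v, dx, dy):
    div = np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)
    vort = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
    chi = poisson_fft(div, dx, dy)       # velocity potential
    psi = poisson_fft(vort, dx, dy)      # stream function
    u_div, v_div = np.gradient(chi, dx, axis=1), np.gradient(chi, dy, axis=0)
    u_rot, v_rot = -np.gradient(psi, dy, axis=0), np.gradient(psi, dx, axis=1)
    return (u_div, v_div), (u_rot, v_rot)
```

In the study's workflow, the divergence field would be modified with the OLR relationship before the chi solve, while psi is kept unmodified.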


Atmosphere ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 49
Author(s):  
Nikolay V. Abasov ◽  
Viacheslav M. Nikitin ◽  
Tamara V. Berezhnykh ◽  
Evgeny N. Osipchuk

The paper is concerned with a methodological approach to monitoring the state of atmospheric parameters in the catchment area of Lake Baikal. It includes real-time analysis of distributed observational data, the determination of analog years according to a preset proximity of comparative indicators, and the most probable long-term predictive distributions of surface temperature, precipitation, pressure, and geopotential with a lead time of up to 9–12 months. We have developed the information-analytical system GeoGIPSAR to conduct real-time analysis of spatial and point data by various processing methods and to obtain long-term prognostic estimates of water inflow into the lake.
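
An illustrative sketch of the analog-year step: historical years are ranked by the proximity of their standardized indicator vectors to the current year's. The indicator values and the distance metric are hypothetical; GeoGIPSAR's actual comparative indicators are not reproduced here.

```python
# Sketch: rank candidate analog years by Euclidean distance between
# standardized indicator vectors (values below are placeholders).
import numpy as np

history = {          # year -> [temperature anomaly, precipitation anomaly, pressure anomaly]
    1998: [0.4, -0.2, 0.1],
    2003: [0.1,  0.3, -0.4],
    2015: [0.5, -0.1, 0.2],
}
current = np.array([0.45, -0.15, 0.15])

years = np.array(list(history.keys()))
matrix = np.array(list(history.values()))
mean, std = matrix.mean(axis=0), matrix.std(axis=0)
scaled = (matrix - mean) / std
target = (current - mean) / std

dist = np.linalg.norm(scaled - target, axis=1)
for y, d in sorted(zip(years, dist), key=lambda p: p[1]):
    print(y, round(float(d), 3))     # the closest years are the analogs
```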


2021 ◽  
Author(s):  
SATYAJIT DAS ◽  
DIPESH ROY ◽  
RAJIB MITRA

Abstract Several natural disasters occur on the Earth, and landslides are one of them. Darjeeling Himalaya is one of the world's young fold mountain areas and often suffers from landslide hazards. Hence, this study identifies landslide susceptibility zones in the Ragnu Khola river basin of the Darjeeling Himalayan region by applying a geospatial-based MCDM technique. The major goal of this research is to determine whether the GIS-based multi-criteria decision-making (MCDM) technique is valid for mapping landslide susceptibility zones (LSZ) and, if so, how well it describes the LSZ in the study area. The MCDM evaluation is applied to determine weight values for integrating different thematic layers of river morphometry, namely Drainage Diversity (DD) and Relief Diversity (RD) parameters. Both DD and RD have significant impacts on landslide intensity. Hence, both layers are combined using the analytical hierarchy process (AHP) of the MCDM technique to produce the final LSZ map. The final result has been validated by ROC analysis using landslide occurrence point data obtained from the Geological Survey of India (GSI). The outcome of the study shows that 1.45% and 17.83% of the region fall in the 'very high' and 'high' LSZ, located near Mull Gaon, Sanchal forest, and Alubri basty. Most of the area (47.70%) falls in the 'moderate' LSZ. Only 1.32% and 31.7% fall in the 'very low' and 'low' LSZ, respectively. The descriptive capability of the technique for LSZ is significant, as the area under the curve (AUC) is 72.10%. Validation using the frequency density of landslides (FDL) also indicates that the 'very high' LSZ is associated with the maximum FDL (2.19/km2). The work will be useful for improving the overall socio-economic condition of such a tectonically sensitive region through proper and effective planning.
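
A small sketch of the AHP weighting step, assuming a pairwise comparison matrix is given: the layer weights come from the principal eigenvector, and consistency is checked with the consistency ratio. The 3x3 judgement matrix below is illustrative, not the study's actual comparisons.

```python
# Sketch: AHP weights from a pairwise comparison matrix (placeholder values).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = 0.58                                     # Saaty random index for n = 3
print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))
```

A consistency ratio below about 0.1 is usually taken to mean the pairwise judgements are acceptable before the weighted layers are overlaid into the LSZ map.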


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Szymon Zieliński ◽  
Stanisław Kostecki ◽  
Paweł Stefanek

Abstract The mining of underground deposits causes water to flow into the workings and necessitates pumping it to the surface. The mining plant of KGHM Polska Miedź S.A. extracts copper ore in plant branches with different hydrogeological conditions. The water inflowing into the workings is characterised by variable mineralisation, which depends on the location of the branch. In the south-western part of the deposit, a low-mineralised stream with a relatively high flow rate can be observed, while the outflow of highly saline waters occurs in the north-eastern branch. Despite efforts to use the pumped-off mine waters industrially, it is necessary to discharge them into the Odra River. Reducing the environmental impact on the Odra River is one of KGHM's goals, which is being implemented by stabilising its salt concentration at a safe level. The paper presents the results of a 3D simulation of brine plume propagation based on a numerical model of advection–diffusion and turbulent flow. Bathymetric data from a section of the river approximately 500 m long and point data from an Odra water quality survey were used to develop and validate the model. The paper discusses the types of factors that minimise the impact of brine discharge. The developed model will be used in the future to propose solutions that accelerate the mixing of mine waters with the waters of the Odra River.
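
A minimal one-dimensional illustration of the advection-diffusion transport that the brine-plume model solves in three dimensions (the real model also couples a turbulence closure); the velocity, diffusivity, and grid values are placeholders.

```python
# Sketch: explicit finite-difference step for 1-D advection-diffusion of
# a brine concentration field (periodic boundaries, placeholder parameters).
import numpy as np

nx, dx, dt = 200, 2.5, 0.5          # grid points, spacing [m], time step [s]
u, D = 0.4, 0.8                     # flow velocity [m/s], diffusivity [m^2/s]
c = np.zeros(nx)
c[20:25] = 1.0                      # initial brine concentration patch

for _ in range(500):
    adv = -u * (c - np.roll(c, 1)) / dx                          # upwind advection
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # central diffusion
    c = c + dt * (adv + dif)

print("plume peak:", round(float(c.max()), 4), "at x =", int(c.argmax()) * dx, "m")
```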


2021 ◽  
Vol 13 (24) ◽  
pp. 5135
Author(s):  
Yahya Alshawabkeh ◽  
Ahmad Baik ◽  
Ahmad Fallatah

The work described in the paper emphasizes the importance of integrating imagery and terrestrial laser scanning (TLS) techniques to optimize the geometry and visual quality of Heritage BIM. The fusion-based workflow was applied during the recording of Zee Ain Historical Village in Saudi Arabia. The village is a unique example of traditional human settlements and represents a complex natural and cultural heritage site. The proposed workflow divides data integration into two levels. At the basic level, UAV photogrammetry with enhanced mobility and visibility is used to map the rugged terrain and supplement TLS point data in upper and inaccessible building zones where data shadows originate. The merging of point clouds ensures that the building’s overall geometry is correctly rebuilt and that data interpretation is improved during HBIM digitization. In addition to correct geometry, texture mapping is particularly important in the area of cultural heritage. Constructing a realistic texture remains a challenge in HBIM; because the standard textures and materials provided in BIM libraries do not allow for reliable representation of heritage structures, mapping and sharing information are not always truthful. Therefore, at the second level, the workflow proposes a true-orthophoto texturing method for HBIM models by combining close-range imagery and laser data. True orthophotos have a uniform scale that depicts all objects in their respective planimetric positions, providing reliable and realistic mapping. The process begins with the development of a Digital Surface Model (DSM) by sampling TLS 3D points in a regular grid, with each cell uniquely associated with a model point. Each DSM cell is then projected into the corresponding perspective imagery in order to map the relevant spectral information. The method allows for flexible data fusion and image capture using either a TLS-installed camera or a separate camera at the optimal time and viewpoint for radiometric data. The developed workflows demonstrated adequate results in terms of complete and realistic textured HBIM, allowing for a better understanding of complex heritage structures.
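
A hedged sketch of the texturing step the abstract outlines: a DSM cell's 3D point is projected into a calibrated perspective image with a pinhole model so its colour can be sampled. The camera intrinsics and pose below are placeholders, not the paper's calibration.

```python
# Sketch: project a DSM grid point into a calibrated image (pinhole model).
import numpy as np

K = np.array([[3000.0, 0.0, 2000.0],      # focal length and principal point [px]
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                             # camera rotation (world -> camera), placeholder
t = np.array([0.0, 0.0, 10.0])            # camera translation, placeholder

def project(point_world):
    """Return pixel (u, v) of a 3-D world point, or None if it lies behind the camera."""
    p_cam = R @ point_world + t
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

dsm_cell = np.array([1.2, -0.8, 2.5])     # one DSM grid point (x, y, elevation)
print(project(dsm_cell))                  # pixel where its colour would be sampled
```

Repeating this projection for every DSM cell, while checking visibility against the surface itself, is what yields a true orthophoto free of relief displacement.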


2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Ngoc Quy BUI ◽  
Dinh Hien LE ◽  
Anh Quan DUONG ◽  
Quoc Long NGUYEN

LiDAR technology has been widely adopted as a proper method for land cover classification. Recently, with the development of technology, LiDAR systems can capture high-resolution multispectral band images together with a high-density LiDAR point cloud. Therefore, new opportunities have opened for more precise automatic land-use classification methods utilizing LiDAR data. This article introduces a combined technique of point cloud classification algorithms. The algorithms include ground detection, building detection, and close point classification; the classification is based on the point clouds’ attributes. The main attributes are height, intensity, and the NDVI index calculated from four colour bands extracted from multispectral images for each point. Data from the Leica City Mapper LiDAR system over an area of 80 ha in Quang Xuong town, Thanh Hoa province, Vietnam were used to deploy the classification. The data are classified into eight different types of land use consisting of asphalt road, other ground, low vegetation, medium vegetation, high vegetation, building, water, and other objects. The classification workflow was implemented in the TerraSolid suite, and the automated process achieved 97% overall accuracy of classified points.
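
An illustrative rule-based sketch of the attribute-driven classification: NDVI computed from the red and near-infrared bands plus height above ground drive simple class rules. The thresholds and sample points are placeholders, not TerraSolid's actual algorithms.

```python
# Sketch: classify LiDAR points from NDVI and height above ground
# (attribute values and thresholds are illustrative only).
import numpy as np

points = np.array([
    # height_above_ground [m], intensity, red, nir
    [0.1, 120,  90,  60],
    [0.4,  80,  40, 160],
    [6.0,  60,  35, 180],
    [9.5, 200,  70,  75],
])

hag, intensity, red, nir = points.T
ndvi = (nir - red) / (nir + red)

labels = np.full(len(points), "other ground", dtype=object)
labels[(ndvi > 0.3) & (hag < 0.5)] = "low vegetation"
labels[(ndvi > 0.3) & (hag >= 0.5) & (hag < 5.0)] = "medium vegetation"
labels[(ndvi > 0.3) & (hag >= 5.0)] = "high vegetation"
labels[(ndvi <= 0.3) & (hag >= 2.5)] = "building"

print(list(zip(np.round(ndvi, 2), labels)))
```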

