data model
Recently Published Documents

TOTAL DOCUMENTS: 5455 (five years: 1444)
H-INDEX: 64 (five years: 11)

2022, Vol 167, pp. 108526
Author(s): Tianmei Li, Xiaosheng Si, Hong Pei, Li Sun

2022, pp. 9-35
Author(s): Badi H. Baltagi, Sophia Ding, Peter H. Egger

2022
Author(s): Julia C. Tindall, Alan M. Haywood, Ulrich Salzmann, Aisling M. Dolan, Tamara Fletcher

Abstract. Reconciling palaeodata with model simulations of the Pliocene climate is essential for understanding a world with an atmospheric CO2 concentration near 400 parts per million by volume. Both models and data indicate an amplified warming of the high latitudes during the Pliocene; however, terrestrial data suggest that Pliocene high-latitude temperatures were much higher than models can simulate. Here we show that understanding Pliocene high-latitude terrestrial temperatures is particularly difficult for the coldest months, where the temperatures obtained from models and from different proxies can vary by more than 20 °C. We refer to this mismatch as the 'warm winter paradox'. Analysis suggests the warm winter paradox could be due to a number of factors, including model structural uncertainty, proxy data not being strongly constrained by winter temperatures, uncertainties in data reconstruction methods, and the lack of a modern analogue for the Pliocene high-latitude climate. Refinements to model boundary conditions or proxy dating are unlikely to contribute significantly to resolving the warm winter paradox. For Pliocene high-latitude terrestrial summer temperatures, models and different proxies are in good agreement, and the factors that cause uncertainty in winter temperatures are shown to be much less important for the summer. Until some of the uncertainties in winter high-latitude Pliocene temperatures can be reduced, we suggest that data-model comparison focus on the summer; this is expected to give more meaningful and accurate results than a comparison focused on the annual mean.
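As an illustration only, the sketch below mimics the kind of seasonal data-model comparison the abstract advocates; the monthly model temperatures and proxy estimates are hypothetical values chosen to reproduce the summer agreement and the >20 °C winter mismatch described above, not values from the paper.

```python
import numpy as np

# Hypothetical monthly mean surface air temperatures (deg C) at one
# high-latitude terrestrial site from a Pliocene model simulation.
model_monthly = np.array([-30.0, -28.0, -20.0, -8.0, 2.0, 10.0,
                          14.0, 12.0, 5.0, -5.0, -18.0, -26.0])

# Illustrative proxy reconstructions for the same site (deg C).
proxy_warm_month = 15.0   # warmest-month estimate, e.g. from pollen
proxy_cold_month = -5.0   # coldest-month estimate

model_warm = model_monthly.max()
model_cold = model_monthly.min()

# Summer (warmest-month) mismatch is small; winter mismatch is large,
# mirroring the "warm winter paradox" described in the abstract.
print(f"summer mismatch: {proxy_warm_month - model_warm:+.1f} degC")
print(f"winter mismatch: {proxy_cold_month - model_cold:+.1f} degC")
```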


Author(s): A. Zamzuri, I. Hassan, A. Abdul Rahman

Abstract. A new version of the Land Administration Domain Model (LADM) has been discussed and is under further development in ISO/TC 211 on Geographic Information. One of the extensions allows the model to accommodate complex and advanced marine properties and cadastral objects. Currently, the fundamental parts of this new version (LADM Edition II) have been examined by the committee, and a few elements still need to be considered, especially for marine space georegulation. Because several researchers agree that marine cadastre can be embedded in LADM, the concept of a marine cadastre data model within the land administration context has been anticipated in many countries (e.g., Canada, Greece, Turkey, Australia, and Malaysia). Part of the research focuses on constructing and developing appropriate data models to manage marine spaces and resources most effectively. Several studies have attempted to establish a conceptual model for marine cadastre in Malaysia; however, there is still no widely accepted marine data model. Thus, this paper proposes a marine data model for Malaysia based on the international standard, LADM. The approach, by definition, can be applied to the marine environment to control and model a variety of rights, responsibilities, and restrictions. The Unified Modelling Language (UML) was used to construct the conceptual and technical models in Enterprise Architect as part of the validation process. The data model was constructed within the Malaysian marine context to meet international standards. The features of the data model were also discussed at the FIG workshop (9th LADM International Workshop 2021). The experiment on the data model also includes 3D visualization and simple queries.
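As a rough illustration of the modelling approach described above, the sketch below encodes a few LADM-style classes (names follow the ISO 19152 LA_* conventions) and attaches a right to a 3D marine spatial unit. The marine-specific subclass and all attribute names are assumptions for illustration, not the paper's actual UML model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LA_Party:
    party_id: str
    name: str

@dataclass
class LA_RRR:               # abstract right/restriction/responsibility
    rrr_id: str
    party: LA_Party
    description: str

@dataclass
class LA_Right(LA_RRR):
    pass                    # e.g. an aquaculture or fishing right

@dataclass
class LA_Restriction(LA_RRR):
    pass                    # e.g. a marine protected-area restriction

@dataclass
class MarineSpatialUnit:    # hypothetical LA_SpatialUnit subclass
    su_id: str
    depth_range_m: tuple    # 3D extent: (top, bottom) below sea level
    rrrs: List[LA_RRR] = field(default_factory=list)

# Usage: attach an aquaculture right to a 3D marine parcel.
operator = LA_Party("P-01", "Aqua Farm Sdn Bhd")
parcel = MarineSpatialUnit("MSU-001", (0.0, 25.0))
parcel.rrrs.append(LA_Right("R-01", operator, "aquaculture licence"))
```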


2022, Vol 2022, pp. 1-11
Author(s): Maowen Hou, Weiyun Wang

Sensors are an important tool for quantifying change and a key part of information acquisition systems, so ever stricter demands are placed on their performance and accuracy. In this paper, a highly sensitive fiber optic sensor for measuring temperature and refractive index is fabricated using femtosecond laser micromachining and fiber fusion splicing. A multimode fiber is first fusion-spliced to a single-mode fiber, and the multimode fiber is then perforated with a femtosecond laser. The incorporation of data models into sensors has likewise driven rapid growth in sensor development and application. Based on the design concept and technical approach of a wireless sensor network system, a general development plan for an indoor environmental monitoring system is proposed, covering the system architecture and functional definition, wireless communication protocols, and design methods for node applications. Compared with traditional electrical sensors, the fiber optic sensor offers clear advantages: immunity to electromagnetic interference, electrical insulation, corrosion resistance, low loss, small size, and high accuracy. The upper computer program of the indoor environment monitoring system was developed in Visual Studio using C# to implement the monitoring, display, and alarm functions of the monitoring network. The integration of the sensor and the data model is demonstrated in the application.
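A minimal sketch of the monitoring-and-alarm logic such an upper computer program might implement (in Python rather than the paper's C#); the thresholds and field names are hypothetical, not those of the paper's system.

```python
# Alarm logic for readings received from wireless sensor nodes.
# Acceptable ranges below are illustrative assumptions.
ALARM_LIMITS = {
    "temperature_c": (16.0, 28.0),
    "humidity_pct": (30.0, 60.0),
}

def check_reading(reading: dict) -> list:
    """Return a list of alarm messages for out-of-range values."""
    alarms = []
    for name, (low, high) in ALARM_LIMITS.items():
        value = reading.get(name)
        if value is not None and not (low <= value <= high):
            alarms.append(f"{name}={value} outside [{low}, {high}]")
    return alarms

# Usage: one reading received from a hypothetical sensor node.
reading = {"node_id": 3, "temperature_c": 31.2, "humidity_pct": 45.0}
for message in check_reading(reading):
    print("ALARM:", message)
```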


Author(s): Yongjie Zhu, Youcheng Li

For a long time there have been a large number of heterogeneous databases on the network, and their heterogeneity manifests itself in many aspects. With the development of enterprise informatization and e-government, the databases of individual departments, each independent and autonomous, form a genuinely heterogeneous database landscape across the network systems of many functional departments. This paper designs information sharing between the heterogeneous databases of such departmental network systems using an XML data model. Solving data sharing between heterogeneous databases can accelerate the integration of information systems centered on departments and businesses, form a broader and more efficient organic whole, speed up business processing, broaden business coverage, and strengthen cooperation and exchange among enterprises. In addition, sharing between heterogeneous databases avoids the waste of data resources caused by database heterogeneity and improves the utilization of data resources. Owing to the advantages of the XML data model, the system has good scalability.
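The following sketch illustrates the general technique of using XML as a neutral interchange format between heterogeneous databases; the table layouts and element names are hypothetical, not the paper's design.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Source department: its own (hypothetical) relational schema.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE customer (id INTEGER, name TEXT, city TEXT)")
src.execute("INSERT INTO customer VALUES (1, 'Acme Ltd', 'Shanghai')")

# 1. Export: relational rows -> neutral XML interchange document.
root = ET.Element("customers")
for cid, name, city in src.execute("SELECT id, name, city FROM customer"):
    rec = ET.SubElement(root, "customer", id=str(cid))
    ET.SubElement(rec, "name").text = name
    ET.SubElement(rec, "city").text = city
xml_doc = ET.tostring(root, encoding="unicode")

# 2. Import: a receiving system with a different schema parses the XML.
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE client (client_id INTEGER, client_name TEXT)")
for rec in ET.fromstring(xml_doc):
    dst.execute("INSERT INTO client VALUES (?, ?)",
                (int(rec.get("id")), rec.findtext("name")))
print(dst.execute("SELECT * FROM client").fetchall())  # [(1, 'Acme Ltd')]
```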


2022, Vol 13 (1)
Author(s): Sheeba Samuel, Birgitta König-Ries

Abstract
Background: The advancement of science and technology plays an immense role in the way scientific experiments are conducted. Understanding how experiments are performed and how results are derived has become significantly more complex with the recent explosive growth of heterogeneous research data and methods. It is therefore important that the provenance of results is tracked, described, and managed throughout the research lifecycle, from the beginning of an experiment to its end, to ensure the reproducibility of results described in publications. However, there is a lack of an interoperable representation of the end-to-end provenance of scientific experiments that interlinks the data, processing steps, and results of an experiment's computational and non-computational processes.
Results: We present the "REPRODUCE-ME" data model and ontology, which describes the end-to-end provenance of scientific experiments by extending existing standards in the Semantic Web. The ontology brings together different aspects of the provenance of scientific studies by interlinking non-computational data and steps with computational data and steps to achieve understandability and reproducibility. We explain the important classes and properties of the ontology and how they are mapped to existing ontologies like PROV-O and P-Plan. The ontology is evaluated by answering competency questions over a knowledge base of scientific experiments consisting of computational and non-computational data and steps.
Conclusion: We have designed and developed an interoperable way to represent the complete path of a scientific experiment consisting of computational and non-computational steps. We have applied and evaluated our approach on a set of scientific experiments in different subject domains such as computational science, biological imaging, and microscopy.
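A brief sketch of how one experiment step might be described by extending PROV-O and P-Plan, in the spirit of the approach above. The PROV-O and P-Plan URIs are the published ones; the REPRODUCE-ME namespace URI and all instance names are assumptions for illustration only.

```python
from rdflib import Graph, Namespace, RDF

PROV = Namespace("http://www.w3.org/ns/prov#")     # published PROV-O
PPLAN = Namespace("http://purl.org/net/p-plan#")   # published P-Plan
REPR = Namespace("https://w3id.org/reproduceme#")  # assumed URI
EX = Namespace("http://example.org/lab/")          # hypothetical data

g = Graph()
g.bind("prov", PROV)
g.bind("pplan", PPLAN)

# A non-computational step (staining a sample) preceding a
# computational one (image analysis) in the same experiment plan.
g.add((EX.staining, RDF.type, PPLAN.Step))
g.add((EX.imageAnalysis, RDF.type, PPLAN.Step))
g.add((EX.imageAnalysis, PPLAN.isPrecededBy, EX.staining))
g.add((EX.run1, RDF.type, PROV.Activity))
g.add((EX.run1, PROV.used, EX.rawImage))
g.add((EX.result, RDF.type, PROV.Entity))
g.add((EX.result, PROV.wasGeneratedBy, EX.run1))

# Competency-style query: which inputs were used to generate a result?
q = """SELECT ?input WHERE {
  ?result prov:wasGeneratedBy ?run . ?run prov:used ?input .
}"""
for row in g.query(q):
    print(row.input)   # -> http://example.org/lab/rawImage
```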

