Database Systems
Recently Published Documents


TOTAL DOCUMENTS: 3293 (five years: 338)
H-INDEX: 66 (five years: 6)

2022 · Vol 12 (2) · pp. 628
Author(s): Fei Yang, Zhonghui Wang, Haowen Yan, Xiaomin Lu

Geometric similarity plays an important role in geographic information retrieval, map matching, and data updating. Many approaches have been developed to calculate the similarity between simple features, but complex group objects are common in maps and spatial database systems, and calculating similarity for a micro scene that contains different types of geographic features is difficult. In addition, few studies have paid attention to how a scene's geometric similarity changes during generalization. In this study, we developed a method for measuring the geometric similarity of micro scenes under generalization based on shape, direction, and position. We calculated shape similarity using a hybrid feature description, and we constructed a direction Voronoi diagram and a position graph to measure direction similarity and position similarity. The experiments involved similarity calculation and quality evaluation to verify the usability and effectiveness of the proposed method, and they showed that this approach can effectively measure the geometric similarity between micro scenes. Moreover, the proposed method accounts for the relationships among the geometric shape, direction, and position of micro scenes during cartographic generalization. The simplification operation leads to obvious changes in position similarity, whereas delete and merge operations lead to changes in direction and position similarity. During generalization, the river + islands scene changed mainly in shape and position, the river + lakes scene changed mainly in direction and position, and the direction similarity of the river + buildings and road + buildings scenes changed little.


2022 · Vol 2146 (1) · pp. 012030
Author(s): Juan Guo

Abstract. With the rapid development of computer technology, database systems have become an indispensable and increasingly prominent part of information systems, and society increasingly uses modern tools to program them. A database is a large, complex system that holds a huge amount of data with a defined structure and a degree of independence, and programming it requires specific technical means. In this paper, the author explains the key technologies involved in database programming.


2022 · pp. 1614-1633
Author(s): Vellingiri Jayagopal, Basser K. K.

The internet creates some 2.5 quintillion bytes of data every day, and according to statistics, 90% of the world's data was generated in the last two years alone. This data comes from many sources, such as climate information, social media sites, digital images and videos, and purchase transactions. This is big data: data that exceeds the storage and processing capacity of conventional database systems. Data in today's world is usually unstructured and qualitative in nature and can be used for various applications, such as sentiment analysis and growing a business. About 80% of the data captured today is unstructured, and all of this data is also big data.


Micromachines · 2021 · Vol 13 (1) · pp. 52
Author(s): Wenze Zhao, Yajuan Du, Mingzhe Zhang, Mingyang Liu, Kailun Jin, ...

With the advantage of faster data access than traditional disks, in-memory database systems, such as Redis and Memcached, have been widely applied in data centers and embedded systems. The performance of an in-memory database depends greatly on the access speed of memory. To meet the requirements of high bandwidth and low energy, die-stacked memory (e.g., High Bandwidth Memory (HBM)) has been developed to extend the number and width of memory channels. However, the capacity of die-stacked memory is limited due to the interposer challenge. Thus, hybrid memory systems combining traditional Dynamic Random Access Memory (DRAM) with die-stacked memory have emerged. Existing works have proposed placing and managing data on hybrid memory architectures from the hardware's point of view. This paper instead manages in-memory database data in hybrid memory from the application's point of view. We first perform a preliminary study on the hotness distribution of client requests on Redis. From the results, we observe that most requests touch a small portion of the data objects in the in-memory database. We then propose Application-oriented Data Migration (ADM) to accelerate in-memory databases on hybrid memory. We design a hotness management method and two migration policies to migrate data into or out of HBM. We take Redis under comprehensive benchmarks as a case study for the proposed method. The experimental results verify that the proposed method effectively improves performance and reduces energy consumption compared with the existing Redis database.


Author(s): Yihao Tian

Data management is an administrative mechanism that involves the acquisition, validation, storage, protection, and processing of the data needed by users, ensuring that data are accessible, reliable, and timely. Managing the protection of information assets is a challenging task, and with the emphasis on distributed systems and Internet-accessible systems, the need for efficient information security management is increasingly important. In this paper, artificial intelligence-assisted dynamic modeling (AI-DM) is used for data management in a distributed system. Distributed processing is an effective way to enhance the efficiency of database systems, so the functionality of each distributed database structure depends significantly on a proper architecture for implementing fragmentation, allocation, and replication. The proposed model is a dynamically distributed internet database architecture that enables complex decision-making on fragmentation, allocation, and replication and gives users access to the distributed database from anywhere. AI-DM provides an improved allocation and replication strategy for the initial stage of distributed database design, when no query performance information is yet accessible. The findings show that the proposed database model improves the reliability and efficiency of the enhanced system: the dynamic modeling ratio is 87.6%, the decision support ratio is 88.7%, the logistic regression ratio is 84.5%, the data reliability ratio is 82.2%, and the system ratio is 93.8%.


2021
Author(s): Alejandro Figar Gutierrez, Jorge Anibal Martinez Garbino, Valeria Burgos, Taimoore Rajah, Marcelo Risk, ...

Healthcare has become one of the most important emerging application areas of blockchain technology,[1] although the use of a cryptographic ledger within Anesthesia Information Management Systems (AIMS) remains uncertain. The need for a truly immutable anesthesia record is yet to be established, given that current AIMS database systems have reliable audit capabilities. Adoption of AIMS has followed Rogers' 1962 theory of the diffusion of innovations, and adoption was expected to reach 84% of U.S. academic anesthesiology departments between 2018 and 2020.[2] Larger anesthesiology groups with large caseloads, urban settings, and government-affiliated or academic institutions are more likely to adopt and implement AIMS solutions, because substantial financial resources and dedicated staff are required to support both implementation and maintenance. As health care dollars become scarcer, cost is the most frequently cited constraint on the adoption and implementation of AIMS.[3] We propose the use of a blockchain database for saving all incoming data from multiparametric monitors in the operating theatre, and we present a proof of concept of the use of this technology for electronic anesthesia records even in the absence of an AIMS on site. In this paper we discuss its plausibility as well as its feasibility. The electronic medical records (EMR) in AIMS might contain errors and artifacts that may (or may not) have to be dealt with, and making them immutable is a daunting prospect. The use of the blockchain for saving raw data directly from medical monitoring equipment and devices in the operating theatre has to be investigated further.


Author(s): Lucas Woltmann, Peter Volk, Michael Dinzinger, Lukas Gräf, Sebastian Strasser, ...

Abstract. For its third installment, the Data Science Challenge of the 19th symposium "Database Systems for Business, Technology and Web" (BTW) of the Gesellschaft für Informatik (GI) tackled the problem of predictive energy management in large production facilities. For the first time, this year's challenge was organized as a cooperation between Technische Universität Dresden, GlobalFoundries, and ScaDS.AI Dresden/Leipzig. The challenge's participants were given real-world production and energy data from the semiconductor manufacturer GlobalFoundries and had to solve the problem of predicting the energy consumption of production equipment. The use of real-world data gave the participants hands-on experience of the challenges of Big Data integration and analysis. After a leaderboard-based preselection round, the accepted participants presented their approaches to an expert jury and audience in a hybrid format. In this article, we give an overview of the main points of the Data Science Challenge, such as its organization and problem description. Additionally, the winning team presents its solution.


Author(s): Goncalo Carvalho, Jorge Bernardino, Vasco Pereira, Bruno Cabral

2021
Author(s): Mohamad Mustaqim Mokhlis, Nurdini Alya Hazali, Muhammad Firdaus Hassan, Mohd Hafiz Hashim, Afzan Nizam Jamaludin, ...

Abstract. In this paper we present a streamlined process for well-test validation that integrates data between different database systems, incorporates well models, and leverages real-time data to present a full scope of well-test analysis and enhance the assessment of well-test performance. The workflow demonstrates an intuitive and effective way to analyze and validate a production well test via an interactive digital visualization. This approach has elevated the quality and integrity of the well-test data and improved process cycle efficiency, helping field surveillance engineers keep track of well-test compliance guidelines through efficient well-test tracking in the digital interface. The workflow involves five primary steps, all conducted via a digital platform:

1. Well Test Compliance: planning and executing the well test
2. Data Management and Integration
3. Well Test Analysis and Validation: verifying the well test through historical trending, stability period checks, and well model analysis
4. Model Validation: correcting the well test and calibrating the well model before finalizing the validity of the well test
5. Well Test Re-testing: submitting the rejected well test for retesting and, as a final step, integrating with the corporate database system for production allocation

This business process improves the quality of the well test, which in turn raises petroleum engineers' confidence in analyzing well performance and delivering accurate well-production forecasts. A well-test validation workflow in a digital ecosystem streamlines the flow of data and system integration, as well as the way engineers assess and validate well-test data, minimizing errors and increasing overall work efficiency.

