Processing Archive Information in Digital University

2020 ◽  
Vol 35 ◽  
pp. 02003
Author(s):  
Alexander V. Baldin ◽  
Dmitriy V. Eliseev

The article discusses methods of processing and storing archive data used in the digital university. Shortcomings of these methods are identified. As a result, a fundamentally new method of processing and storing archive information in a database with a constantly changing schema is proposed. This method uses mivar technologies. A multidimensional space structure has been developed to store the archive data; this multidimensional space describes the temporal relational model. For processing the archive data, a scheme for selecting a subspace and converting it into relations is proposed. A method for transforming relational databases into a multidimensional mivar space is proposed for efficient execution of operations on temporal data with a changing structure. The transition to a multidimensional space makes it possible to describe the process of changing temporal data and their structure in a unified way. As a result, the time required to adapt the database schema and the redundancy of information storage are reduced. The results of this work are used in the human resource management database of BMSTU.
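As a rough illustration of the transformation the article describes, the sketch below converts relational tuples into points of a multidimensional space keyed by (entity, attribute, identifier, time) coordinates. The axis layout follows the abstract's temporal relational interpretation, but the function, field, and data names are hypothetical:

```python
# Minimal sketch: store relational tuples as points of a multidimensional
# space keyed by (entity, attribute, identifier, time) coordinates.

def relation_to_points(entity, rows, key, timestamp):
    """Convert a relation (a list of dicts) into space points.

    Each attribute value of each tuple becomes one point whose
    coordinates are (entity, attribute, identifier, time).
    """
    points = {}
    for row in rows:
        identifier = row[key]
        for attribute, value in row.items():
            if attribute == key:  # the key itself becomes a coordinate
                continue
            points[(entity, attribute, identifier, timestamp)] = value
    return points

employees = [
    {"id": 1, "name": "Ivanov", "dept": "HR"},
    {"id": 2, "name": "Petrov", "dept": "IT"},
]
space = relation_to_points("employee", employees, key="id", timestamp="2020-01-01")

# A later schema change (a new attribute) only adds new points --
# no ALTER TABLE and no rewriting of the stored history:
space[("employee", "grade", 1, "2020-06-01")] = "senior"
```

This is where the claimed reduction in schema-adaptation time comes from: a structural change is just another point write, not a migration.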

2019 ◽  
Vol 8 (3) ◽  
pp. 7753-7758

The article presents an adaptable data model based on multidimensional space. The main difference between a multidimensional data representation and the table representation used in relational Database Management Systems (DBMSs) is that new elements can be added at any time to the sets defining the axes of the multidimensional space; this changes the data model itself. The tabular representation of the relational model does not allow the model to be changed during the operation of an automated system. Three levels of the multidimensional data presentation space are considered: the axes of the multidimensional space, the Cartesian product of the sets of axis values, and the values of the space points. The five axes of multidimensional space defined in the article (entities, attributes, identifiers, time, modifiers) are basic for the design of an adaptable automated system. It is shown that additional axes can be used for greater granularity of the stored data. The multidimensional space structure defined in the article for an adaptable data model is a flexible set for storing a relational domain model. Two types of operations in the multidimensional information space are defined. Relations of the relational model are formed dynamically depending on conditions imposed on the coordinates of the points. Thus, an adaptable data representation model based on multidimensional space can be used to create flexible dynamic automated information systems.
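A minimal sketch of the two operation types described above, assuming the five-axis layout from the abstract (entities, attributes, identifiers, time, modifiers): writing points, and dynamically forming a relation from the points that satisfy conditions on their coordinates. The function names are illustrative, not the article's API:

```python
# In-memory stand-in for the multidimensional information space.
points = {}

def put(entity, attribute, identifier, time, modifier, value):
    """Operation 1: write a value at a point of the five-axis space."""
    points[(entity, attribute, identifier, time, modifier)] = value

def to_relation(entity, time):
    """Operation 2: form relation rows dynamically from the points
    whose coordinates satisfy the given conditions."""
    rows = {}
    for (ent, attr, ident, t, _mod), value in points.items():
        if ent == entity and t == time:
            rows.setdefault(ident, {"id": ident})[attr] = value
    return sorted(rows.values(), key=lambda r: r["id"])

put("student", "name", 7, "2019", "actual", "Lee")
put("student", "year", 7, "2019", "actual", 2)
# Adding a new element on the attribute axis at any time changes
# the model itself -- something a fixed table schema cannot do:
put("student", "email", 7, "2019", "actual", "lee@example.org")
```

Calling `to_relation("student", "2019")` then materialises an ordinary relational row on demand, which is the sense in which relations are "formed dynamically" from coordinate conditions.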


2014 ◽  
Vol 41 (6) ◽  
pp. 499 ◽  
Author(s):  
David J. Will ◽  
Karl J. Campbell ◽  
Nick D. Holmes

Context Worldwide, invasive vertebrate eradication campaigns are increasing in scale and complexity, requiring improved decision-making tools to achieve and validate success. For managers of these campaigns, gaining access to timely summaries of field data can increase cost-efficiency and the likelihood of success, particularly for successive control-event style eradications. Conventional data collection techniques can be time intensive and burdensome to process. Recent advances in digital tools can reduce the time required to collect and process field information. Through timely analysis, efficiently collected data can inform decision making for managers both tactically, such as where to prioritise search effort, and strategically, such as when to transition from the eradication phase to confirmation monitoring. Aims We highlighted the advantages of using digital data collection tools, particularly the potential for reduced project costs through a decrease in effort and the ability to increase eradication efficiency by enabling explicit data-informed decision making. Methods We designed and utilised digital data collection tools, relational databases and a suite of analyses during two different eradication campaigns to inform management decisions: a feral cat eradication utilising trapping, and a rodent eradication using bait stations. Key results By using digital data collection during a 2-year long cat eradication, we experienced an 89% reduction in data collection effort and an estimated USD42 845 reduction in total costs compared with conventional paper methods. During a 2-month rodent bait station eradication, we experienced an 84% reduction in data collection effort and an estimated USD4525 increase in total costs. Conclusions Despite high initial capital costs, digital data collection systems yield increasing economies as the duration and scale of the campaign increase.
Initial investments can be recouped by reusing equipment and software on subsequent projects, making digital data collection more cost-effective for programs contemplating multiple eradications. Implications With proper pre-planning, digital data collection systems can be integrated with quantitative models that generate timely forecasts of the effort required to remove all target animals and estimate the probability that eradication has been achieved to a desired level of confidence, thus improving decision making power and further reducing total project costs.


Author(s):  
D. J. RANDALL ◽  
H. J. HAMILTON ◽  
R. J. HILDERMAN

This paper addresses the problem of using domain generalization graphs to generalize temporal data extracted from relational databases. A domain generalization graph associated with an attribute defines a partial order which represents a set of generalization relations for the attribute. We propose formal specifications for domain generalization graphs associated with calendar (date and time) attributes. These graphs are reusable (i.e. can be used to generalize any calendar attributes), adaptable (i.e. can be extended or restricted as appropriate for particular applications), and transportable (i.e. can be used with any database containing a calendar attribute).
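The partial order of a calendar domain generalization graph can be pictured as a small set of granularities connected by generalization functions. The sketch below is an illustrative reduction, not the paper's formal specification; the node set and edge functions are example choices:

```python
import datetime

# Edges of the partial order: finer granularity -> coarser granularity,
# each edge carrying the generalization function for that step.
GENERALIZE = {
    ("day", "month"):     lambda d: (d.year, d.month),
    ("day", "weekday"):   lambda d: d.strftime("%A"),
    ("month", "quarter"): lambda ym: (ym[0], (ym[1] - 1) // 3 + 1),
    ("month", "year"):    lambda ym: ym[0],
}

def generalize(values, path):
    """Generalize attribute values along a path through the graph,
    e.g. day -> month -> year."""
    for src, dst in zip(path, path[1:]):
        values = [GENERALIZE[(src, dst)](v) for v in values]
    return values

dates = [datetime.date(1999, 2, 14), datetime.date(1999, 11, 3)]
years = generalize(dates, ["day", "month", "year"])
quarters = generalize(dates, ["day", "month", "quarter"])
```

Because the graph only assumes a calendar-typed input, the same structure is reusable across databases (transportable) and can be extended with new nodes such as `week` or `fiscal_year` (adaptable), which is the point the paper makes.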


2021 ◽  
Author(s):  
Naveen Kunnathuvalappil Hariharan

Financial data volumes are increasing, and this appears to be a long-term trend, implying that data management development will be crucial over the next few decades. Because financial data is sometimes real-time data, it is constantly generated, resulting in a massive amount of financial data produced in a short period of time. The volume, diversity, and velocity of big financial data highlight the significant limitations of traditional Data Warehouses (DWs). Their rigid relational model, high scalability costs, and sometimes inefficient performance pave the way for new methods and technologies. Most of the technologies used for background processing and storage were themselves recently the subject of early-stage research, with the Apache Foundation and Google behind the two most important initiatives. For dealing with large financial data, three techniques outperform relational databases and traditional ETL processing: NoSQL and NewSQL storage, and MapReduce processing.
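The MapReduce model mentioned above can be sketched in a few lines over made-up tick data. The map, shuffle, and reduce phases are shown sequentially here; the point of the model is that the map step and each per-key reduce step are independent and can therefore be distributed across machines:

```python
from collections import defaultdict

# Illustrative tick data; field names and values are invented.
ticks = [
    {"symbol": "AAPL", "price": 190.0, "volume": 100},
    {"symbol": "MSFT", "price": 410.0, "volume": 50},
    {"symbol": "AAPL", "price": 191.0, "volume": 200},
]

# Map phase: emit (key, value) pairs from each record independently.
mapped = [(t["symbol"], t["volume"]) for t in ticks]

# Shuffle phase: group emitted values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: aggregate each group independently (hence parallelisable).
volume_by_symbol = {key: sum(values) for key, values in groups.items()}
```

In a real deployment the same three phases would run under a framework such as Hadoop or Spark rather than in one process, but the data flow is the one shown.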


2008 ◽  
pp. 187-207 ◽  
Author(s):  
Z. M. Ma

Fuzzy set theory has been extensively applied to extend various data models, resulting in numerous contributions, mainly with respect to the popular relational model or some related form of it. To satisfy the need to model complex objects with imprecision and uncertainty, much recent research has concentrated on fuzzy semantic (conceptual) and object-oriented data models. This chapter reviews fuzzy database modeling technologies, including fuzzy conceptual data models and database models. Concerning fuzzy database models, fuzzy relational databases, fuzzy nested relational databases, and fuzzy object-oriented databases are discussed in turn.


Author(s):  
Antonio Sarasa-Cabezuelo

The appearance of the “big data” phenomenon has meant a change in storage and information processing needs. This new context is characterized by: 1) enormous amounts of information available in heterogeneous formats and types, 2) information that must be processed almost in real time, and 3) data models that evolve periodically. Relational databases have limitations in responding to these needs in an optimal way. For these reasons, companies such as Google and Amazon decided to create new database models (different from the relational model) that meet the needs raised in the context of big data without the limitations of relational databases. These new models are the origin of the so-called NonSQL databases. Currently, NonSQL databases constitute an alternative mechanism to the relational model, and their use is widely extended. The main objective of this chapter is to introduce NonSQL databases.
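One of the traits that distinguishes these databases from the relational model is schema flexibility: in a document-oriented store, records in the same collection need not share a structure. The tiny in-memory collection below is a sketch of that idea only, not any real product's API:

```python
class Collection:
    """Toy document collection: schema-free records with field matching."""

    def __init__(self):
        self._docs = []

    def insert(self, doc):
        self._docs.append(dict(doc))

    def find(self, **criteria):
        """Return documents whose fields equal all given criteria."""
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in criteria.items())]

products = Collection()
products.insert({"name": "lamp", "price": 20})
# A later document may carry fields the first one lacks --
# no schema migration is required:
products.insert({"name": "book", "price": 12, "author": "Eco"})

books = products.find(name="book")
```

A relational table would force both records into one fixed column set; here the heterogeneity described in point 1) above is absorbed by the data model itself.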


Author(s):  
Andreea Sabau

In order to represent spatio-temporal data, many conceptual models have been designed and a part of them have been implemented. This chapter describes an approach to the conceptual modeling of spatio-temporal data, called 3SST. The spatio-temporal conceptual and relational data models obtained by following the proposed phases are also presented. The 3SST data model is obtained in three steps: the construction of an entity-relationship spatio-temporal model, the specification of the domain model, and the design of a class diagram which includes the objects characteristic of a spatio-temporal application and other needed elements. The relational model of 3SST is the implementation of the conceptual 3SST data model on a relational database platform. Both models are characterized by generality in representing spatial, temporal and spatio-temporal data. Spatial objects can be represented as points or as objects with shape, and the evolution of spatio-temporal objects can be implemented as discrete or continuous in time, on time instants or time intervals. Moreover, different types of spatial, temporal, spatio-temporal and event-based queries can be performed on the represented data. Therefore, the proposed 3SST relational model can be considered the core of a spatio-temporal data model.
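As a sketch of the discrete, interval-based evolution such a relational implementation allows, the rows below associate an object's spatial value with a validity interval, and a point query recovers the object's position at a time instant. Table layout and names are hypothetical, not the 3SST schema itself:

```python
# Rows of a hypothetical trajectory relation:
# (object_id, x, y, valid_from, valid_to), half-open intervals [from, to).
trajectory = [
    ("bus7", 0.0, 0.0,  0, 10),
    ("bus7", 4.0, 3.0, 10, 25),
    ("bus7", 9.0, 3.5, 25, 40),
]

def position_at(rows, object_id, t):
    """Temporal point query: the spatial value of an object at instant t."""
    for oid, x, y, t_from, t_to in rows:
        if oid == object_id and t_from <= t < t_to:
            return (x, y)
    return None  # object has no recorded position at t
```

Continuous evolution would replace the lookup with interpolation between consecutive rows; event-based queries would filter on the interval endpoints instead of on instants.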


2009 ◽  
pp. 105-125 ◽  
Author(s):  
Z.M. Ma

Fuzzy set theory has been extensively applied to extend various data models, resulting in numerous contributions, mainly with respect to the popular relational model or some related form of it. To satisfy the need to model complex objects with imprecision and uncertainty, much recent research has concentrated on fuzzy semantic (conceptual) and object-oriented data models. This chapter reviews fuzzy database modeling technologies, including fuzzy conceptual data models and database models. Concerning fuzzy database models, fuzzy relational databases, fuzzy nested relational databases, and fuzzy object-oriented databases are discussed in turn.


2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Krystal S. Collier ◽  
Sophia Crossen ◽  
Courtney Fitzgerald ◽  
Kaitlyn Ciampaglio ◽  
Lakshmi Radhakrishnan ◽  
...  

Objective: The National Syndromic Surveillance Program (NSSP) Community of Practice (CoP) works to support syndromic surveillance by providing guidance and assistance to help resolve data issues and foster relationships between jurisdictions, stakeholders, and vendors. During this presentation, we will highlight the value of collaboration through the International Society for Disease Surveillance (ISDS) Data Quality Committee (DQC) between jurisdictional sites conducting syndromic surveillance, the Centers for Disease Control and Prevention’s (CDC) NSSP, and electronic health record (EHR) vendors when vendor-specific errors are identified, using a recent incident to illustrate and discuss how this collaboration can work to address suspected data anomalies.

Introduction: On November 20, 2017, several sites participating in the NSSP reported anomalies in their syndromic data. Upon review, it was found that between November 17-18, an EHR vendor’s syndromic product experienced an outage and errors in processing data. The ISDS DQC, NSSP, a large EHR vendor, and many of the affected sites worked together to identify the core issues, evaluate ramifications, and formulate solutions to provide to the entire NSSP CoP.

Description: On November 20, 2017, several sites participating in the NSSP reported anomalies in their syndromic data. Upon review, it was found that between November 17-18, an EHR vendor’s syndromic product experienced an outage and errors in processing data. The ISDS DQC, NSSP, a large EHR vendor, and many of the affected sites worked together to identify the core issues, evaluate ramifications, and formulate solutions to provide to the entire NSSP CoP.

How the Moderator Intends to Engage the Audience in Discussions on the Topic: Following presentation of this information, the presenters will lead a discussion on how to improve the response, provide resolution, communicate expectations, and decrease the time required to resolve issues should a similar event happen in the future. Participants from all three stakeholder groups, sites conducting syndromic surveillance, the NSSP, and vendor representatives, will be invited to share their experiences, successes, and concerns.


2002 ◽  
Vol 40 (1) ◽  
pp. 55-64
Author(s):  
Saran Akram Abd Al-Majeed

There has been a great deal of discussion about null values in relational databases. The relational model was defined in 1969, and nulls were added to it in 1979. Unfortunately, there is no generally agreed solution to the null values problem. Null is a special marker which stands for a value that is undefined or unknown, meaning that no entry has been made; a missing-value mark is not a value, is not of a data type, and cannot be treated as a value by a Database Management System (DBMS). Since distributed database users work with more than a single database, data is distributed among several data sources or sites, the data must be precise, and replication is allowed, complex problems appear, so there is a need for sound, practical, general approaches to the treatment of nulls. A distributed database system, a hotel reservation control system, is designed based on different data sources at four sites, each site representing a hotel; for greater heterogeneity, different application programming languages are used. Five practical approaches, with their rules and algorithms, are designed for the treatment of null values across the distributed database sites (1), (2), (3), (4), (5), (9).
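The core of the problem the paper describes is that a null marker breaks ordinary two-valued logic: in SQL, any comparison involving null evaluates to UNKNOWN rather than true or false. A minimal sketch, with Python's `None` standing in for the null marker and an illustrative helper for three-valued equality:

```python
UNKNOWN = "unknown"  # SQL's third truth value

def eq3(a, b):
    """Three-valued equality: any comparison involving null is UNKNOWN."""
    if a is None or b is None:
        return UNKNOWN
    return a == b

rows = [("Alice", 30), ("Bob", None), ("Carol", 30)]

# A WHERE clause keeps a row only when the predicate is strictly True.
# Bob's row drops out of BOTH "age = 30" and "age <> 30", which is the
# behaviour that makes nulls hard to treat uniformly across sites:
matches = [name for name, age in rows if eq3(age, 30) is True]
```

Each of the paper's five treatment approaches is, in effect, a policy for what a site should do with rows whose predicates come back UNKNOWN.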

