Normalizing Multimedia Databases

Author(s):  
Shi Kuo Chang ◽  
Vincenzo Deufemia ◽  
Giuseppe Polese

Multimedia databases have been used in many application fields. As opposed to traditional alphanumeric databases, they need enhanced data models and DBMSs to enable the modeling and management of complex data types. After an initial period of anarchy, multimedia DBMSs (MMDBMSs) have been classified according to standard criteria, such as the supported data model, the indexing techniques supporting content-based retrieval, the query language, the support for distributed multimedia information management, and the flexibility of their architecture (Narasimhalu, 1996).

Author(s):  
A. J. Jara ◽  
Y. Bocchi ◽  
D. Fernandez ◽  
G. Molina ◽  
A. Gomez

Smart Cities require context-aware and semantically enriched descriptions to support scalable, cross-domain development of smart applications. For example, general-purpose sensors such as crowd monitoring (counting people in an area) and environmental monitoring (pollution, air quality, temperature, humidity, noise) can nowadays feed multiple solutions with different objectives. For that reason, a data model that offers advanced capabilities for the description of context is required. This paper presents an overview of the available technologies for this purpose and of how the issue is being addressed by the Open and Agile Smart Cities principles and the FIWARE platform through the data models defined by the ETSI ISG on Context Information Management (ETSI CIM).
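To make the idea of a context-enriched description concrete, the following is a minimal sketch of an NGSI-LD-style context entity of the kind used by FIWARE and the ETSI CIM data models. The URN, attribute names, and values are invented for illustration; real deployments follow the ETSI NGSI-LD specification.

```python
# Illustrative NGSI-LD-style context entity for an air-quality sensor.
# The id, attribute names, and values are hypothetical.
entity = {
    "id": "urn:ngsi-ld:AirQualityObserved:example-001",
    "type": "AirQualityObserved",
    "temperature": {"type": "Property", "value": 21.7, "unitCode": "CEL"},
    "noise": {"type": "Property", "value": 54.2, "unitCode": "2N"},
    "location": {
        "type": "GeoProperty",
        "value": {"type": "Point", "coordinates": [7.36, 46.23]},
    },
}

def property_value(entity, name):
    """Return the plain value of a Property attribute, or None if absent
    or of another attribute kind (e.g. GeoProperty)."""
    attr = entity.get(name)
    if isinstance(attr, dict) and attr.get("type") == "Property":
        return attr["value"]
    return None
```

Because every attribute is self-describing (type, value, unit), the same entity can serve applications with different objectives, which is the cross-domain reuse the paper argues for.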


Author(s):  
Shi Kuo Chang ◽  
Vincenzo Deufemia ◽  
Giuseppe Polese

In this chapter we present normal forms for the design of multimedia database schemes with reduced manipulation anomalies. To this aim we first discuss how to describe the semantics of multimedia attributes based upon the concept of generalized icons, already used in the modeling of multimedia languages. Then, we introduce new extended dependencies involving different types of multimedia data. Such dependencies are based on domain specific similarity measures that are used to detect semantic relationships between complex data types. Based upon these new dependencies, we have defined five normal forms for multimedia databases, some focusing on the level of segmentation of multimedia attributes, others on the level of fragmentation of tables.
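The extended dependencies described above replace exact equality with a domain-specific similarity measure. The following is a hedged sketch of how such a similarity-based dependency X → Y might be checked over a table; the chapter's actual definitions are richer, and the measures and thresholds here are placeholders.

```python
# Sketch: a similarity-based dependency x ~> y holds if any two rows
# that agree on attribute x (up to a similarity threshold) also agree
# on attribute y (up to a similarity threshold).
def similar(a, b, measure, threshold=0.9):
    """True if a domain-specific similarity measure exceeds the threshold."""
    return measure(a, b) >= threshold

def dependency_holds(rows, x, y, measure_x, measure_y, tx=0.9, ty=0.9):
    for r in rows:
        for s in rows:
            if similar(r[x], s[x], measure_x, tx) and \
               not similar(r[y], s[y], measure_y, ty):
                return False
    return True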


Author(s):  
Ying Deng ◽  
Peter Revesz

Spatial and topological data models are increasingly important in business applications such as urban development planning, transportation and traffic control, decision support in agriculture, pollution and environment analysis, and fire and flood prevention, all of which require handling spatial and topological data more efficiently and more effectively than older models, such as the relational data model, allow. In this survey we compare several alternative spatial and topological data models: the Spaghetti Data Model, the Vague Region Data Model, the Topological Data Model, Worboys’ Spatiotemporal Data Model and the Constraint Data Model. We first describe how spatial and/or topological data are represented and give examples for each data model. We also illustrate by examples the use of an appropriate query language for each data model.
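To illustrate the kind of representational contrast the survey draws, the sketch below shows one region under simplifying assumptions in two of the surveyed models: the Spaghetti Data Model stores a region as a plain vertex list, while the Constraint Data Model describes the same region as a conjunction of linear inequalities over x and y. The encoding details are ours, not the survey's.

```python
# The same rectangular region in two representations.

# Spaghetti Data Model: geometry as a raw coordinate list (no topology).
spaghetti_region = [(0, 0), (4, 0), (4, 3), (0, 3)]

# Constraint Data Model: the region 0 <= x <= 4 and 0 <= y <= 3 as a
# conjunction of linear constraints.
constraint_region = [
    lambda x, y: x >= 0,
    lambda x, y: x <= 4,
    lambda x, y: y >= 0,
    lambda x, y: y <= 3,
]

def contains(constraints, x, y):
    """In the constraint model, a point-in-region query is just
    constraint evaluation."""
    return all(c(x, y) for c in constraints)
```

The constraint form also supports infinite regions (e.g. a half-plane) that a finite vertex list cannot represent, which is one reason the models admit different query languages.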


2012 ◽  
Vol 03 (03) ◽  
pp. 276-289 ◽  
Author(s):  
J. Kenneweg ◽  
F. Fritz ◽  
P. Bruland ◽  
D. Doods ◽  
B. Trinczek ◽  
...  

Summary
Background: Semantic interoperability between routine healthcare and clinical research is an unsolved issue, as information systems in the healthcare domain still use proprietary and site-specific data models. However, information exchange and data harmonization are essential for physicians and scientists if they want to collect and analyze data from different hospitals in order to build up registries and perform multicenter clinical trials. Consequently, there is a need for a standardized metadata exchange based on common data models. Currently this is mainly done by informatics experts instead of medical experts.
Objectives: We propose to enable physicians to exchange, rate, comment and discuss their own medical data models in a collaborative web-based repository of medical forms in a standardized format.
Methods: Based on a comprehensive requirement analysis, a web-based portal for medical data models was specified. In this context, a data model is the technical specification (attributes, data types, value lists) of a medical form without any layout information. The CDISC Operational Data Model (ODM) was chosen as the appropriate format for the standardized representation of data models. The system was implemented with Ruby on Rails and applies Web 2.0 technologies to provide a community-based solution. Forms from both routine care and clinical research source systems were converted into ODM format and uploaded into the portal.
Results: A portal for medical data models based on ODM files was implemented (http://www.medical-data-models.org). Physicians are able to upload, comment, rate and download medical data models. More than 250 forms with approximately 8000 items are provided in different views (overview and detailed presentation) and in multiple languages. For instance, the portal contains forms from clinical and research information systems.
Conclusion: The portal provides a system-independent repository for multilingual data models in ODM format which can be used by physicians. It serves as a platform for discussion and enables the exchange of multilingual medical data models in a standardized way.
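The abstract's notion of a data model as "attributes, data types, value lists" without layout can be illustrated with a toy extraction over a heavily simplified, namespace-free ODM-like fragment. Real CDISC ODM files are namespaced and far richer; the element and attribute names below mirror ODM conventions but the fragment itself is invented.

```python
# Extracting item names and data types from a simplified ODM-like
# fragment: the "data model" is exactly this metadata, with no layout.
import xml.etree.ElementTree as ET

odm_fragment = """
<ODM>
  <Study OID="S.EXAMPLE">
    <MetaDataVersion OID="MDV.1">
      <ItemDef OID="I.HEIGHT" Name="Body height" DataType="float"/>
      <ItemDef OID="I.SMOKER" Name="Smoking status" DataType="boolean"/>
    </MetaDataVersion>
  </Study>
</ODM>
"""

def item_types(odm_xml):
    """Map each item's name to its declared data type."""
    root = ET.fromstring(odm_xml)
    return {i.get("Name"): i.get("DataType") for i in root.iter("ItemDef")}
```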


Author(s):  
Antonio Badia

The relational data model is the dominant paradigm in the commercial database market today, and it has been for several years. However, there have been challenges to the model over the years, and they have influenced its evolution and that of database technology. The object-oriented revolution that started in programming languages arrived in the database area in the form of a brand new data model. The relational model managed not only to survive the newcomer but also to remain a dominant force, transformed into the object-relational model (also called extended relational, or universal) and relegating object-oriented databases to a niche product. Although this market has many nontechnical aspects, there are certainly important technical differences among the mentioned data models. In this article I describe the basic components of the relational, object-oriented, and object-relational data models. I do not, however, discuss query languages, implementation, or system issues. A basic comparison is given and then future trends are discussed.


2020 ◽  
Vol 11 (01) ◽  
pp. 190-199
Author(s):  
Stefanie Schild ◽  
Julian Gruendner ◽  
Christian Gulden ◽  
Hans-Ulrich Prokosch ◽  
Michael St Pierre ◽  
...  

Abstract
Objective: The aim of this study is to define data model requirements supporting the development of a digital cognitive aid (CA) for intraoperative crisis management in anesthesia, including medical emergency text modules (text elements) and branches or loops within emergency instructions (control structures), as well as their properties, data types, and value ranges.
Methods: The analysis process comprised three steps: reviewing the structure of paper-based CAs to identify common text elements and control structures, identifying requirements derived from the content, design, and purpose of a digital CA, and validating the requirements by loading exemplary emergency checklist data into the resulting prototype data model.
Results: The analysis of paper-based CAs identified 19 general text elements and two control structures. Aggregating these elements and analyzing the content, design, and purpose of a digital CA revealed 20 relevant data model requirements. These included checklist tags to enable different search options, structured checklist action steps (items) in groups and subgroups, and additional information on each item. Checklist and Item were identified as the two main classes of the prototype data model. A data object built according to this model was successfully integrated into a digital CA prototype.
Conclusion: To enable consistent design and interactivity with the content, the presentation of critical medical information in a digital CA for crisis management requires a uniform structure. So far it has not been investigated which requirements a data model must meet for this purpose. The results of this study define the requirements and structure that enable the presentation of critical medical information. Further research is needed to develop a comprehensive data model for a digital CA for crisis management in anesthesia, including supplementing the requirements with results from simulation studies and feasibility analyses of existing data models. This model may also be a useful template for developing data models for CAs in other medical domains.
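The two main classes the study identifies, Checklist and Item, can be sketched as follows. Only the features named in the abstract (tags for search, action steps grouped into groups and subgroups, additional information per item) are taken from the source; the field names and the search helper are illustrative assumptions.

```python
# Minimal sketch of the prototype data model's two main classes.
from dataclasses import dataclass, field

@dataclass
class Item:
    text: str        # one structured checklist action step
    group: str = ""  # group or subgroup the step belongs to
    info: str = ""   # additional information on the item

@dataclass
class Checklist:
    title: str
    tags: list = field(default_factory=list)   # enable different search options
    items: list = field(default_factory=list)

def search(checklists, tag):
    """Tag-based lookup across checklists, one of the required search options."""
    return [c for c in checklists if tag in c.tags]
```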


2021 ◽  
pp. 1-25
Author(s):  
Yu-Chin Hsu ◽  
Ji-Liang Shiu

Under a Mundlak-type correlated random effect (CRE) specification, we first show that the average likelihood of a parametric nonlinear panel data model is the convolution of the conditional distribution of the model and the distribution of the unobserved heterogeneity. Hence, the distribution of the unobserved heterogeneity can be recovered by means of a Fourier transformation without imposing a distributional assumption on the CRE specification. We subsequently construct a semiparametric family of average likelihood functions of observables by combining the conditional distribution of the model and the recovered distribution of the unobserved heterogeneity, and show that the parameters in the nonlinear panel data model and in the CRE specification are identifiable. Based on the identification result, we propose a sieve maximum likelihood estimator. Compared with the conventional parametric CRE approaches, the advantage of our method is that it is not subject to misspecification on the distribution of the CRE. Furthermore, we show that the average partial effects are identifiable and extend our results to dynamic nonlinear panel data models.
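The deconvolution step described above can be written schematically as follows; the notation here is ours, not the paper's, and the argument is only a sketch of the idea. If the average likelihood $h$ is the convolution of the model's conditional density $f$ with the heterogeneity density $g$,

```latex
h(y) = (f * g)(y) = \int f(y - a)\, g(a)\, \mathrm{d}a ,
```

then the Fourier transform turns convolution into multiplication, $\hat{h}(t) = \hat{f}(t)\,\hat{g}(t)$, so that, provided $\hat{f}$ is nonvanishing, the heterogeneity density is recovered by inversion without any distributional assumption on $g$:

```latex
g(a) = \frac{1}{2\pi} \int e^{-\mathrm{i} t a}\, \frac{\hat{h}(t)}{\hat{f}(t)}\, \mathrm{d}t .
```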


2021 ◽  
Author(s):  
Matthias Held ◽  
Grit Laudel ◽  
Jochen Gläser

Abstract
In this paper we utilize an opportunity to construct ground truths for topics in the field of atomic, molecular and optical physics. Our research questions in this paper focus on (i) how to construct a ground truth for topics and (ii) the suitability of common algorithms applied to bibliometric networks to reconstruct these topics. We use the ground truths to test two data models (direct citation and bibliographic coupling) with two algorithms (the Leiden algorithm and the Infomap algorithm). Our results are discomforting: none of the four combinations leads to a consistent reconstruction of the ground truths. No combination of data model and algorithm simultaneously reconstructs all micro-level topics at any resolution level. Meso-level topics are not reconstructed at all. This suggests (a) that we are currently unable to predict which combination of data model, algorithm and parameter setting will adequately reconstruct which (types of) topics, and (b) that a combination of several data models, algorithms and parameter settings appears to be necessary to reconstruct all or most topics in a set of papers.
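The two data models tested above differ only in how a network is built from citation data: direct citation links a paper to each work it cites, while bibliographic coupling links two papers by the number of references they share. A toy sketch, with an invented citation dictionary:

```python
# Building the two bibliometric network models from raw citation data.
from itertools import combinations

# Toy input: paper -> set of cited references (invented).
cites = {
    "p1": {"r1", "r2", "r3"},
    "p2": {"r2", "r3"},
    "p3": {"r4"},
}

def direct_citation_edges(cites):
    """Direct citation model: an edge from each paper to each cited work."""
    return {(p, r) for p, refs in cites.items() for r in refs}

def bibliographic_coupling(cites):
    """Bibliographic coupling model: edge weight between two papers is
    the number of references they have in common (zero-weight pairs omitted)."""
    edges = {}
    for u, v in combinations(sorted(cites), 2):
        shared = len(cites[u] & cites[v])
        if shared:
            edges[(u, v)] = shared
    return edges
```

A clustering algorithm such as Leiden or Infomap is then run on whichever of these networks is chosen, which is why the paper treats the data model and the algorithm as independent choices.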


2019 ◽  
Vol 85 ◽  
pp. 07020
Author(s):  
Codrina Maria Ilie ◽  
Radu Constantin Gogu

The purpose of this paper is to present the state of the art of groundwater geospatial information management, highlighting the relevant data model characteristics and the technical implementation of the European Directive 2007/2/EC, also known as the INSPIRE Directive. The maturity of groundwater geodata management systems is of crucial importance for any kind of activity, be it a research project or an operational service for monitoring, protection or exploitation activities. An ineffective and inadequate geodata management system can significantly increase costs or even overthrow the entire activity ([1-3]). Furthermore, following technological advancement and the extended scientific and operational interdisciplinary connectivity at national and international scale, interoperability characteristics are becoming increasingly important in the development of groundwater geospatial information management. From paper recordings to digital spreadsheets, from relational databases to standardized data models, the manner in which groundwater data are gathered, stored, processed and visualized has changed significantly over time. Aside from the clear technical progress, the designs that capture the natural connections and dependencies among groundwater features and phenomena have also evolved. The second part of our paper addresses the variations that occurred when outlining the different groundwater geospatial information management models, differences that depict the complexity of hydrogeological data.

