data restructuring
Recently Published Documents


TOTAL DOCUMENTS: 9 (five years: 2)

H-INDEX: 4 (five years: 0)

Author(s): С.І. Хмелевський, І.М. Тупиця, Касім Аббуд Махді, О.П. Мусієнко, М.В. Пархоменко, ...

The existing data restructuring methods implemented in modern algorithms for coding the data of information resources are investigated. The problematic aspects of external data restructuring methods are analyzed from the standpoint of ensuring an appropriate level of reliability. Chief among them is the possibility of losing key information during reconstruction of the original message, which can lead the relevant security-sector bodies to make erroneous or untimely decisions when responding to crisis situations. Requirements are formulated for information resources used by security-sector bodies under the need for a prompt response to crisis situations. To increase the efficiency of coding the data of an information resource toward a compact representation while maintaining an appropriate level of quality, a method of external restructuring is developed that additionally eliminates the psycho-visual, statistical, and structural redundancy of the message. External restructuring of data means forming a new message alphabet (for an image, a new color palette). A new approach to restructuring the data of information resources is proposed: the significance of individual elements of the initial message is determined by a quantitative indicator, and the cardinality (power) of the message alphabet is then adjusted accordingly, creating more favorable conditions for subsequent coding.
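
A minimal sketch of one possible reading of this idea, assuming plain frequency as the significance indicator and a fixed target alphabet size; the abstract does not specify the authors' actual measure or adjustment rule.

```python
# Illustrative sketch only: external restructuring read as palette reduction
# driven by a per-element significance score. The significance measure
# (frequency) and the alphabet size are assumptions, not the authors' method.
import numpy as np

def restructure_palette(image, alphabet_size=16):
    """Rebuild the message alphabet (color palette) of an 8-bit RGB image.

    image: uint8 array of shape (H, W, 3).
    Returns the remapped image and the new, smaller palette.
    """
    pixels = image.reshape(-1, 3)

    # Quantitative significance indicator: how often each color occurs.
    colors, counts = np.unique(pixels, axis=0, return_counts=True)

    # Adjust the "power" (cardinality) of the alphabet: keep only the most
    # significant colors.
    keep = colors[np.argsort(counts)[::-1][:alphabet_size]]

    # Remap every pixel to the nearest retained palette entry.
    dists = np.linalg.norm(pixels[:, None, :].astype(np.int32)
                           - keep[None, :, :].astype(np.int32), axis=2)
    remapped = keep[np.argmin(dists, axis=1)]

    return remapped.reshape(image.shape).astype(np.uint8), keep

# Example: a random "image" whose palette is reduced to 16 colors.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
restructured, palette = restructure_palette(img, alphabet_size=16)
```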


Author(s): Sayuri R. Yamashita, Lucia A. Noblat, Ivan C. Machado

Introduction: The increasing volume of pharmacovigilance data from Adverse Drug Reaction (ADR) reports indicates the need for a database (DB) to manage its electronic records. Objective: The objective of the present study is to build a prototype computing environment that permits data recording, storage, and retrieval, aimed at generating information and creating an effective database within the Pharmacovigilance Unit of the Professor Edgard Santos Teaching Hospital in Salvador, Bahia, Brazil. Methods: This descriptive study was carried out by pharmacists and systems development professionals, and its object of study was the pre-existing electronic spreadsheets used to store data on adverse drug reactions since 2000. The work consisted of three principal steps: normalization of the data, establishing the relationships between the data collected, and database modeling with implementation of the information system. Results: This restructuring allows a database to be consolidated quickly and consistently, with reliable data duly completed and analyzed. Conclusion: The HUPES Pharmacovigilance Information System (SIFAVI) was thus modeled; it integrates into a web application an easy mechanism for storing and retrieving the data held in the database. It also permits the data on adverse drug reactions to be categorized and cross-checked, enabling more precise inferences, thereby making the practice simpler for users and improving the culture of notifying and validating adverse drug reactions.
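
A hypothetical sketch of the kind of normalization the three steps describe, with flat spreadsheet rows split into related tables; the table and column names are illustrative assumptions, not the actual SIFAVI schema.

```python
# Assumed, simplified relational model for ADR records; not SIFAVI's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    birth_date TEXT
);
CREATE TABLE drug (
    drug_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL UNIQUE
);
-- One row per notified reaction, linked to the patient and the suspect drug.
CREATE TABLE adr_report (
    report_id   INTEGER PRIMARY KEY,
    patient_id  INTEGER NOT NULL REFERENCES patient(patient_id),
    drug_id     INTEGER NOT NULL REFERENCES drug(drug_id),
    reaction    TEXT NOT NULL,
    severity    TEXT,
    report_date TEXT NOT NULL
);
""")

# Cross-checking the stored reports, e.g. counting reactions per drug.
rows = conn.execute("""
    SELECT d.name, COUNT(*) AS reports
    FROM adr_report r JOIN drug d ON d.drug_id = r.drug_id
    GROUP BY d.name ORDER BY reports DESC
""").fetchall()
```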


Author(s): Sidharth Kumar, Venkatram Vishwanath, Philip Carns, Joshua A. Levine, Robert Latham, ...

2000, Vol. 56 (3), pp. 250-278
Author(s): Kalervo Järvelin, Peter Ingwersen, Timo Niemi

This article presents a novel user-oriented interface for generalised informetric analysis and demonstrates how informetric calculations can easily and declaratively be specified through advanced data modelling techniques. The interface is declarative and at a high level. Therefore it is easy to use, flexible and extensible. It enables end users to perform basic informetric ad hoc calculations easily and often with much less effort than in contemporary online retrieval systems. It also provides several fruitful generalisations of typical informetric measurements like impact factors. These are based on substituting traditional foci of analysis, for instance journals, by other object types, such as authors, organisations or countries. In the interface, bibliographic data are modelled as complex objects (non-first normal form relations) and terminological and citation networks involving transitive relationships are modelled as binary relations for deductive processing. The interface is flexible, because it makes it easy to switch focus between various object types for informetric calculations, e.g. from authors to institutions. Moreover, it is demonstrated that all informetric data can easily be broken down by criteria that foster advanced analysis, e.g. by years or content-bearing attributes. Such modelling allows flexible data aggregation along many dimensions. These salient features emerge from the query interface's general data restructuring and aggregation capabilities combined with transitive processing capabilities. The features are illustrated by means of sample queries and results in the article.
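
A minimal sketch of the underlying idea of switching the focus of an informetric calculation between object types and breaking the result down by year; the record fields and toy data are illustrative assumptions, not the article's actual data model or query language.

```python
# Assumed toy bibliographic records as nested ("complex") objects.
from collections import defaultdict

records = [
    {"title": "A", "year": 1998, "journal": "JDoc", "authors": ["Järvelin"], "cited_by": 12},
    {"title": "B", "year": 1998, "journal": "JDoc", "authors": ["Ingwersen", "Niemi"], "cited_by": 30},
    {"title": "C", "year": 1999, "journal": "JASIS", "authors": ["Niemi"], "cited_by": 7},
]

def citations_per_item(recs, focus, by_year=False):
    """Impact-factor-like ratio (citations / publications) grouped by `focus`.

    `focus` names a record field; list-valued fields (e.g. "authors") credit
    every listed object, which is how the focus of analysis can be switched
    from journals to authors, organisations or countries.
    """
    cites, pubs = defaultdict(int), defaultdict(int)
    for r in recs:
        values = r[focus] if isinstance(r[focus], list) else [r[focus]]
        for v in values:
            key = (v, r["year"]) if by_year else v
            cites[key] += r["cited_by"]
            pubs[key] += 1
    return {k: cites[k] / pubs[k] for k in cites}

print(citations_per_item(records, focus="journal"))                # per journal
print(citations_per_item(records, focus="authors", by_year=True))  # per author, per year
```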

