A data model for enhanced data comparability across multiple organizations

2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Patrick Obilikwu ◽  
Emeka Ogbuju

Abstract Organizations may be related in terms of similar operational procedures, management, and supervisory agencies coordinating their operations. Supervisory agencies may be governmental or non-governmental but, in all cases, they perform oversight functions over the activities of the organizations under their control. Multiple organizations that are related through the oversight functions of their supervisory agencies may nevertheless differ significantly in their geographical locations, aims, and objectives. To harmonize these differences so that comparative analysis is meaningful, data about the operations of multiple organizations under one control or management can be cultivated in a uniform format. In this format, data is easily harvested, and the ease with which it can be used for cross-population analysis, referred to as data comparability, is enhanced. The current practice, whereby organizations under one control maintain their data in independent databases specific to an enterprise application, greatly reduces data comparability and makes cross-population analysis a herculean task. In this paper, the collocation data model is formulated as consisting of big data technologies beyond data mining techniques and is used to reduce the heterogeneity inherent in databases maintained independently across multiple organizations. The collocation data model is thus presented as capable of enhancing data comparability across multiple organizations. The model was used to cultivate the assessment scores of students in a number of schools over a period of time and to rank the schools. The model permits data comparability across several geographical scales, among them national, regional, and global scales, where harvested data form the basis for generating analytics that yield insight, hindsight, and foresight about organizational problems and strategies.
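The cross-population analysis the abstract describes can be illustrated with a small sketch. Field names and scores below are hypothetical, not taken from the paper; the point is that once every organization contributes records in one uniform format, a single ranking query serves all of them.

```python
# Minimal sketch (hypothetical field names and values) of collocating
# assessment scores from several schools in one uniform record format
# and ranking the schools by mean score.
from statistics import mean

# Uniform record format: every organization contributes rows with the
# same fields, so cross-population analysis needs no per-school schema.
records = [
    {"school": "A", "student": "s1", "score": 78},
    {"school": "A", "student": "s2", "score": 64},
    {"school": "B", "student": "s3", "score": 91},
    {"school": "B", "student": "s4", "score": 85},
]

def rank_schools(rows):
    by_school = {}
    for row in rows:
        by_school.setdefault(row["school"], []).append(row["score"])
    # Higher mean score ranks first.
    return sorted(by_school, key=lambda s: mean(by_school[s]), reverse=True)

print(rank_schools(records))  # ['B', 'A']
```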

2020 ◽  
Vol 8 (1) ◽  
Author(s):  
Kevin P. Seitz ◽  
Ellen S. Caldwell ◽  
Catherine L. Hough

Abstract Background Acute respiratory distress syndrome (ARDS) and volume overload are associated with increased hospital mortality. Evidence supports conservative fluid management in ARDS, but whether current practice reflects the implementation of that evidence has not been described. This study reports the variability in contemporary fluid management for ICU patients with ARDS. We compared routine care to trial protocols and analyzed whether more conservative management with diuretic medications in contemporary usual care is associated with outcomes. Methods We performed a retrospective cohort study in nine ICUs at two academic hospitals during 2016 and 2017. We included 234 adult patients with ARDS who remained in an ICU at least 3 days after meeting moderate-severe ARDS criteria (PaO2:FIO2 ≤ 150). The primary exposure was any diuretic use 48 to 72 h after meeting ARDS criteria. The primary outcome was hospital mortality. Unadjusted statistical analyses and multivariable logistic regression were used. Results In the 48–72 h after meeting ARDS criteria, 116 patients (50%) received a diuretic. In-hospital mortality was lower in the group that received diuretics than in the group that did not (14% vs 25%; p = 0.025). At ARDS onset, both groups had similar Sequential Organ Failure Assessment scores and ICU fluid balances. During the first 48 h after ARDS, the diuretic group received less crystalloid fluid than the no-diuretic group (median [interquartile range]: 1.2 L [0.2–2.8] vs 2.4 L [1.2–5.0]; p < 0.001), but both groups received more fluid from medications and nutrition than from crystalloid. At 48 h, the prevalence of volume overload (ICU fluid balance > 10% of body weight) was 16% and 25% in the two groups, respectively (p = 0.09). During 48–72 h after ARDS, the overall prevalence of shock was 44% and similar across both groups. Central venous pressure was recorded in only 18% of patients. Adjusting for confounders, early diuretic use was independently associated with lower hospital mortality (AOR 0.46, 95% CI [0.22, 0.96]). Conclusions In this sample of ARDS patients, volume overload was common, and early diuretic use was independently associated with lower hospital mortality. These findings support the importance of fluid management in ARDS and suggest opportunities for further study and implementation of conservative fluid strategies into usual care.
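As a back-of-envelope check on the reported effect, an unadjusted odds ratio can be computed directly from the two mortality proportions in the abstract (14% with diuretics vs 25% without). This is illustrative only; the study's AOR of 0.46 is adjusted for confounders via multivariable logistic regression, which this sketch does not reproduce.

```python
# Unadjusted odds ratio from two event proportions (illustrative only;
# the study's reported AOR 0.46 is a confounder-adjusted estimate).
def odds_ratio(p_exposed, p_unexposed):
    """Odds ratio comparing an exposed to an unexposed group."""
    odds_exposed = p_exposed / (1 - p_exposed)
    odds_unexposed = p_unexposed / (1 - p_unexposed)
    return odds_exposed / odds_unexposed

or_unadjusted = odds_ratio(0.14, 0.25)
print(round(or_unadjusted, 2))  # 0.49
```

The unadjusted value (about 0.49) lands close to the adjusted 0.46, consistent with the abstract's note that the groups were similar at baseline.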


2011 ◽  
Vol 340 ◽  
pp. 109-115
Author(s):  
Jian Guo Yu ◽  
Mei Lin Feng ◽  
Peng Peng Huang ◽  
De Chang Xu

For the development of a Manufacturing Execution System (MES) for mould making, the difficulties in manufacturing execution management are identified by combining the mould making process with the current management regulations of mould making enterprises. The goal of the system is analyzed and its main functions are designed. The system architecture is established on a WEB platform in B/S mode, the physical data model is designed using PowerDesigner, and implementation methods for some critical functions of the manufacturing execution system are developed based on JSP. Practical enterprise application has shown that the system greatly enhances the manufacturing execution management level of mould making enterprises.


2014 ◽  
Vol 513-517 ◽  
pp. 1294-1298 ◽  
Author(s):  
Si Si Shen ◽  
Ai Xia Ding

Exchanging and sharing information are basic requirements of the Digital Campus. To address the current problems of information sharing and integration, the content and framework of a universal data interchange platform are introduced in terms of the categories of processes and the layers of information exchange, and a data interchange model is developed to elaborate the data exchange between different departments on campus. Four key technologies, namely the XML data model, XML hybrid data storage, data standard construction, and the data import and export module, are presented so as to define the implementation and exchange paths. The current practice of implementing the exchange standards and directions for future study are also discussed.
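The import/export step of such a platform can be sketched in a few lines: departmental records are serialized to a shared XML format and parsed back on the receiving side. Element and field names here are hypothetical, not taken from the paper.

```python
# Illustrative XML import/export sketch (hypothetical element names):
# records are exported to a shared XML format and re-imported losslessly,
# the basic contract of a campus data interchange module.
import xml.etree.ElementTree as ET

def export_students(students):
    root = ET.Element("students")
    for s in students:
        el = ET.SubElement(root, "student", id=s["id"])
        ET.SubElement(el, "name").text = s["name"]
    return ET.tostring(root, encoding="unicode")

def import_students(xml_text):
    root = ET.fromstring(xml_text)
    return [{"id": el.get("id"), "name": el.findtext("name")}
            for el in root.findall("student")]

data = [{"id": "2021001", "name": "Li Wei"}]
# Round-trip: export then import yields the original records.
assert import_students(export_students(data)) == data
```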


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
T. Hassan ◽  
M. Al-Alawi ◽  
S. H. Chotirmall ◽  
N. G. McElvaney

Pleural fluid analysis yields important diagnostic information in pleural effusions in combination with clinical history, examination, and radiology. For more than 30 years, the initial and most pragmatic step in this process has been to determine whether the fluid is a transudate or an exudate. Light's criteria remain the most robust method for the transudate-exudate classification, which dictates further investigation and management. Recent studies have led to the evaluation and implementation of a number of additional fluid analyses that may improve the diagnostic utility of this method. This paper discusses the current practice and future direction of pleural fluid analysis in determining the aetiology of a pleural effusion. While this analysis has been performed for decades, a number of other pleural characteristics are becoming available, suggesting that this diagnostic tool is indeed a work in progress.
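Light's criteria are simple enough to state as a function: an effusion is classified as an exudate if any one of the three standard ratio or threshold tests is met. The cut-offs below are the conventional published values; units are assumed consistent between the pleural fluid and serum measurements.

```python
# Light's criteria: an effusion is an exudate if ANY criterion is met.
# Thresholds are the standard published cut-offs.
def is_exudate(pf_protein, serum_protein, pf_ldh, serum_ldh,
               ldh_upper_normal):
    return (
        pf_protein / serum_protein > 0.5          # protein ratio > 0.5
        or pf_ldh / serum_ldh > 0.6               # LDH ratio > 0.6
        or pf_ldh > (2 / 3) * ldh_upper_normal    # LDH > 2/3 upper normal
    )

# Example values (hypothetical): all three tests negative -> transudate.
print(is_exudate(2.0, 7.0, 100, 200, 222))  # False
# High protein ratio alone is sufficient -> exudate.
print(is_exudate(4.0, 6.0, 100, 200, 222))  # True
```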


PLoS ONE ◽  
2020 ◽  
Vol 15 (11) ◽  
pp. e0242453
Author(s):  
Vanessa Cedeno-Mieles ◽  
Zhihao Hu ◽  
Yihui Ren ◽  
Xinwei Deng ◽  
Noshir Contractor ◽  
...  

There is considerable interest in networked social science experiments for understanding human behavior at scale. Significant effort is required to perform data analytics on experimental outputs and for computational modeling of custom experiments. Moreover, experiments and modeling are often performed in a cycle, enabling iterative experimental refinement and data modeling to uncover interesting insights and to generate or refute hypotheses about social behaviors. The current practice is for social analysts to develop tailor-made computer programs and analytical scripts for experiments and modeling, which often leads to inefficiencies and duplication of effort. In this work, we propose a pipeline framework to take a significant step towards overcoming these challenges. Our contribution is to describe the design and implementation of a software system that automates many of the steps involved in analyzing social science experimental data, building models to capture the behavior of human subjects, and providing data to test hypotheses. The proposed pipeline framework consists of formal models, formal algorithms, and theoretical models as the basis for the design and implementation. We propose a formal data model such that, if an experiment can be described in terms of this model, our pipeline software can be used to analyze its data efficiently. The merits of the proposed pipeline framework are illustrated by several case studies of networked social science experiments.
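The idea of replacing tailor-made scripts with a reusable pipeline can be sketched abstractly. Everything below is hypothetical and much simpler than the paper's framework: stages are plain functions chained over experimental output, so an analysis is declared once rather than re-scripted per experiment.

```python
# Hypothetical minimal pipeline sketch: analysis stages are composable
# functions applied in sequence to raw experimental rows.
def clean(rows):
    # Drop trials with missing responses.
    return [r for r in rows if r["response"] is not None]

def aggregate(rows):
    # Mean response per subject.
    by_subject = {}
    for r in rows:
        by_subject.setdefault(r["subject"], []).append(r["response"])
    return {s: sum(v) / len(v) for s, v in by_subject.items()}

def run_pipeline(rows, stages):
    for stage in stages:
        rows = stage(rows)
    return rows

data = [{"subject": "a", "response": 1},
        {"subject": "a", "response": 0},
        {"subject": "b", "response": None}]
print(run_pipeline(data, [clean, aggregate]))  # {'a': 0.5}
```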


Author(s):  
A. Sorokine ◽  
R. N Stewart

The ability to easily combine data from diverse sources in a single analytical workflow is one of the greatest promises of Big Data technologies. However, such integration is often challenging because datasets originate from different vendors, governments, and research communities, which results in multiple incompatibilities of data representation, format, and semantics. Semantic differences are the hardest to handle: different communities often use different attribute definitions and associate their records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged periods is often complicated by differences in how the boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking the histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving the identity of geographic entities over time by defining criteria for an entity's existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). The proposed model is used to maintain an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. A practical implementation of our model is demonstrated using a PostgreSQL object-relational database with temporal, geospatial, and NoSQL extensions.
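The event-based idea can be sketched with a toy history: each event changes the set of entities that exist, and a query replays events up to a given year to recover which entities were alive then. The schema below is illustrative only; the paper's implementation uses a PostgreSQL object-relational database, not this in-memory structure.

```python
# Hypothetical sketch of an event-based history of geographic units:
# creation and dissolution events are replayed in order to determine
# which entities existed at a given point in time.
events = [
    {"year": 1918, "type": "creation",
     "creates": ["Czechoslovakia"], "removes": []},
    {"year": 1993, "type": "dissolution",
     "creates": ["Czechia", "Slovakia"], "removes": ["Czechoslovakia"]},
]

def entities_at(year, events):
    alive = set()
    for ev in sorted(events, key=lambda e: e["year"]):
        if ev["year"] > year:
            break
        alive |= set(ev["creates"])
        alive -= set(ev["removes"])
    return alive

print(sorted(entities_at(1990, events)))  # ['Czechoslovakia']
print(sorted(entities_at(2000, events)))  # ['Czechia', 'Slovakia']
```

Because identity is carried by events rather than by a single static table, a socioeconomic time series attached to a dissolved unit can still be mapped onto its successors.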


2020 ◽  
pp. 204388692093590
Author(s):  
Chu-Yeong Lim ◽  
Arif Perdana ◽  
Shin-Ren Wong

This case draws on actual data obtained from interviews with a partner, a manager, and two associates at a firm located in Singapore, Alvarino (a pseudonym). The firm is part of a global network of accountancy and business advisory firms comprising more than 100 independently owned and managed firms across more than 100 geographical locations worldwide. The case illustrates issues that Alvarino experienced in scheduling staff for audit advisory engagements. For a service-oriented and cost-conscious business, effective workforce scheduling is essential to help Alvarino's management optimise its workforce allocation. The objective of this case is to create a data model that maps user and data requirements to optimise Alvarino's workforce-scheduling processes.
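A data model for such a scheduling case might start with assignments linking staff to engagements over date ranges, plus a basic integrity check that no one is double-booked. All names and dates below are hypothetical; this is a sketch of the kind of model the case asks students to build, not Alvarino's actual model.

```python
# Hypothetical minimal scheduling data model: assignments link staff to
# engagements over date ranges, with a check for double-booking.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Assignment:
    staff_id: str
    engagement_id: str
    start: date
    end: date

def has_conflict(assignments):
    by_staff = {}
    for a in assignments:
        by_staff.setdefault(a.staff_id, []).append(a)
    for items in by_staff.values():
        items.sort(key=lambda a: a.start)
        for prev, nxt in zip(items, items[1:]):
            if nxt.start <= prev.end:  # overlapping engagement dates
                return True
    return False

plan = [
    Assignment("s1", "audit-01", date(2020, 1, 6), date(2020, 1, 10)),
    Assignment("s1", "audit-02", date(2020, 1, 13), date(2020, 1, 17)),
]
print(has_conflict(plan))  # False
```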


Author(s):  
Hakan Ancin

This paper presents methods for performing detailed quantitative automated three-dimensional (3-D) analysis of cell populations in thick tissue sections while preserving the relative 3-D locations of cells. Specifically, the method disambiguates overlapping clusters of cells and accurately measures the volume, 3-D location, and shape parameters of each cell. Finally, the entire population of cells is analyzed to detect patterns and groupings with respect to various combinations of cell properties. All of the above is accomplished with zero subjective bias.

In this method, a laser-scanning confocal light microscope (LSCM) is used to collect optical sections through the entire thickness (100-500 μm) of fluorescently-labelled tissue slices. The acquired stack of optical slices is first subjected to axial deblurring using the expectation maximization (EM) algorithm. The resulting isotropic 3-D image is segmented using a spatially-adaptive Poisson-based image segmentation algorithm with region-dependent smoothing parameters. Extracting the voxels labelled as "foreground" into an active voxel data structure yields a large data reduction.
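The final data-reduction step can be illustrated simply: only voxels labelled foreground are retained, as coordinate-value tuples, instead of the full dense 3-D stack. The layout below is hypothetical; the paper's active voxel structure may differ in detail.

```python
# Illustrative "active voxel" extraction: keep only above-threshold
# voxels as (z, y, x, value) tuples rather than the full dense volume.
def extract_active_voxels(volume, threshold):
    active = []
    for z, plane in enumerate(volume):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                if v > threshold:
                    active.append((z, y, x, v))
    return active

# Tiny 2x2x2 toy volume: only two voxels exceed the threshold.
volume = [
    [[0, 0], [5, 0]],
    [[0, 7], [0, 0]],
]
print(extract_active_voxels(volume, 1))  # [(0, 1, 0, 5), (1, 0, 1, 7)]
```

For sparsely labelled tissue, storing only active voxels shrinks memory use roughly in proportion to the foreground fraction, which is the data reduction the abstract refers to.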


2008 ◽  
Vol 18 (1) ◽  
pp. 31-40 ◽  
Author(s):  
David J. Zajac

Abstract The purpose of this opinion article is to review the impact of the principles and technology of speech science on clinical practice in the area of craniofacial disorders. Current practice relative to (a) speech aerodynamic assessment, (b) computer-assisted single-word speech intelligibility testing, and (c) behavioral management of hypernasal resonance is reviewed. Future directions and/or refinements of each area are also identified. It is suggested that both challenging and rewarding times are in store for clinical researchers in craniofacial disorders.
