Toward unstructured mesh algebra and query language

Author(s):  
Alireza Rezaei Mahdiraji


Kerntechnik ◽  
2019 ◽  
Vol 84 (4) ◽  
pp. 262-266
Author(s):  
M. Lovecký ◽  
J. Závorka ◽  
J. Vimpel

2019 ◽  
Author(s):  
Joshua Bradly Spencer ◽  
Jennifer Louise Alwin
Keyword(s):  

2020 ◽  
Vol 4 (s1) ◽  
pp. 50-50
Author(s):  
Robert Edward Freundlich ◽  
Gen Li ◽  
Jonathan P Wanderer ◽  
Frederic T Billings ◽  
Henry Domenico ◽  
...  

OBJECTIVES/GOALS: We modeled the risk of reintubation within 48 hours of cardiac surgery using variables available in the electronic health record (EHR). This model will guide recruitment for a prospective, pragmatic clinical trial embedded entirely within the EHR among patients at high risk of reintubation.

METHODS/STUDY POPULATION: All adult patients admitted to the cardiac intensive care unit following cardiac surgery involving thoracotomy or sternotomy were eligible for inclusion. Data were obtained from operational and analytical databases integrated into the Epic EHR, as well as institution- and department-derived data warehouses, using structured query language (SQL). Variables were screened for inclusion in the model based on clinical relevance, availability in the EHR as structured data, and likelihood of timely documentation during routine clinical care, with the aim of obtaining a maximally pragmatic model.

RESULTS/ANTICIPATED RESULTS: A total of 2325 patients met inclusion criteria between November 2, 2017 and November 2, 2019. Of these, 68.4% were male, and the median age was 63.0 years. The primary outcome of reintubation occurred in 112/2325 patients (4.8%) within 48 hours and in 177/2325 (7.6%) at any point during the subsequent hospital encounter. Univariate screening and iterative model development revealed numerous strong candidate predictors (ANOVA plot, figure 1), resulting in a model with acceptable calibration (calibration plot, figure 2), c = 0.666.

DISCUSSION/SIGNIFICANCE OF IMPACT: Reintubation is common after cardiac surgery, and its risk factors are available in the EHR. We are integrating this model into the EHR to support real-time risk estimation and to recruit and randomize high-risk patients into a clinical trial comparing post-extubation high-flow nasal cannula with usual care.

CONFLICT OF INTEREST DESCRIPTION: REF has received grant funding and consulting fees from Medtronic for research on inpatient monitoring.
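The c-statistic reported for the model can be illustrated with a minimal sketch. This is our own construction, not the study's code: the `c_statistic` helper and the toy risks/outcomes below are invented for illustration, not the trial's data.

```python
# Hypothetical sketch of the concordance statistic (c-statistic, equivalent
# to the ROC AUC) used to summarize model discrimination. Toy data only.

def c_statistic(risks, outcomes):
    """Fraction of (event, non-event) pairs in which the event case
    received the higher predicted risk; ties count as 0.5."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = 0
    concordant = 0.0
    for e in events:
        for n in nonevents:
            pairs += 1
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / pairs

risks = [0.9, 0.4, 0.3, 0.2, 0.6]     # invented predicted risks
outcomes = [1, 0, 1, 0, 0]            # invented reintubation outcomes
print(c_statistic(risks, outcomes))   # ~0.667 on this toy data
```

A value of 0.5 means no discrimination and 1.0 means perfect ranking of events above non-events.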


Atmosphere ◽  
2018 ◽  
Vol 9 (11) ◽  
pp. 444 ◽  
Author(s):  
Jinxi Li ◽  
Jie Zheng ◽  
Jiang Zhu ◽  
Fangxin Fang ◽  
Christopher Pain ◽  
...  

Advection errors are common in basic terrain-following (TF) coordinates. Numerous methods, including the hybrid TF coordinate and smoothed vertical layers, have been proposed to reduce them. Advection errors are affected by the direction of the velocity field and the complexity of the terrain. In this study, an unstructured adaptive mesh together with the discontinuous Galerkin finite element method is employed to reduce advection errors over steep terrain. To test the capability of adaptive meshes, five two-dimensional (2D) idealized tests are conducted, and the results of adaptive meshes are compared with those of cut-cell and TF meshes. The results show that adaptive meshes reduce the advection errors by one to two orders of magnitude compared with cut-cell and TF meshes, regardless of variations in velocity direction or terrain complexity. Furthermore, adaptive meshes reduce the advection errors when the tracer moves tangentially along the terrain surface and allow the terrain to be represented without incurring severe dispersion. Finally, the computational cost is analyzed. To achieve a given tagging-criterion level, the adaptive mesh requires fewer nodes, smaller minimum mesh sizes, shorter runtimes, and fewer nodes per wavelength to resolve the tracer than cut-cell and TF meshes, thus reducing the computational cost.
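The tagging criterion that drives this kind of mesh adaptation can be sketched in one dimension. The sketch below is our own illustration, not the paper's code: cells are tagged where the tracer gradient exceeds an (invented) threshold, and tagged cells receive an extra midpoint node, concentrating resolution near the sharp front.

```python
# Minimal 1D sketch of gradient-based tagging and refinement, mimicking
# the "tagging criterion" behind adaptive meshes. Field and threshold
# values are invented for illustration.

def tag_cells(field, dx, threshold):
    """Tag each cell whose tracer gradient magnitude exceeds threshold."""
    tags = []
    for i in range(len(field) - 1):
        grad = abs(field[i + 1] - field[i]) / dx
        tags.append(grad > threshold)
    return tags

def refine(xs, tags):
    """Insert a midpoint node into every tagged cell."""
    out = []
    for i, x in enumerate(xs[:-1]):
        out.append(x)
        if tags[i]:
            out.append(0.5 * (x + xs[i + 1]))
    out.append(xs[-1])
    return out

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
field = [0.0, 0.1, 0.9, 1.0, 1.0]   # sharp tracer front between x=1 and x=2
tags = tag_cells(field, dx=1.0, threshold=0.5)
print(refine(xs, tags))             # an extra node appears only near the front
```

Repeating tag-and-refine each time step keeps fine cells only where the tracer is steep, which is why the adaptive mesh needs fewer total nodes than a uniformly fine cut-cell or TF mesh.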


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 149
Author(s):  
Petros Zervoudakis ◽  
Haridimos Kondylakis ◽  
Nicolas Spyratos ◽  
Dimitris Plexousakis

HIFUN is a high-level query language for expressing analytic queries over big datasets, offering a clear separation between the conceptual layer, where analytic queries are defined independently of the nature and location of the data, and the physical layer, where queries are evaluated. In this paper, we present a methodology based on the HIFUN language, and the corresponding algorithms, for the incremental evaluation of continuous queries. In essence, our approach processes the most recent data batch by exploiting already computed information, without requiring evaluation of the query over the complete dataset. We present the generic algorithm, which we translated into both SQL and MapReduce implementations using Spark, incorporating various query rewriting methods. We demonstrate the effectiveness of our approach in terms of query-answering efficiency. Finally, we show that by exploiting the formal query rewriting methods of HIFUN, we can further reduce the computational cost, adding another layer of query optimization to our implementation.
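The incremental strategy described above can be sketched as follows. This is a minimal illustration in the spirit of the approach, not HIFUN's actual API: each new batch is aggregated on its own and merged into the running answer, so the complete dataset is never re-scanned. Function names and data are our own.

```python
# Sketch of incremental continuous-query evaluation: per-batch partial
# aggregation plus a merge step, so only the newest batch is processed.

from collections import defaultdict

def aggregate(batch):
    """Compute group-by-key partial sums for one data batch."""
    partial = defaultdict(int)
    for key, value in batch:
        partial[key] += value
    return partial

def merge(state, partial):
    """Fold a batch's partial result into the running query answer."""
    for key, value in partial.items():
        state[key] = state.get(key, 0) + value
    return state

state = {}
batches = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
for batch in batches:
    state = merge(state, aggregate(batch))   # touches only the new batch
print(state)  # {'a': 4, 'b': 2, 'c': 4}
```

This decomposition works for any aggregate with an associative merge (sum, count, min, max); averages need the (sum, count) pair carried in the state instead.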


1997 ◽  
Vol 26 (3) ◽  
pp. 4-11 ◽  
Author(s):  
Mary Fernandez ◽  
Daniela Florescu ◽  
Alon Levy ◽  
Dan Suciu

2021 ◽  
Vol 11 (5) ◽  
pp. 2405
Author(s):  
Yuxiang Sun ◽  
Tianyi Zhao ◽  
Seulgi Yoon ◽  
Yongju Lee

The Semantic Web has recently gained traction with the use of Linked Open Data (LOD) on the Web. Although numerous state-of-the-art methodologies, standards, and technologies are applicable to the LOD cloud, many issues persist. Because the LOD cloud is based on graph-based Resource Description Framework (RDF) triples and the SPARQL query language, we cannot directly adopt traditional techniques employed for database management systems or distributed computing systems. This paper addresses how the LOD cloud can be efficiently organized, retrieved, and evaluated. We propose a novel hybrid approach that combines the index and live-exploration approaches for improved LOD join query performance. Using a two-step index structure that combines a disk-based 3D R*-tree with an extended multidimensional histogram and flash-memory-based k-d trees, we can efficiently discover interlinked data distributed across multiple resources. Because this method rapidly prunes numerous false hits, the performance of join query processing improves remarkably. We also propose a hot-cold segment identification algorithm to identify regions of high interest. The proposed method is compared with existing popular methods on real RDF datasets. The results indicate that our method outperforms the existing methods because it quickly obtains target results, reducing unnecessary data scans and the amount of main memory required to load filtering results.
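The pruning idea behind the two-step index can be illustrated with a toy spatial join. This sketch is our own construction, not the paper's index: right-hand points are hashed into coarse grid cells (standing in for the histogram/tree filter), and the exact distance test runs only on candidates from neighboring cells, so distant false hits are pruned without being examined.

```python
# Toy filter-then-refine distance join: a coarse grid prunes candidates
# before the exact distance check. Assumes radius <= cell_size so that
# checking the 3x3 neighborhood of cells suffices.

from collections import defaultdict

def grid_cell(point, cell_size):
    return (int(point[0] // cell_size), int(point[1] // cell_size))

def pruned_join(left, right, cell_size, radius):
    """Return (p, q) pairs within `radius`, testing only grid neighbors."""
    buckets = defaultdict(list)
    for q in right:
        buckets[grid_cell(q, cell_size)].append(q)   # filter structure
    matches = []
    for p in left:
        cx, cy = grid_cell(p, cell_size)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for q in buckets[(cx + dx, cy + dy)]:
                    # refine step: exact distance test on survivors only
                    if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2:
                        matches.append((p, q))
    return matches

left = [(0.5, 0.5)]
right = [(0.6, 0.6), (5.0, 5.0)]     # the far point is pruned by the grid
print(pruned_join(left, right, cell_size=1.0, radius=0.5))
```

Real R*-trees and k-d trees replace the flat grid with hierarchical bounding regions, but the filter-then-refine pattern, and the memory savings from never loading pruned candidates, is the same.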

