RCA as a Data Transforming Method: A Comparison with Propositionalisation

Author(s):  
Xavier Dolques ◽  
Kartick Chandra Mondal ◽  
Agnés Braud ◽  
Marianne Huchard ◽  
Florence Le Ber
2021 ◽  
Vol 121 (9) ◽  
pp. A46
Author(s):  
H. Pinsky ◽  
B. Jordan ◽  
C. Anselmo ◽  
S. Kaufman ◽  
J. Gibbons ◽  
...  

2017 ◽  
Vol 24 (6) ◽  
pp. 1192-1203 ◽  
Author(s):  
Andrew Goldstein ◽  
Eric Venker ◽  
Chunhua Weng

Abstract
Objective: Critical appraisal of clinical evidence promises to help prevent, detect, and address flaws related to study importance, ethics, validity, applicability, and reporting. These research issues are of growing concern. The purpose of this scoping review is to survey the current literature on evidence appraisal to develop a conceptual framework and an informatics research agenda.
Methods: We conducted an iterative literature search of Medline for discussion or research on the critical appraisal of clinical evidence. After title and abstract review, 121 articles were included in the analysis. We performed qualitative thematic analysis to describe the evidence appraisal architecture and its issues and opportunities. From this analysis, we derived a conceptual framework and an informatics research agenda.
Results: We identified 68 themes in 10 categories. This analysis revealed that the practice of evidence appraisal is quite common but is rarely subjected to documentation, organization, validation, integration, or uptake. This is related to underdeveloped tools, scant incentives, and insufficient acquisition of appraisal data and transformation of the data into usable knowledge.
Discussion: The gaps in acquiring appraisal data, transforming the data into actionable information and knowledge, and ensuring its dissemination and adoption can be addressed with proven informatics approaches.
Conclusions: Evidence appraisal faces several challenges, but implementing an informatics research agenda would likely help realize the potential of evidence appraisal for improving the rigor and value of clinical evidence.


2021 ◽  
Vol 13 (3) ◽  
pp. 1-15
Author(s):  
Rada Chirkova ◽  
Jon Doyle ◽  
Juan Reutter

Assessing and improving the quality of data are fundamental challenges in Big-Data applications. These challenges have given rise to numerous solutions targeting transformation, integration, and cleaning of data. However, while schema design, data cleaning, and data migration are nowadays reasonably well understood in isolation, not much attention has been given to the interplay between standalone tools in these areas. In this article, we focus on the problem of determining whether the available data-transforming procedures can be used together to bring about the desired quality characteristics of the data in business or analytics processes. For example, to help an organization avoid building a data-quality solution from scratch when facing a new analytics task, we ask whether the data quality can be improved by reusing the tools that are already available, and if so, which tools to apply, and in which order, all without presuming knowledge of the internals of the tools, which may be external or proprietary. Toward addressing this problem, we conduct a formal study in which individual data cleaning, data migration, or other data-transforming tools are abstracted as black-box procedures with only some of the properties exposed, such as their applicability requirements, the parts of the data that the procedure modifies, and the conditions that the data satisfy once the procedure has been applied. As a proof of concept, we provide foundational results on sequential applications of procedures abstracted in this way, to achieve prespecified data-quality objectives, for the use case of relational data and for procedures described by standard relational constraints. We show that, while reasoning in this framework may be computationally infeasible in general, there exist well-behaved cases in which these foundational results can be applied in practice for achieving desired data-quality results on Big Data.
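The black-box abstraction described in this abstract lends itself to a simple planning formulation. The sketch below is only an illustration, not the authors' formalism (which uses standard relational constraints): each tool is modelled by the constraint labels it requires, guarantees, and may invalidate, and a breadth-first search looks for a sequence of tools whose composition reaches a prespecified data-quality objective. All tool names and the label-based abstraction are assumptions introduced for the example.

```python
from collections import deque
from dataclasses import dataclass
from typing import FrozenSet, List, Optional


@dataclass(frozen=True)
class Procedure:
    """A data-transforming tool seen as a black box.

    Only a few properties are exposed (a simplification of the paper's
    relational-constraint abstraction):
      requires    -- constraints that must hold before the tool can run
      guarantees  -- constraints that hold after the tool has run
      invalidates -- constraints the tool may destroy
    """
    name: str
    requires: FrozenSet[str]
    guarantees: FrozenSet[str]
    invalidates: FrozenSet[str] = frozenset()


def plan(initial: FrozenSet[str],
         goal: FrozenSet[str],
         tools: List[Procedure],
         max_len: int = 6) -> Optional[List[str]]:
    """Breadth-first search for a tool sequence that reaches the goal state."""
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        if goal <= state:
            return path
        if len(path) >= max_len:
            continue
        for tool in tools:
            if tool.requires <= state:  # applicability check
                new_state = frozenset((state - tool.invalidates) | tool.guarantees)
                if new_state not in seen:
                    seen.add(new_state)
                    queue.append((new_state, path + [tool.name]))
    return None


# Hypothetical catalogue of available tools.
tools = [
    Procedure("dedupe", frozenset({"schema_ok"}), frozenset({"no_duplicates"})),
    Procedure("migrate_schema", frozenset(), frozenset({"schema_ok"}),
              invalidates=frozenset({"no_duplicates"})),
    Procedure("impute_nulls", frozenset({"schema_ok"}), frozenset({"no_nulls"})),
]

print(plan(frozenset(), frozenset({"no_duplicates", "no_nulls"}), tools))
# -> a sequence such as ['migrate_schema', 'dedupe', 'impute_nulls']
```

The example also illustrates why ordering matters: because the hypothetical schema migration invalidates deduplication, a valid plan must run it before, not after, the deduplication step.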


2021 ◽  
Vol 10 (9) ◽  
pp. 619
Author(s):  
João Monteiro ◽  
Bruno Martins ◽  
Miguel Costa ◽  
João M. Pires

Datasets collecting demographic and socio-economic statistics are widely available. Still, the data are often only released for highly aggregated geospatial areas, which can mask important local hotspots. When conducting spatial analysis, one often needs to disaggregate the source data, transforming the statistics reported for a set of source zones into values for a set of target zones, with a different geometry and a higher spatial resolution. This article reports on a novel dasymetric disaggregation method that uses encoder–decoder convolutional neural networks, similar to those adopted in image segmentation tasks, to combine different types of ancillary data. Model training constitutes a particular challenge. This is due to the fact that disaggregation tasks are ill-posed and do not entail the direct use of supervision signals in the form of training instances mapping low-resolution to high-resolution counts. We propose to address this problem through self-training. Our method iteratively refines initial estimates produced by disaggregation heuristics and training models with the estimates from previous iterations together with relevant regularization strategies. We conducted experiments related to the disaggregation of different variables collected for Continental Portugal into a raster grid with a resolution of 200 m. Results show that the proposed approach outperforms common alternative methods, including approaches that use other types of regression models to infer the dasymetric weights.
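The self-training idea behind this approach can be sketched compactly. The code below is a minimal illustration under simplifying assumptions, not the authors' encoder–decoder CNN pipeline: it starts from a heuristic dasymetric estimate driven by one ancillary covariate, fits a regression model (a random forest stands in for the CNN) on those estimates as pseudo-labels, and rescales the predictions so each source zone's published total is preserved, iterating a few rounds. All array names and the toy data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy setup: 1000 target cells, each belonging to one of 10 source zones,
# with 3 ancillary covariates per cell (e.g. land cover, night lights, roads).
n_cells, n_zones = 1000, 10
zone_of_cell = rng.integers(0, n_zones, size=n_cells)
ancillary = rng.random((n_cells, 3))
zone_totals = rng.integers(500, 5000, size=n_zones).astype(float)  # published counts


def rescale_to_zone_totals(estimates):
    """Mass-preserving step: make cell estimates sum to each source-zone total."""
    out = np.maximum(estimates, 1e-9)
    for z in range(n_zones):
        mask = zone_of_cell == z
        out[mask] *= zone_totals[z] / out[mask].sum()
    return out


# 1. Initial heuristic: distribute each zone's total proportionally to one covariate.
estimates = rescale_to_zone_totals(ancillary[:, 0].copy())

# 2. Self-training: fit a model on the current estimates, predict, rescale, repeat.
for iteration in range(5):
    model = RandomForestRegressor(n_estimators=100, random_state=iteration)
    model.fit(ancillary, estimates)              # pseudo-labels from the previous round
    estimates = rescale_to_zone_totals(model.predict(ancillary))

print(estimates[:5])  # disaggregated counts for the first five target cells
```

The rescaling step is what keeps the iterations anchored to the observed aggregates; without it, self-training on pseudo-labels could drift arbitrarily far from the source-zone statistics.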


Author(s):  
Prismahardi Aji Riyantoko ◽  
Tresna Maulana Fahrudin ◽  
Kartika Maulida Hindrayani ◽  
Amri Muhaimin ◽  
Trimono

Time series analysis is one method for forecasting data. The ACEA company opened its Water Availability data through a competition and invites participants to use the data for forecasting. The dataset, Aquifers-Petrignano in Italy, from the water resources field has five parameters: rainfall, temperature, depth to groundwater, drainage volume, and river hydrometry. In our research we forecast the depth-to-groundwater data using univariate and multivariate time series approaches with the Prophet method. Prophet is a library developed by the Facebook team. We also apply other steps to prepare the data for forecasting: handling missing data, transformation, differencing, time series decomposition, lag determination, a stationarity check, and the Augmented Dickey-Fuller (ADF) test. All of these steps ensure that no problems appear when we forecast. We obtained results with both the univariate and the multivariate Prophet method: the multivariate approach achieved an MAE of 0.82 and an RMSE of 0.99, better than the univariate Prophet forecast.
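The workflow described above can be sketched as follows. This is only an illustrative outline, not the authors' exact code: the file name, column names, and 90-day hold-out are assumptions, and the "multivariate" variant is realised with Prophet's extra-regressor mechanism.

```python
import numpy as np
import pandas as pd
from prophet import Prophet
from sklearn.metrics import mean_absolute_error, mean_squared_error
from statsmodels.tsa.stattools import adfuller

# Hypothetical daily data with the target and ancillary series from the aquifer dataset.
df = pd.read_csv("aquifer_petrignano.csv", parse_dates=["date"])
df = df.rename(columns={"date": "ds", "depth_to_groundwater": "y"}).dropna()

# Stationarity check on the target series (Augmented Dickey-Fuller test).
adf_stat, p_value, *_ = adfuller(df["y"])
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")

train, test = df.iloc[:-90], df.iloc[-90:]          # hold out the last 90 days

# Univariate Prophet: only the date and the target series.
uni = Prophet()
uni.fit(train[["ds", "y"]])
uni_pred = uni.predict(test[["ds"]])["yhat"].values

# "Multivariate" Prophet: the same model with the other parameters as extra regressors.
multi = Prophet()
for col in ["rainfall", "temperature", "drainage_volume", "river_hydrometry"]:
    multi.add_regressor(col)
multi.fit(train)
multi_pred = multi.predict(test.drop(columns="y"))["yhat"].values

for name, pred in [("univariate", uni_pred), ("multivariate", multi_pred)]:
    mae = mean_absolute_error(test["y"], pred)
    rmse = np.sqrt(mean_squared_error(test["y"], pred))
    print(f"{name}: MAE={mae:.2f}, RMSE={rmse:.2f}")
```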

