When Models Meet Data

2020 ◽  
pp. 225-259


Keyword(s):  
Data ◽  
2020 ◽  
Vol 5 (2) ◽  
pp. 28 ◽  
Author(s):  
Andrea Sixto-Costoya ◽  
Rafael Aleixandre-Benavent ◽  
Rut Lucas-Domínguez ◽  
Antonio Vidal-Infer

(1) Background: The availability of research datasets can strengthen and facilitate research processes. This is especially relevant in the emergency medicine field, given the importance of providing immediate care in critical situations, as the current coronavirus (COVID-19) pandemic is showing the scientific community. This work aims to show which emergency medicine journals indexed in Journal Citation Reports (JCR) currently meet data sharing criteria. (2) Methods: This study analyzes the editorial policies on data deposit of the journals in the emergency medicine category of the JCR and evaluates the supplementary material of the articles from these journals deposited in the PubMed Central repository. (3) Results: 19 of the 24 journals in the emergency medicine category of Journal Citation Reports are also indexed in PubMed Central (PMC), yielding a total of 5983 articles. Of these, only 9.4% contain supplemental material. Although second-quartile journals in the JCR emergency medicine category have quantitatively more articles in PMC, the journals most involved in depositing supplemental material belong to the first quartile; the most common file format is PDF, followed by text documents. (4) Conclusion: This study reveals that data sharing remains an incipient practice in the emergency medicine field, as barriers still keep researchers from participating in it. It is therefore necessary to promote practices that improve data sharing both qualitatively (the quality and format of datasets) and quantitatively (the number of datasets in absolute terms).


2012 ◽  
Vol 433-440 ◽  
pp. 3425-3432
Author(s):  
Xiao Chuan Chen ◽  
Jin Fang Liu ◽  
Qing Li

This paper focuses on web-based life cycle cost collection for manufactured products and on a data organization methodology driven by business requirements. The general features of manufacturing product life cycle cost were analyzed first, and cost collection objects and cost-related product features were chosen. A software system was then developed using .NET and C# web programming techniques to pick up cost data from credible websites on the Internet. Next, a life cycle cost data warehouse model was designed around the features of the cost collection objects, ending with the establishment of a data warehouse system that acquires the collected cost data as its data source. Basic data cleaning and data validation functions were developed to meet data quality demands regarding data integrality, data authenticity, and data time effectiveness. Finally, an example system for family cars was set up, assembling more than 300,000 records on about 3500 car models over the previous 26 months. The system ran for hours without human intervention, visiting 10,000 web pages and collecting cost data, which demonstrated the software's validity.
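The three data-quality checks named in the abstract (data integrality, data authenticity, and data time effectiveness) can be sketched as a simple record-level validator. The field names, trusted-source list, and 30-day freshness window below are illustrative assumptions, not details from the paper:

```python
from datetime import datetime, timedelta

# Hypothetical whitelist of credible source sites (assumption, not from the paper)
TRUSTED_SOURCES = {"example-cars.com", "example-autoprice.net"}

# Fields every cost record must carry (assumed schema)
REQUIRED_FIELDS = {"model", "price", "source", "collected_at"}

def validate_record(record, max_age_days=30):
    """Apply the three checks described: integrality, authenticity, timeliness."""
    # Data integrality: all required fields present and non-empty
    if not REQUIRED_FIELDS <= record.keys() or any(
        record[f] in (None, "") for f in REQUIRED_FIELDS
    ):
        return False
    # Data authenticity: record must come from a trusted website
    if record["source"] not in TRUSTED_SOURCES:
        return False
    # Data time effectiveness: record must be recent enough
    return datetime.now() - record["collected_at"] <= timedelta(days=max_age_days)
```

Records failing any check would be dropped or flagged before loading into the warehouse.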


Author(s):  
Robert Peruzzi

This case concerned LCD video screens intended to become components of medical equipment requiring an ultra-wide viewing angle. The seller was a wholesaler of various types of video screens from multiple manufacturers. The buyer was a distributor of multiple electrical components for various industries. The OEM, not involved in the case, was a manufacturer of medical instruments and equipment. Claiming that multiple units did not meet the requirements specified in the purchase agreement, the OEM refused a shipment of 1,000 LCD video screens. The buyer had already paid the seller, who refused to take back the shipment and issue a refund or credit. As a result, the buyer sued the seller, and the author investigated and submitted expert opinions on the following questions: Did performance differ between examined samples? Did each sample meet the data sheet specification for viewing angle? And was each sample adequate for its intended application as advertised in the data sheet (that is, for industrial settings requiring an ultra-wide viewing angle)?
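The second expert question above reduces to comparing measured viewing angles against the published data sheet values. A minimal sketch, with assumed spec numbers (170°/160° is typical of ultra-wide panels, not a figure from the case):

```python
# Hypothetical data sheet spec for an ultra-wide viewing-angle LCD (assumed values)
SPEC = {"horizontal_deg": 170, "vertical_deg": 160}

def meets_viewing_angle_spec(measured, spec=SPEC):
    """Return True if measured viewing angles meet or exceed the data sheet spec."""
    return (measured["horizontal_deg"] >= spec["horizontal_deg"]
            and measured["vertical_deg"] >= spec["vertical_deg"])
```

In practice each sample's angles would be measured to a defined contrast-ratio threshold before applying such a check.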


2005 ◽  
Vol 44 (02) ◽  
pp. 193-197 ◽  
Author(s):  
P. Singleton ◽  
J. Milan ◽  
J. MacKay ◽  
D. Detmer ◽  
A. Rector ◽  
...  

Summary Objectives: CLEF is an MRC-sponsored project in the E-Science programme that aims to establish methodologies and a technical infrastructure for the next generation of integrated clinical and bioscience research. Methods: The heart of the CLEF approach to this challenge is the design and development of a pseudonymised repository of the histories of cancer patients that can be accessed by researchers. Robust mechanisms and policies have been developed to ensure that patient privacy and confidentiality are preserved while delivering a repository of such medically rich information for the purposes of scientific research. Results: This paper summarises the overall approach adopted by CLEF to meet data protection requirements, including the data flows, pseudonymisation measures and additional monitoring policies that are currently being developed. Conclusion: Once evaluated, it is hoped that the CLEF approach can serve as a model for other distributed electronic health record repositories accessed for research.
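One common building block of such a pseudonymised repository is a keyed one-way mapping from patient identifiers to stable pseudonyms, so records for the same patient can be linked without exposing the identifier. A minimal sketch using an HMAC (an illustrative technique, not necessarily the mechanism CLEF adopted):

```python
import hashlib
import hmac

# Secret key that would be held separately from the repository
# (assumption; real schemes add key management and re-identification controls)
SECRET_KEY = b"replace-with-securely-managed-key"

def pseudonymise(patient_id: str) -> str:
    """Derive a stable pseudonym: the same patient always maps to the same
    token, but the mapping cannot be reversed without the secret key."""
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()
```

Because the pseudonym is deterministic, longitudinal histories can still be assembled per patient inside the repository.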


2018 ◽  
Vol 4 ◽  
pp. 20 ◽  
Author(s):  
Massimo Salvatores ◽  
Giuseppe Palmiotti

Nuclear data users' requirements for uncertainty data date back to the 1970s, when several fast reactor projects made extensive use of "statistical data adjustments" to improve data for core and shielding design. However, it was only some 20–30 years later, and in particular since ∼2005, that a major effort began to produce scientifically based covariance data. Much work has been done since then, with spectacular achievements and enhanced understanding both of the uncertainty evaluation process and of the use of these data in V&V. This paper summarizes some key developments and still-open challenges.
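The "statistical data adjustment" referred to above is usually formulated as a generalized-least-squares update of the prior cross-section data; a standard textbook form (notation assumed here, not taken from this paper) is:

```latex
\sigma' \;=\; \sigma_0 \;+\; M\,S^{\mathsf{T}}
\left( S\,M\,S^{\mathsf{T}} + V_E \right)^{-1}
\left( E - C(\sigma_0) \right)
```

where $\sigma_0$ is the prior cross-section set, $M$ its covariance matrix, $S$ the sensitivity matrix of the integral parameters to the data, $V_E$ the experimental covariance matrix, and $E$ and $C(\sigma_0)$ the measured and calculated integral parameters. The quality of the adjusted set $\sigma'$ hinges directly on the covariance data $M$ whose production the paper discusses.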


Bioanalysis ◽  
2019 ◽  
Vol 11 (13) ◽  
pp. 1227-1231 ◽  
Author(s):  
Cecilia Arfvidsson ◽  
David Van Bedaf ◽  
Mira Doig ◽  
Susanne Globig ◽  
Magnus Knutsson ◽  
...  

In this conference report, we summarize the main findings and messages from a workshop on 'Data Integrity'. The workshop was held at the 11th European Bioanalysis Forum (EBF) Open Symposium in Barcelona (21–23 November 2018), in collaboration with the Medicines and Healthcare products Regulatory Agency (MHRA), to provide insight into and understanding of regulatory data integrity expectations. The workshop highlighted the importance of engaging with software developers to address the gap between industry's data integrity needs and current system software capabilities. Delegates were also made aware of the importance of implementing additional procedural controls to mitigate the risk associated with using systems that do not fully meet data integrity requirements.


2017 ◽  
Vol 24 (4) ◽  
pp. 1470-1487 ◽  
Author(s):  
Julia Pongratz ◽  
Han Dolman ◽  
Axel Don ◽  
Karl-Heinz Erb ◽  
Richard Fuchs ◽  
...  
