Data Infrastructure for a Poisoning Registry: Designing Data Elements and a Minimum Data Set

2022 ◽  
Vol In Press (In Press) ◽  
Author(s):  
Azam Sabahi ◽  
Farkhondeh Asadi ◽  
Shahin Shadnia ◽  
Reza Rabiei ◽  
Azamossadat Hosseini

Background: The prevalence of poisoning is on the rise in Iran. A poisoning registry is a key source of information about poisoning patterns used for decision-making and healthcare provision, and a minimum data set (MDS) is a prerequisite for developing a registry. Objectives: This study aimed to design an MDS for a poisoning registry. Methods: This applied study was conducted in 2021. The poisoning MDS was developed in a four-stage process: (1) conducting a systematic review of the Web of Science, Scopus, PubMed, and EMBASE; (2) examining poisoning-related websites and online forms; (3) classifying data elements in separate meetings with three toxicology specialists; and (4) validating data elements using a two-stage Delphi technique. A researcher-made checklist was employed for this purpose. The content validity of the checklist was examined based on the opinions of five health information management and medical informatics experts familiar with the topic of the study, and its test-retest reliability was confirmed with the recruitment of 25 experts (r = 0.8). Results: Overall, 368 data elements were identified from the articles and forms, of which 358 were confirmed via the two-stage Delphi technique and classified into administrative (n = 88) and clinical (n = 270) data elements. Conclusions: Creating a poisoning registry requires identifying the information needs of healthcare centers and developing an integrated, comprehensive framework to meet these needs. To this end, an MDS contains the essential data elements that form a framework for integrated and standard data collection.
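The checklist's test-retest reliability above is reported as a correlation coefficient (r = 0.8). As a minimal illustration of how such a coefficient is computed from an expert's two rounds of ratings (the function name and data are illustrative, not taken from the study):

```python
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    # Pearson correlation between two rounds of checklist ratings;
    # test-retest reliability treats round 1 as x and round 2 as y
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den
```

Identical ratings across rounds give r = 1.0; fully reversed ratings give r = -1.0.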

Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 37
Author(s):  
Shixun Wang ◽  
Qiang Chen

Boosting of ensemble learning models has made great progress, but most methods boost only a single modality. For this reason, a simple multiclass boosting framework that uses local similarity as its weak learner is extended here to multimodal multiclass boosting. First, with local similarity as the weak learner, the loss function is evaluated to obtain a baseline loss, and the logarithmic data points are binarized. Then the optimal local similarity and its corresponding loss are found; whichever loss is smaller than the baseline is the best so far. Second, the local similarity between two points is calculated, and the loss is then computed from that pairwise similarity. Finally, text and images are retrieved from each other, and the retrieval accuracy is obtained for each direction. Experimental results show that, evaluated on standard data sets and compared with other state-of-the-art methods, the multimodal multiclass boosting framework with local similarity as the weak learner performs competitively.
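The boosting loop described above can be sketched along AdaBoost.M1 lines, with a weak learner that labels each point by its similarity to one of two reference prototypes. This is only one plausible reading of "local similarity as a weak learner"; the RBF kernel, the prototype-pair search, and all names below are assumptions, not the paper's algorithm:

```python
import numpy as np

def local_similarity(x, p, gamma=1.0):
    # RBF similarity between points and a prototype; one plausible
    # choice of "local similarity" (the paper's definition may differ)
    return np.exp(-gamma * np.sum((np.atleast_2d(x) - p) ** 2, axis=-1))

def boost(X, y, n_rounds=3, gamma=1.0):
    """AdaBoost.M1-style loop: each weak learner labels a point with
    the class of the more similar of two training prototypes."""
    n = len(X)
    w = np.ones(n) / n
    learners = []
    for _ in range(n_rounds):
        best = None
        for i in range(n):           # search prototype pairs of differing class
            for j in range(n):
                if y[i] == y[j]:
                    continue
                pred = np.where(local_similarity(X, X[i], gamma)
                                >= local_similarity(X, X[j], gamma), y[i], y[j])
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, i, j)
        err, i, j = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(local_similarity(X, X[i], gamma)
                        >= local_similarity(X, X[j], gamma), y[i], y[j])
        w *= np.exp(np.where(pred == y, -alpha, alpha))  # upweight mistakes
        w /= w.sum()
        learners.append((alpha, i, j))
    return learners

def predict(learners, X_train, y_train, Xq, gamma=1.0):
    """Weighted vote of all weak learners."""
    classes = np.unique(y_train)
    votes = np.zeros((len(np.atleast_2d(Xq)), len(classes)))
    for alpha, i, j in learners:
        pred = np.where(local_similarity(Xq, X_train[i], gamma)
                        >= local_similarity(Xq, X_train[j], gamma),
                        y_train[i], y_train[j])
        for k, c in enumerate(classes):
            votes[pred == c, k] += alpha
    return classes[votes.argmax(axis=1)]
```

On a toy two-cluster data set a single round already separates the classes; the multimodal extension would run such a learner per modality (text and image) and couple them through the shared loss.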


2021 ◽  
Vol 7 ◽  
pp. 237796082098568
Author(s):  
Elizabeth M. Miller ◽  
Joanne E. Porter

Introduction: Caring for someone at home who requires palliative care is an onerous task. Unless current support systems are better utilised and improved to meet the needs of those carers, demand for acute hospital admissions will increase as the Australian population ages. The aim of this review was to examine the needs of unpaid carers who were caring for adults receiving palliative care in their home in Australia. Methods: A systematic review of the literature published between 2008 and 2020 was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results: Only Australian papers were selected, reflecting the intent to understand carers' needs in the Australian context, and 17 papers made up the final data set. Four themes emerged: 1) perceived factors influencing caregiving; 2) perceived impact and responses to caregiving; 3) communication and information needs; and 4) perceptions of current palliative support services and barriers to uptake. Conclusion: Carers reported satisfaction and positive outcomes but also expressed feeling unprepared, unrecognised, stressed and exhausted.


2015 ◽  
Vol 15 (1) ◽  
pp. 253-272 ◽  
Author(s):  
M. R. Canagaratna ◽  
J. L. Jimenez ◽  
J. H. Kroll ◽  
Q. Chen ◽  
S. H. Kessler ◽  
...  

Abstract. Elemental compositions of organic aerosol (OA) particles provide useful constraints on OA sources, chemical evolution, and effects. The Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) is widely used to measure OA elemental composition. This study evaluates AMS measurements of atomic oxygen-to-carbon (O : C), hydrogen-to-carbon (H : C), and organic mass-to-organic carbon (OM : OC) ratios, and of carbon oxidation state (OS_C) for a vastly expanded laboratory data set of multifunctional oxidized OA standards. For the expanded standard data set, the method introduced by Aiken et al. (2008), which uses experimentally measured ion intensities at all ions to determine elemental ratios (referred to here as "Aiken-Explicit"), reproduces known O : C and H : C ratio values within 20% (average absolute value of relative errors) and 12%, respectively. The more commonly used method, which uses empirically estimated H2O+ and CO+ ion intensities to avoid gas phase air interferences at these ions (referred to here as "Aiken-Ambient"), reproduces O : C and H : C of multifunctional oxidized species within 28 and 14% of known values. The values from the latter method are systematically biased low, however, with larger biases observed for alcohols and simple diacids. A detailed examination of the H2O+, CO+, and CO2+ fragments in the high-resolution mass spectra of the standard compounds indicates that the Aiken-Ambient method underestimates the CO+ and especially H2O+ produced from many oxidized species. Combined AMS–vacuum ultraviolet (VUV) ionization measurements indicate that these ions are produced by dehydration and decarboxylation on the AMS vaporizer (usually operated at 600 °C). Thermal decomposition is observed to be efficient at vaporizer temperatures down to 200 °C. These results are used together to develop an "Improved-Ambient" elemental analysis method for AMS spectra measured in air.
The Improved-Ambient method uses specific ion fragments as markers to correct for molecular functionality-dependent systematic biases and reproduces known O : C (H : C) ratios of individual oxidized standards within 28% (13%) of the known molecular values. The error in Improved-Ambient O : C (H : C) values is smaller for theoretical standard mixtures of the oxidized organic standards, which are more representative of the complex mix of species present in ambient OA. For ambient OA, the Improved-Ambient method produces O : C (H : C) values that are 27% (11%) larger than previously published Aiken-Ambient values; a corresponding increase of 9% is observed for OM : OC values. These results imply that ambient OA has a higher relative oxygen content than previously estimated. The OS_C values calculated for ambient OA by the two methods agree well, however (average relative difference of 0.06 OS_C units). This indicates that OS_C is a more robust metric of oxidation than O : C, likely since OS_C is not affected by hydration or dehydration, either in the atmosphere or during analysis.
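The carbon oxidation state discussed above is commonly approximated from the two elemental ratios as OS_C ≈ 2·(O : C) − (H : C), which also makes clear why hydration or dehydration leaves it unchanged: gaining or losing H2O shifts O : C by Δ and H : C by 2Δ, and the two shifts cancel. A quick numeric check (the compound choices are illustrative, not from this study's standards):

```python
def os_c(o_to_c, h_to_c):
    # Mean carbon oxidation state from elemental ratios:
    # OS_C ~ 2*(O:C) - (H:C)
    return 2.0 * o_to_c - h_to_c

# glucose C6H12O6 and its dehydration product levoglucosan C6H10O5
glucose = os_c(6 / 6, 12 / 6)       # O:C = 1, H:C = 2  -> OS_C = 0
levoglucosan = os_c(5 / 6, 10 / 6)  # one H2O removed; OS_C is unchanged
```

Removing a water molecule changes both ratios but not OS_C, whereas O : C alone drops from 1.0 to 5/6.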


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Farnoush Bayatmakou ◽  
Azadeh Mohebi ◽  
Abbas Ahmadi

Purpose Query-based summarization approaches might not be able to provide summaries compatible with the user’s information need, as they mostly rely on a limited source of information, usually a single query supplied by the user. This issue becomes even more challenging with scientific documents, as they contain more specific subject-related terms, while the user may not be able to express his/her specific information need in a query with limited terms. This study aims to propose an interactive multi-document text summarization approach that generates a summary more compatible with the user’s information need. The approach allows the user to interactively specify the composition of a multi-document summary. Design/methodology/approach The approach exploits the user’s opinion in two stages. The initial query is refined by user-selected keywords/keyphrases and complete sentences extracted from the set of retrieved documents. This is followed by a novel method for sentence expansion using a genetic algorithm, and ranking of the final set of sentences using the maximal marginal relevance (MMR) method. For implementation, the Web of Science data set in the artificial intelligence (AI) category is considered. Findings The proposed approach receives feedback from the user in terms of favorable keywords and sentences, and this feedback ultimately improves the final summary. To assess the performance of the proposed system, 45 users, all graduate students in the field of AI, were asked to fill out a questionnaire. The quality of the final summary was also evaluated from the user’s perspective and in terms of information redundancy. The proposed approach leads to higher degrees of user satisfaction than variants with no interaction or only one step of interaction.
Originality/value The interactive summarization approach goes beyond the initial user query by incorporating the user’s preferred keywords/keyphrases and sentences through systematic interaction. Through these interactions, the system gives the user a clearer idea of the information he/she is looking for and consequently adjusts the final result to the ultimate information need. Such interaction allows the summarization system to achieve a comprehensive understanding of the user’s information needs while expanding context-based knowledge and guiding the user along his/her information journey.
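The maximal marginal relevance step used for the final ranking is a standard greedy trade-off between query relevance and redundancy with already-selected sentences: score(d) = λ·sim(d, q) − (1 − λ)·max over selected s of sim(d, s). A minimal sketch over precomputed sentence vectors (the vectors and λ value are illustrative, not the paper's configuration):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mmr_rank(query_vec, sent_vecs, k=3, lam=0.7):
    """Greedy maximal marginal relevance: balance relevance to the query
    against redundancy with already-selected sentences."""
    remaining = list(range(len(sent_vecs)))
    selected = []
    while remaining and len(selected) < k:
        def score(i):
            rel = cosine(query_vec, sent_vecs[i])
            red = max((cosine(sent_vecs[i], sent_vecs[j]) for j in selected),
                      default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With two near-duplicate relevant sentences and one distinct alternative, MMR picks the most relevant sentence first and then the distinct one, skipping the near-duplicate.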


1998 ◽  
Vol 87 (03) ◽  
pp. 139-140
Author(s):  
Aslak Steinsbekk ◽  
Martien Brands

Abstract In order to compare data from different documentation projects of homoeopathic practice, there is a need for standardisation. A group that met at a European Committee of Homoeopathy (ECH) research meeting in London made a proposal as a basis for discussion. The proposal covers the minimum data set that every study should include and the methods of effect measurement that are most appropriate.


2012 ◽  
Author(s):  
A. Robert Weiß ◽  
Uwe Adomeit ◽  
Philippe Chevalier ◽  
Stéphane Landeau ◽  
Piet Bijl ◽  
...  

Author(s):  
Eugenia Rinaldi ◽  
Sylvia Thun

HiGHmed is a German consortium in which eight university hospitals have agreed to cross-institutional data exchange through novel medical informatics solutions. The HiGHmed Use Case Infection Control group has modelled a set of infection-related data in the openEHR format. In order to establish interoperability with the other German consortia belonging to the same national initiative, we mapped the openEHR information to the Fast Healthcare Interoperability Resources (FHIR) format recommended within the initiative. FHIR enables fast exchange of data thanks to the discrete and independent data elements into which information is organized. Furthermore, to explore the possibility of maximizing analysis capabilities for our data set, we subsequently mapped the FHIR elements to the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM). The OMOP data model is designed to support research that identifies and evaluates associations between interventions and the outcomes they cause. Mapping across standards allows their individual strengths to be exploited while establishing and/or maintaining interoperability. This article provides an overview of our experience in mapping infection control related data across three different standards: openEHR, FHIR and OMOP CDM.
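As an illustration of the kind of element-level mapping described above, a FHIR Observation can be flattened into an OMOP MEASUREMENT-style row. The concept lookup below is a hard-coded stand-in for the OHDSI vocabulary tables, and the field choices are a simplified sketch rather than the HiGHmed mapping itself:

```python
# Hypothetical LOINC -> OMOP concept lookup; real mappings come from the
# OHDSI vocabulary tables (CONCEPT / CONCEPT_RELATIONSHIP)
LOINC_TO_OMOP = {"6690-2": 3000905}  # leukocyte count, illustrative only

def fhir_observation_to_omop(obs):
    """Flatten a FHIR Observation resource (as a dict) into an
    OMOP MEASUREMENT-style row."""
    coding = obs["code"]["coding"][0]
    qty = obs.get("valueQuantity", {})
    return {
        "person_id": int(obs["subject"]["reference"].split("/")[-1]),
        "measurement_concept_id": LOINC_TO_OMOP.get(coding["code"], 0),
        "measurement_datetime": obs.get("effectiveDateTime"),
        "value_as_number": qty.get("value"),
        "unit_source_value": qty.get("unit"),
        "measurement_source_value": coding["code"],
    }
```

In practice the subject reference would be resolved through a patient-ID crosswalk rather than parsed, and unmapped codes (concept_id 0) would be queued for vocabulary review.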


2020 ◽  
Vol 38 (4) ◽  
pp. 821-842
Author(s):  
Haihua Chen ◽  
Yunhan Yang ◽  
Wei Lu ◽  
Jiangping Chen

Purpose Citation contexts have been found useful in many scenarios. However, existing context-based recommendation approaches ignore the importance of diversity in reducing redundancy and thus cannot cover the broad range of user interests. To address this gap, the paper proposes a novel task that recommends a set of diverse citation contexts extracted from a list of citing articles. This will assist users in understanding how other scholars have cited an article and in deciding which articles they should cite in their own writing. Design/methodology/approach This research combines three semantic distance algorithms and three diversification re-ranking algorithms for the diversifying recommendation, based on the CiteSeerX data set, and then evaluates the generated citation context lists through a user study of 30 articles. Findings Results show that a diversification strategy combining “word2vec” and “Integer Linear Programming” leads to a better reading experience for participants than other diversification strategies, such as the CiteSeerX list sorted by citation counts. Practical implications This diversifying recommendation task is valuable for developing better systems for information retrieval, automatic academic recommendation and summarization. Originality/value The originality of the research lies in the proposal of a novel task that recommends a diverse context list describing how other scholars cited an article, thereby making citing decisions easier. A mixed approach is explored to find the most effective diversifying strategy. In addition, rather than traditional information retrieval evaluation, a user evaluation framework is introduced to reflect user information needs more objectively.
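Integer Linear Programming re-ranking of this kind combines per-item relevance with pairwise diversity in a single objective. For a small candidate pool the same objective can be solved by exhaustive search, which makes the trade-off explicit; the linear objective and λ weighting below are a common formulation, not necessarily the paper's exact one:

```python
from itertools import combinations

def diversify_exact(rel, dist, k, lam=0.5):
    """Pick k items maximizing lam * sum(relevance) +
    (1 - lam) * sum(pairwise distance) by exhaustive search.
    Feasible only for small pools; an ILP solver scales further."""
    n = len(rel)

    def objective(subset):
        r = sum(rel[i] for i in subset)
        d = sum(dist[i][j] for i, j in combinations(subset, 2))
        return lam * r + (1 - lam) * d

    return max(combinations(range(n), k), key=objective)
```

Given two near-identical relevant contexts and one less relevant but distinct context, the objective prefers the relevant/distinct pair over the redundant pair.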


1996 ◽  
Vol 42 (5) ◽  
pp. 725-731 ◽  
Author(s):  
B Schlain ◽  
H Frush ◽  
C Pennington ◽  
G Osikowicz ◽  
K Ford

Abstract A two-stage statistical procedure based on Dunnett's multiple comparison procedures with a control has been developed for detecting interassay carryover biases on the Abbott AxSYM(TM) System, a random- and continuous-access immunoanalyzer. With this procedure, every potential source of interassay carryover can be tested and estimated. To minimize the required sample sizes, the first stage is used primarily to detect, and eliminate from further testing, the assay reagent sources that do not cause carryover biases as well as the assay sources that cause very large carryover biases. Cases where the data from the first stage are insufficient for judgment are retested more extensively in the second stage. An example data set from the Abbott AxSYM Free T4 assay is used to illustrate the methodology.
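The two-stage idea can be sketched as a simple many-to-one screen: each carryover source is compared against the control with a t statistic, clear cases are decided in stage one, and ambiguous ones are flagged for a larger-sample retest. The Welch statistic, fixed thresholds, and data below are illustrative only; the actual procedure uses Dunnett's multiple-comparison critical values, not fixed cutoffs:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(sample, control):
    # Welch's t statistic for the sample-vs-control mean difference
    return (mean(sample) - mean(control)) / sqrt(
        variance(sample) / len(sample) + variance(control) / len(control))

def stage_one(sources, control, t_clear=2.0, t_large=6.0):
    """Classify each source: clearly bias-free, clearly a large bias,
    or ambiguous (to be retested with larger samples in stage two)."""
    verdicts = {}
    for name, sample in sources.items():
        t = abs(welch_t(sample, control))
        if t < t_clear:
            verdicts[name] = "no_carryover"
        elif t > t_large:
            verdicts[name] = "large_bias"
        else:
            verdicts[name] = "retest"
    return verdicts
```

Only the middle band of sources proceeds to stage two, which is what keeps the overall sample size small.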


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Marc Aubreville ◽  
Christof A. Bertram ◽  
Christian Marzahl ◽  
Corinne Gurtner ◽  
Martina Dettwiler ◽  
...  

Abstract Manual count of mitotic figures, which is determined in the tumor region with the highest mitotic activity, is a key parameter of most tumor grading schemes. It can, however, depend strongly on area selection due to the uneven distribution of mitotic figures across the tumor section. We aimed to assess how significantly area selection could impact the mitotic count, which is known to have high inter-rater disagreement. On a data set of 32 whole slide images of H&E-stained canine cutaneous mast cell tumor, fully annotated for mitotic figures, we asked eight veterinary pathologists (five board-certified, three in training) to select a field of interest for the mitotic count. To assess the potential difference in the mitotic count, we compared the mitotic count of the selected regions to the overall distribution on the slide. Additionally, we evaluated three deep learning-based methods for identifying the region of highest mitotic density: in the first approach, the model directly predicts the mitotic count for the presented image patches as a regression task; the second method derives a segmentation mask for mitotic figures, which is then used to obtain a mitotic density; finally, we evaluated a two-stage object-detection pipeline based on state-of-the-art architectures to identify individual mitotic figures. We found that the predictions by all models were, on average, better than those of the experts. The two-stage object detector performed best and outperformed most of the human pathologists on the majority of tumor cases. The correlation between the predicted and the ground-truth mitotic count was also best for this approach (0.963–0.979). Further, we found considerable differences in position selection between pathologists, which could partially explain the high variance reported for the manual mitotic count. 
To achieve better inter-rater agreement, we propose using computer-based area selection to support the pathologist in the manual mitotic count.
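Once mitotic figure positions are available (from annotations or a detector), the computer-based area selection proposed above can be approximated by a grid search for the fixed-size field with the highest count. The window size, stride, and coordinates below are arbitrary illustrations, not the authors' pipeline:

```python
def best_field(coords, slide_w, slide_h, win=512, stride=256):
    """Return (count, x, y) of the window containing the most
    mitotic figures, scanning the slide on a regular grid."""
    best = (0, 0, 0)
    for y in range(0, max(1, slide_h - win + 1), stride):
        for x in range(0, max(1, slide_w - win + 1), stride):
            c = sum(1 for px, py in coords
                    if x <= px < x + win and y <= py < y + win)
            if c > best[0]:
                best = (c, x, y)
    return best
```

Presenting the top-ranked window to the pathologist removes the subjective area-selection step while keeping the count itself manual.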

