Classification of cognitive systems dedicated to data sharing

Author(s):  
Lidia D. Ogiela ◽  
Marek R. Ogiela
2019 ◽  
Vol 21 (3) ◽  
pp. 936-945 ◽  
Author(s):  
Charles Vesteghem ◽  
Rasmus Froberg Brøndum ◽  
Mads Sønderkær ◽  
Mia Sommer ◽  
Alexander Schmitz ◽  
...  

Compelling research has recently shown that cancer is so heterogeneous that single research centres cannot produce enough data to fit prognostic and predictive models of sufficient accuracy. Data sharing in precision oncology is therefore of utmost importance. The Findable, Accessible, Interoperable and Reusable (FAIR) Data Principles have been developed to define good practices in data sharing. Motivated by the ambition of applying the FAIR Data Principles to our own clinical precision oncology implementations and research, we have performed a systematic literature review of potentially relevant initiatives. For clinical data, we suggest using the Genomic Data Commons model as a reference, as it provides a field-tested and well-documented solution. Regarding the classification of diagnoses, morphology and topography, and drugs, we chose to follow the World Health Organization standards, i.e. the ICD-10, ICD-O-3 and Anatomical Therapeutic Chemical (ATC) classifications, respectively. For the bioinformatics pipeline, the Genome Analysis ToolKit (GATK) Best Practices using Docker containers offer a coherent solution and have therefore been selected. Regarding the naming of variants, we follow the Human Genome Variation Society (HGVS) standard. For the IT infrastructure, we have built a centralized solution to participate in data sharing through federated solutions such as the Beacon Network.
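As a rough illustration of what standardized variant naming buys in practice, the sketch below checks whether a string matches the simplest HGVS substitution form. The pattern and the example name are our own simplification, not from the article; real validation should rely on official HGVS tooling, which covers many more variant types.

```python
import re

# Minimal, illustrative pattern for a simple HGVS-style substitution,
# e.g. "NM_000546.6:c.215C>G" (versioned accession : coordinate type . position ref>alt).
# Real HGVS nomenclature covers far more (indels, intronic offsets, protein
# level, ...), so this sketch accepts only the simplest substitution form.
HGVS_SUBST = re.compile(
    r"^(?P<ref>[A-Z]+_\d+\.\d+)"         # versioned reference sequence accession
    r":(?P<kind>[cgmnr])\."              # coordinate type (c. = coding, g. = genomic, ...)
    r"(?P<pos>\d+)"                      # 1-based position
    r"(?P<src>[ACGT])>(?P<dst>[ACGT])$"  # single-nucleotide substitution
)

def is_simple_hgvs_substitution(name: str) -> bool:
    """Return True if `name` matches the simple substitution form above."""
    return HGVS_SUBST.match(name) is not None
```

For example, `is_simple_hgvs_substitution("NM_000546.6:c.215C>G")` accepts the name, while an accession without a version number is rejected, since HGVS recommends versioned references.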


2005 ◽  
Vol 6 (3) ◽  
pp. 213-220 ◽  
Author(s):  
Jianchang Qi ◽  
Vadim Shapiro

Geometric data interoperability is critical in industrial applications where geometric data are transferred (translated) among multiple modeling systems for data sharing and reuse. A big obstacle in data translation lies in that geometric data are usually imprecise and geometric algorithm precisions vary from system to system. In the absence of common formal principles, both industry and academia embraced ad hoc solutions, costing billions of dollars in lost time and productivity. This paper explains how the problem of interoperability, and data translation in particular, may be formulated and studied in terms of a recently developed theory of ε-solidity. Furthermore, a systematic classification of problems in data translation shows that in most cases ε-solids can be maintained without expensive and arbitrary geometric repairs.
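The imprecision problem can be made concrete with a toy sketch (our own, not from the paper): exact floating-point comparison declares two nearly identical vertices different, whereas a tolerance-aware test in the spirit of ε-solidity treats them as the same geometric entity.

```python
import math

def points_coincide(p, q, eps):
    """Tolerance-aware equality: treat two points as the same vertex
    if their Euclidean distance is below the tolerance eps."""
    return math.dist(p, q) < eps

# A vertex exported by system A and re-imported by system B may drift
# within each system's numerical precision; exact comparison would then
# call the "same" vertex two different points.
a = (1.0, 2.0, 3.0)
b = (1.0 + 1e-9, 2.0, 3.0)

print(a == b)                       # False: exact comparison fails
print(points_coincide(a, b, 1e-6))  # True: tolerance-aware comparison succeeds
```

The design point is that the tolerance ε must be chosen consistently across systems; a receiving system with a tighter tolerance than the sender is exactly where translation failures and "repairs" arise.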


Stroke ◽  
2016 ◽  
Vol 47 (suppl_1) ◽  
Author(s):  
Joy R Esterlitz ◽  
Jeffrey L Saver ◽  
Steven Warach ◽  
Thomas G Brott ◽  
Ralph L Sacco ◽  
...  

Introduction: To increase the efficiency and effectiveness of neurovascular clinical research studies, increase data quality, facilitate data sharing, help educate new clinical investigators and reduce study start-up time, the National Institute of Neurological Disorders and Stroke (NINDS) convened a Working Group (WG) that developed Version 1.0 (published 2010) of the stroke-specific Common Data Elements (CDEs). Since their initial publication, intervening advances in science and initial experience with the CDEs identified a need to update them and to refine guidance on their deployment. Hypothesis/Objective: The NINDS has updated guidance on uniform data structures for use in cerebrovascular research in epidemiology, clinical trials and imaging studies in order to advance the prevention of, acute treatment of and recovery from cerebrovascular disease. Methods: The NINDS convened experts in research and data element design, drawing strongly from investigators in the NIH StrokeNet and other NINDS clinical research projects. Results: Stroke CDE leadership developed a revised process for classifying Stroke CDEs among the four hierarchical categories of Core, Supplemental - Highly Recommended, Supplemental and Exploratory. Due to the heterogeneity of stroke conditions and study types, the classification of Supplemental - Highly Recommended was used for study type (clinical trial or observational), disease type (e.g., ischemic stroke, intracerebral hemorrhage, subarachnoid hemorrhage) and disease phase (primary prevention, acute, recovery and secondary prevention). Conclusion: The second iteration of NINDS CDE recommendations for neurovascular disease is an important step towards more efficient study start-up and improved data sharing. The updated CDEs were released on the NINDS CDE website in May 2015.
The information at this meeting will include examples of how the Stroke CDEs may be used by a research study, an explanation of the new CDE classifications, and examples of navigating and selecting CDEs from the NINDS CDE website. Support: This project was funded by HHSN271201200034C.
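A minimal data-model sketch of the four hierarchical CDE categories described above (field and example names are ours, chosen for illustration; the authoritative definitions live on the NINDS CDE website):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CDECategory(Enum):
    """The four hierarchical categories used to classify Stroke CDEs."""
    CORE = "Core"
    SUPPLEMENTAL_HIGHLY_RECOMMENDED = "Supplemental - Highly Recommended"
    SUPPLEMENTAL = "Supplemental"
    EXPLORATORY = "Exploratory"

@dataclass
class CommonDataElement:
    """Toy record for a single CDE; the fields mirror the dimensions the
    abstract says drive the Supplemental - Highly Recommended assignment."""
    name: str
    category: CDECategory
    study_type: Optional[str] = None    # e.g. "clinical trial" or "observational"
    disease_type: Optional[str] = None  # e.g. "ischemic stroke"
    disease_phase: Optional[str] = None # e.g. "acute", "recovery"

# Hypothetical example: an element recommended for ischemic-stroke trials.
nihss = CommonDataElement(
    name="NIH Stroke Scale score",
    category=CDECategory.SUPPLEMENTAL_HIGHLY_RECOMMENDED,
    study_type="clinical trial",
    disease_type="ischemic stroke",
    disease_phase="acute",
)
```

Encoding the category as an enumeration rather than free text is one way a study database can keep the classification consistent across forms and sites.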


2011 ◽  
Vol 25 ◽  
pp. 104-147 ◽  
Author(s):  
Galit W. Sassoon

Classification of entities into categories can be determined based on a rule – a single criterion or relatively few criteria combined with logical operations like ‘and’ or ‘or’. Alternatively, classification can be based on similarity to prototypical examples, i.e. an overall degree of match to prototypical values on multiple dimensions. Two cognitive systems are reported in the literature to underlie processing by rules vs. similarity. This paper presents a novel thesis according to which adjectives and nouns trigger processing by the rule vs. similarity systems, respectively. The paper defends the thesis that nouns are conceptually gradable and multidimensional, but, unlike adjectives, their dimensions are integrated through similarity operations, like weighted sums, to yield an overall degree of match to ideal values on multiple dimensions. By contrast, adjectives are associated with single dimensions, or several dimensions bound by logical operations, such as ‘and’ and ‘or’. Accordingly, nouns are predicted to differ from adjectives semantically, developmentally, and processing-wise. Similarity-based dimension integration is implicit – processing is automatic, fast, and beyond speaker awareness – whereas logical, rule-based dimension integration is explicit, and is acquired late. The paper highlights a number of links between findings reported in the literature about rule- vs. similarity-based categorization and corresponding structural, distributional, neural and developmental findings about adjectives and nouns. These links suggest that the rule vs. similarity (RS) hypothesis for the adjective-noun distinction should be studied more directly in the future.
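The rule vs. similarity contrast can be sketched computationally (an illustration of ours, with made-up dimensions, prototypes and weights, not drawn from the paper):

```python
def rule_member(entity: dict) -> bool:
    """Rule-based (adjective-like) classification: a few explicit criteria
    joined by logical operations such as 'and'."""
    return entity["length_cm"] > 10 and entity["rigid"]

def similarity_degree(entity: dict, prototype: dict, weights: dict) -> float:
    """Similarity-based (noun-like) classification: a weighted sum of matches
    to prototypical values, yielding a graded degree of membership in [0, 1]."""
    total = sum(weights.values())
    score = sum(
        w * (1.0 if entity.get(dim) == prototype.get(dim) else 0.0)
        for dim, w in weights.items()
    )
    return score / total

# A toy "bird" prototype with weighted dimensions.
bird_prototype = {"flies": True, "feathers": True, "sings": True}
weights = {"flies": 2.0, "feathers": 3.0, "sings": 1.0}

# A penguin matches only the heavily weighted "feathers" dimension, so it
# receives partial, graded membership rather than a yes/no verdict.
penguin = {"flies": False, "feathers": True, "sings": False}
print(similarity_degree(penguin, bird_prototype, weights))  # 0.5
```

The rule gives a crisp boolean, while the weighted sum yields a degree of match, mirroring the implicit, graded integration the paper attributes to nouns.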


Author(s):  
Adam Csapo ◽  
Barna Resko ◽  
Morten Lind ◽  
Peter Baranyi ◽  
Domonkos Tikk

The computerized modeling of cognitive visual information has been a research field of great interest in the past several decades. The research field is interesting not only from a biological perspective, but also from an engineering point of view when systems are developed that aim to achieve similar goals as biological cognitive systems. This paper introduces a general framework for the extraction and systematic storage of low-level visual features. The applicability of the framework is investigated in both unstructured and highly structured environments. In a first experiment, a linear categorization algorithm originally developed for the classification of text documents is used to classify natural images taken from the Caltech 101 database. In a second experiment, the framework is used to provide an automatically guided vehicle with obstacle detection and auto-positioning functionalities in highly structured environments. Results demonstrate that the model is highly applicable in structured environments, and also shows promising results in certain cases when used in unstructured environments.


2020 ◽  
Vol 9 (3) ◽  
pp. 1260-1267
Author(s):  
Agus Eko Minarno ◽  
Fauzi Dwi Setiawan Sumadi ◽  
Hardianto Wibowo ◽  
Yuda Munarko

This study compares which method better classifies Batik images, K-Nearest Neighbor (KNN) or Support Vector Machine (SVM), using a minimal set of GLCM features. The proposed steps start by converting the image to grayscale and extracting texture features using four GLCM features: Energy, Entropy, Contrast and Correlation, each computed at the angles 0°, 45°, 90° and 135°, for 16 features in total. The experimental results are compared with previous work on KNN and SVM classification using the multi texton histogram (MTH). The experiments are carried out as accuracy calculations under two scenarios: data sharing (a hold-out train/test split) and cross-validation. From the test results, the average accuracy in the cross-validation scenario is 78.3% for KNN and 92.3% for SVM. In the data-sharing scenario, the highest accuracy is 70% for KNN and 100% for SVM. Thus, it is apparent that applying GLCM features with SVM to extract and classify batik motifs is effective and outperforms the previous work.
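The GLCM feature-extraction step can be sketched in pure Python (our own toy example: a tiny 2-level image, and only three of the properties; the study uses four features at four angles for 16 features in total, typically computed with a library such as scikit-image on quantized grayscale images):

```python
import math

def glcm(image, dx, dy, levels):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized so the entries are joint probabilities."""
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def glcm_features(p):
    """Energy, entropy and contrast of a normalized GLCM
    (correlation, also used in the study, is omitted for brevity)."""
    energy = sum(v * v for row in p for v in row)
    entropy = -sum(v * math.log2(v) for row in p for v in row if v > 0)
    contrast = sum(v * (i - j) ** 2
                   for i, row in enumerate(p) for j, v in enumerate(row))
    return energy, entropy, contrast

# Pixel offsets corresponding to the four angles 0°, 45°, 90° and 135°.
OFFSETS = {0: (1, 0), 45: (1, -1), 90: (0, -1), 135: (-1, -1)}

# Tiny 2-level toy "image"; a real batik image would first be converted
# to grayscale and quantized to a small number of levels.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]

features = [f for (dx, dy) in OFFSETS.values()
            for f in glcm_features(glcm(img, dx, dy, levels=2))]
# 3 features x 4 angles = a 12-dimensional vector in this sketch
# (the study's 4 features x 4 angles give 16 dimensions).
```

The resulting per-image feature vector is what would then be fed to the KNN or SVM classifier under the hold-out or cross-validation scenarios.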

