mapping rules: Recently Published Documents

Total documents: 91 (five years: 13)
H-index: 6 (five years: 1)

Author(s): Carina Nina Vorisek, Sophie Anne Ines Klopfenstein, Julian Sass, Moritz Lehne, Carsten Oliver Schmidt, ...

Studies investigating the suitability of SNOMED CT for COVID-19 datasets are still scarce. The purpose of this study was to evaluate the suitability of SNOMED CT for structured searches of COVID-19 studies, using the German Corona Consensus Dataset (GECCO) as an example. The suitability of the international standard SNOMED CT was measured with the ISO/TS 21564 scoring system, and the intercoder reliability of two independent mapping specialists was evaluated. The analysis showed that the majority of data items had either a complete or a partial equivalent in SNOMED CT (complete equivalent: 141 items; partial equivalent: 63 items; no equivalent: 1 item). Intercoder reliability was moderate, possibly because no mapping rules had been established and because a high percentage (74%) of the 86 diverging concept choices were different but semantically similar concepts. The study shows that SNOMED CT can be used for COVID-19 cohort browsing. However, further studies investigating mapping rules and additional international terminologies are necessary.
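Intercoder reliability between two mapping specialists is typically quantified with a chance-corrected agreement statistic such as Cohen's kappa; the abstract does not name the exact measure, so the following Python sketch is only an illustration, and the SNOMED CT concept choices of the two hypothetical mappers are invented.

    from collections import Counter

    # Hypothetical concept choices of two independent mapping specialists
    # for the same five GECCO data items (concept IDs are illustrative).
    mapper_a = ["840539006", "386661006", "267036007", "840539006", "29857009"]
    mapper_b = ["840539006", "386661006", "49727002", "840539006", "29857009"]

    def cohens_kappa(a, b):
        """Cohen's kappa for two raters assigning one concept per item."""
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        # Expected chance agreement from each rater's marginal frequencies.
        ca, cb = Counter(a), Counter(b)
        expected = sum(ca[c] * cb[c] for c in set(a) | set(b)) / (n * n)
        return (observed - expected) / (1 - expected)

    print(f"kappa = {cohens_kappa(mapper_a, mapper_b):.2f}")  # kappa = 0.74

A moderate kappa, as reported in the study, is plausible when raters pick different but semantically similar concepts, since such near-misses count as full disagreements.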


2020, Vol. 59 (S 02), pp. e64–e78
Author(s): Antje Wulff, Marcel Mast, Marcus Hassler, Sara Montag, Michael Marschollek, ...

Abstract

Background: Merging disparate and heterogeneous datasets from clinical routine into a standardized and semantically enriched format to enable multiple uses of the data also means incorporating unstructured data such as medical free texts. Although the extraction of structured data from texts, known as natural language processing (NLP), has been researched extensively, at least for the English language, obtaining structured output in an arbitrary format is not enough. NLP techniques need to be combined with clinical information standards such as openEHR so that data which are still unstructured can be sensibly reused and exchanged.

Objectives: The aim of the study is to automatically extract crucial information from medical free texts and to transform this unstructured clinical data into a standardized and structured representation by designing and implementing an exemplary pipeline for the processing of pediatric medical histories.

Methods: We constructed a pipeline that allows reusing medical free texts such as pediatric medical histories in a structured and standardized way by (1) selecting and modeling appropriate openEHR archetypes as standard clinical information models, (2) defining a German dictionary with crucial text markers serving as the expert knowledge base for an NLP pipeline, and (3) creating mapping rules between the NLP output and the archetypes. The approach was evaluated in a first pilot study using 50 manually annotated medical histories from the pediatric intensive care unit of the Hannover Medical School.

Results: We successfully reused 24 existing international archetypes to represent the most crucial elements of unstructured pediatric medical histories in a standardized form. The self-developed NLP pipeline was constructed by defining 3,055 text marker entries, 132 text events, 66 regular expressions, and a text corpus of 776 entries for the automatic correction of spelling mistakes. A total of 123 mapping rules were implemented to transform the extracted snippets into an openEHR-based representation so that they can be stored together with other structured data in an existing openEHR-based data repository. In the first evaluation, the NLP pipeline yielded 97% precision and 94% recall.

Conclusion: The use of NLP and openEHR archetypes was demonstrated as a viable approach for extracting and representing important information from pediatric medical histories in a structured and semantically enriched format. We designed a promising approach with the potential to be generalized, and implemented a prototype that is extensible and reusable for other use cases concerning German medical free texts. In the long term, this will harness unstructured clinical data for further research purposes such as the design of clinical decision support systems. Together with structured data already integrated in openEHR-based representations, we aim to develop an interoperable openEHR-based application capable of automatically assessing a patient's risk status based on the patient's medical history at the time of admission.
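To make the pipeline idea concrete, here is a minimal Python sketch of the marker-dictionary and mapping-rule steps; the marker patterns, archetype IDs, and coded values are illustrative assumptions, not the authors' actual rules.

    import re

    # Illustrative mapping rules (assumed, not from the paper): German
    # text markers mapped to openEHR-style archetype IDs and coded values.
    MARKERS = {
        r"\bFr(?:ü|ue)hgeburt\b": ("openEHR-EHR-EVALUATION.problem_diagnosis.v1", "premature birth"),
        r"\bBeatmung\b": ("openEHR-EHR-ACTION.procedure.v1", "mechanical ventilation"),
    }

    def extract(history_text):
        """Apply the mapping rules to a free-text medical history and
        return structured (archetype, value) entries."""
        entries = []
        for pattern, (archetype, value) in MARKERS.items():
            if re.search(pattern, history_text, flags=re.IGNORECASE):
                entries.append({"archetype": archetype, "value": value})
        return entries

    print(extract("Z.n. Frühgeburt in der 29. SSW, Beatmung für 3 Tage"))

In the real pipeline, the extracted entries would then be validated against the archetypes and stored in the openEHR-based repository alongside the structured data.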


Sensors, 2020, Vol. 20 (18), p. 5242
Author(s): Mingyuan Cao, Lihua Tian, Chen Li

Recently, many video steganography algorithms based on the intra-prediction mode (IPM) have been adaptive steganography algorithms. These algorithms usually focus on mapping rules and distortion functions while ignoring the fact that adaptive steganography may not be suitable for IPM-based video steganography: an adaptive steganography algorithm must calculate the embedding cost of the entire cover before the first secret bit is embedded, yet modifying an IPM may change the pixel values of the current block and of adjacent blocks, which in turn changes the cost of the following blocks. To avoid this problem, a new secure video steganography based on a novel embedding strategy is proposed in this paper, in which steganography is combined with video encoding. First, the frame is encoded by an original encoder and all the relevant information is saved. Candidate blocks are found according to this information and the mapping rules. Then, every qualified block is analyzed, and a one-bit message is embedded during intra-prediction encoding. Finally, if the IPM of a block has been changed, the residual values are modified in order to preserve the optimality of the modified IPM. Experimental results indicate that our algorithm has good security performance and little impact on video quality.
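As a rough illustration of such an embedding strategy, the Python sketch below hides one bit in the parity of a block's IPM, preferring the encoder's optimal mode; the parity rule and all names here are assumptions, not the paper's actual mapping rule.

    # Toy embedding step (assumed parity mapping, not the paper's rule):
    # each qualified block carries one secret bit in the parity of its
    # intra-prediction mode (IPM).
    def embed_bit(best_ipm, candidate_ipms, bit):
        """Return an IPM whose parity equals the secret bit, preferring
        the rate-distortion-optimal mode to limit quality loss."""
        if best_ipm % 2 == bit:
            return best_ipm                 # optimal mode already fits
        for ipm in candidate_ipms:          # assumed sorted by RD cost
            if ipm % 2 == bit:
                return ipm                  # cheapest mode with right parity
        return best_ipm                     # no fit: skip this block

    print(embed_bit(best_ipm=6, candidate_ipms=[6, 3, 8, 1], bit=1))  # -> 3

When the returned mode differs from the optimal one, the block's residual would then be recomputed, which corresponds to the compensation step the abstract describes.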


Author(s): David Chaves-Fraga, Freddy Priyatna, Ahmad Alobaid, Oscar Corcho

In the last decade, REST has become the most common approach to providing web services, yet it was not originally designed to handle typical modern applications (e.g., mobile apps). GraphQL was proposed to reduce the number of queries and the amount of data exchanged in comparison with REST, and since its release in 2015 it has gained momentum as an alternative approach. However, generating and maintaining GraphQL resolvers is not a simple task. First, a domain expert has to analyze a dataset, design the corresponding GraphQL schema, and map the dataset to the schema. Then, a software engineer (e.g., a GraphQL developer) implements the corresponding GraphQL resolvers in a specific programming language. In this paper, we present an approach that exploits the information in mapping rules (the relation between target and source schemas) to generate a GraphQL server. The mapping rules construct a virtual knowledge graph that is accessed by the generated GraphQL resolvers, which translate the input GraphQL queries into the queries supported by the underlying dataset. Both domain experts and software developers may benefit from our approach: a domain expert does not need to involve software developers to implement the resolvers, and software developers can generate an initial version of the resolvers to be implemented. We implemented our approach in the Morph-GraphQL framework and evaluated it using the LinGBM benchmark.
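The core idea, generating resolvers from mapping rules, can be sketched in a few lines of Python; the mapping format, names, and direct-to-SQL translation below are simplified assumptions, not Morph-GraphQL's actual implementation.

    # Assumed toy mapping rule: a GraphQL type is related to a source
    # table, and each GraphQL field to a source column.
    MAPPING = {
        "Author": {"source": "authors",
                   "fields": {"id": "author_id", "name": "full_name"}},
    }

    def make_resolver(type_name):
        """Generate a resolver that translates a GraphQL field selection
        into a query over the underlying source."""
        rule = MAPPING[type_name]
        def resolve(selected_fields):
            cols = ", ".join(rule["fields"][f] for f in selected_fields)
            return f"SELECT {cols} FROM {rule['source']}"
        return resolve

    resolve_author = make_resolver("Author")
    print(resolve_author(["id", "name"]))  # SELECT author_id, full_name FROM authors

In the paper's approach, the mapping rules (expressed in a mapping language such as R2RML) additionally construct a virtual knowledge graph, so the generated resolvers query that graph rather than emitting SQL directly.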

