Theorising Learning in Science Through Integrating Multimodal Representations

Author(s): Vaughan Prain ◽ Russell Tytler

Author(s): Stefano Anzellotti ◽ Alfonso Caramazza
Cortex ◽ 2017 ◽ Vol 89 ◽ pp. 85-97

Author(s): Jaehyun Lee ◽ Doheon Lee ◽ Kwang Hyung Lee
2020 ◽ Vol 21 (S5)

Abstract: Biological contextual information helps us understand the various phenomena that occur in biological systems, which consist of complex molecular relations. The construction of context-specific relational resources still relies heavily on laborious manual extraction from unstructured literature. In this paper, we propose COMMODAR, a machine-learning-based literature mining framework for extracting context-specific molecular relations using multimodal representations. The main idea of COMMODAR is feature augmentation through the cooperation of multimodal representations for relation extraction: we leveraged biomedical domain knowledge as well as canonical linguistic information to build more comprehensive representations of textual sources. Models based on multiple modalities outperformed those based solely on the linguistic modality. We applied COMMODAR to 14 million PubMed abstracts and extracted 9,214 context-specific molecular relations. All corpora, extracted data, evaluation results, and the implementation code are available at https://github.com/jae-hyun-lee/commodar.

CCS Concepts: • Computing methodologies → Information extraction • Computing methodologies → Neural networks • Applied computing → Biological networks
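The abstract describes feature augmentation by cooperating multimodal representations: a linguistic encoding of the sentence is combined with representations drawn from biomedical domain knowledge before relation classification. The sketch below illustrates one common way such augmentation is wired up; it is not the authors' architecture (see the repository above for that), and the BiLSTM encoder, knowledge-graph entity embeddings, dimensions, and all names are illustrative assumptions.

```python
# A minimal sketch (not COMMODAR's actual code) of multimodal feature
# augmentation for relation extraction: a pooled sentence encoding
# (linguistic modality) is concatenated with knowledge-graph embeddings
# of the two candidate molecules (domain-knowledge modality) before a
# relation classifier. All dimensions and names are assumptions.

import torch
import torch.nn as nn

class MultimodalRelationClassifier(nn.Module):
    def __init__(self, vocab_size, n_entities, n_relations,
                 text_dim=128, kg_dim=64, hidden_dim=256):
        super().__init__()
        # Linguistic modality: token embeddings encoded by a BiLSTM.
        self.tok_emb = nn.Embedding(vocab_size, text_dim, padding_idx=0)
        self.encoder = nn.LSTM(text_dim, text_dim, batch_first=True,
                               bidirectional=True)
        # Domain-knowledge modality: entity embeddings that would be
        # pretrained on a biomedical knowledge graph in practice.
        self.kg_emb = nn.Embedding(n_entities, kg_dim)
        # Feature augmentation: concatenate both modalities, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(2 * text_dim + 2 * kg_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_relations),
        )

    def forward(self, tokens, head_entity, tail_entity):
        # tokens: (batch, seq_len); head/tail_entity: (batch,)
        h, _ = self.encoder(self.tok_emb(tokens))
        sent = h.mean(dim=1)  # mean-pooled sentence vector
        kg = torch.cat([self.kg_emb(head_entity),
                        self.kg_emb(tail_entity)], dim=-1)
        return self.classifier(torch.cat([sent, kg], dim=-1))

# Toy usage: score relation types for one sentence/entity-pair candidate.
model = MultimodalRelationClassifier(vocab_size=5000, n_entities=300,
                                     n_relations=4)
tokens = torch.randint(1, 5000, (1, 20))
logits = model(tokens, torch.tensor([7]), torch.tensor([42]))
print(logits.shape)  # torch.Size([1, 4])
```

The design point the abstract makes is visible in the concatenation step: a model that drops the `kg` vector falls back to the purely linguistic modality, which is the baseline the multimodal models reportedly outperformed.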

