BRES: EXTRACTING MULTICLASS BIOMEDICAL RELATIONS WITH SEMANTIC NETWORK

2013 ◽  
Vol 25 (01) ◽  
pp. 1350009
Author(s):  
Lejun Gong ◽  
Ronggen Yang ◽  
Xiao Sun

With an overwhelming amount of published biomedical research, the underlying biomedical knowledge is expanding at an exponential rate. This expansion makes it very difficult to find relevant genetic knowledge, so there is an urgent need for text mining approaches that discover new knowledge from publications. This paper presents a text mining approach for multiclass biomedical relations based on predicate argument structure (PAS) and shallow parsing. The approach can mine explicit biomedical relations with semantic enrichment and visualize the relations as a semantic network. It first identifies noun phrases via shallow parsing, and then filters arguments from those noun phrases using a biomedical ontology dictionary. We have implemented BRES, a text mining system, based on the proposed approach. On the test dataset it achieved a 67.7% F-measure, 62.5% precision, and 73.8% recall, which suggests the proposed approach is promising for developing biomedical text mining technology. Highlights: • Mining multiclass biomedical relations; • Representing biomedical relations with semantic enrichment; • Visualizing relations with a semantic network; • Extracting direct and indirect biomedical relations.
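The two-stage pipeline described above (chunk noun phrases, then filter them against an ontology dictionary) can be sketched minimally as follows. This is a hypothetical illustration, not the BRES implementation: the POS-tag-based chunker stands in for a real shallow parser, and ONTOLOGY_DICT is a toy stand-in for a biomedical ontology dictionary.

```python
# Toy ontology dictionary; a real system would load e.g. gene/protein vocabularies.
ONTOLOGY_DICT = {"brca1", "p53", "apoptosis"}

def chunk_noun_phrases(tagged):
    """Greedy NP chunking over (word, POS) pairs: runs of determiners,
    adjectives, and nouns form one phrase (a stand-in for shallow parsing)."""
    phrases, current = [], []
    for word, tag in tagged:
        if tag in ("DT", "JJ", "NN", "NNS", "NNP"):
            current.append(word)
        else:
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

def filter_arguments(phrases, ontology):
    """Keep phrases whose head noun (last token) appears in the ontology."""
    return [p for p in phrases if p.split()[-1].lower() in ontology]

tagged = [("BRCA1", "NNP"), ("inhibits", "VBZ"), ("apoptosis", "NN"),
          ("in", "IN"), ("tumor", "NN"), ("cells", "NNS")]
nps = chunk_noun_phrases(tagged)        # ["BRCA1", "apoptosis", "tumor cells"]
args = filter_arguments(nps, ONTOLOGY_DICT)   # ["BRCA1", "apoptosis"]
```

Only the ontology-backed phrases survive as relation arguments; "tumor cells" is chunked but filtered out because its head is not in the dictionary.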

Author(s):  
Diane Massam

This book presents a detailed descriptive and theoretical examination of predicate-argument structure in Niuean, a Polynesian language within the Oceanic branch of the Austronesian family, spoken mainly on the Pacific island of Niue and in New Zealand. Niuean has VSO word order and an ergative case-marking system, both of which raise questions for a subject-predicate view of sentence structure. Working within a broadly Minimalist framework, this volume develops an analysis in which syntactic arguments are not merged locally to their thematic sources, but instead are merged high, above an inverted extended predicate which serves syntactically as the Niuean verb, later undergoing movement into the left periphery of the clause. The thematically lowest argument merges as an absolutive inner subject, with higher arguments merging as applicatives. The proposal relates Niuean word order and ergativity to its isolating morphology, by equating the absence of inflection with the absence of IP in Niuean, which impacts many aspects of its grammar. As well as developing a novel analysis of clause and argument structure, word order, ergative case, and theta role assignment, the volume argues for an expanded understanding of subjecthood. Throughout the volume, many other topics are also treated, such as noun incorporation, word formation, the parallel internal structure of predicates and arguments, null arguments, displacement typology, the role of determiners, and the structure of the left periphery.


2017 ◽  
Vol 139 (11) ◽  
Author(s):  
Feng Shi ◽  
Liuqing Chen ◽  
Ji Han ◽  
Peter Childs

With the advent of the big-data era, the massive information stored in electronic and digital forms on the internet has become a valuable resource for knowledge discovery in engineering design. Traditional document retrieval methods based on document indexing focus on retrieving individual documents related to the query, but are incapable of discovering the various associations between individual knowledge concepts. Ontology-based technologies, which can extract the inherent relationships between concepts using advanced text mining tools, can be applied to improve design information retrieval in large-scale unstructured textual data environments. However, few of the publicly available ontology databases take a design and engineering perspective in establishing the relations between knowledge concepts. This paper develops a "WordNet" focused on design and engineering associations by integrating text mining approaches to construct an unsupervised learning ontology network. Subsequent probability and velocity network analyses are applied with different statistical behaviors to evaluate the degree of correlation between concepts for design information retrieval. The validation results show that the probability and velocity analysis on our constructed ontology network can help recognize highly related, complex design and engineering associations between elements. Finally, an engineering design case study demonstrates the use of our constructed semantic network in a real-world project for design relation retrieval.
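One simple way an association score between concepts can be estimated from a document collection is co-occurrence probability. The sketch below is a hypothetical illustration of that general idea, not the paper's actual probability or velocity analysis: it scores the association from concept a to concept b as the fraction of documents containing a that also contain b.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each document reduced to its set of concept terms.
docs = [
    {"gear", "shaft", "torque"},
    {"gear", "torque", "motor"},
    {"motor", "battery"},
]

term_count = Counter()
pair_count = Counter()
for doc in docs:
    term_count.update(doc)
    # frozenset keys make pair counts order-independent
    pair_count.update(frozenset(p) for p in combinations(sorted(doc), 2))

def assoc(a, b):
    """Directed association score: P(b | a), the fraction of documents
    containing a that also contain b."""
    return pair_count[frozenset((a, b))] / term_count[a]

assoc("gear", "torque")   # 1.0  (every "gear" document also mentions "torque")
assoc("motor", "gear")    # 0.5
```

The resulting scores define weighted edges of a concept network; note the measure is asymmetric, since assoc(a, b) and assoc(b, a) normalize by different term frequencies.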


1997 ◽  
Vol 20 (1) ◽  
pp. 31-62
Author(s):  
Nancy L. Underwood

This paper presents an overview of the first broad coverage grammatical description of Danish in a Typed Feature Structure (TFS) based unification formalism inspired by HPSG. These linguistic specifications encompass phenomena within inflectional morphology, phrase structure and predicate argument structure, and have been developed with a view to implementation. The emphasis on implementability and re-usability of the specifications has led to the adoption of a rather leaner formal framework than that underlying HPSG. However, the paper shows that the adoption of such a framework does not lead to a loss of expressibility, but in fact enables certain phenomena, such as the interface between morphology and syntax and local discontinuities, to be treated in a simple and elegant fashion.
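The core operation of any such unification formalism is the recursive merging of feature structures. The following is a minimal sketch over untyped feature structures represented as nested dicts; it is illustrative only, and a real TFS system additionally checks compatibility against a type hierarchy.

```python
class UnificationFailure(Exception):
    """Raised when two feature structures carry incompatible atomic values."""
    pass

def unify(a, b):
    """Recursively unify two feature structures (nested dicts or atoms)."""
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for feat, val in b.items():
            # Shared features must unify; features unique to b are copied in.
            result[feat] = unify(result[feat], val) if feat in result else val
        return result
    if a == b:   # identical atomic values unify with themselves
        return a
    raise UnificationFailure(f"{a!r} does not unify with {b!r}")

verb = {"HEAD": {"POS": "verb", "VFORM": "fin"}, "SUBJ": {"CASE": "nom"}}
subj = {"SUBJ": {"CASE": "nom", "NUM": "sg"}}
merged = unify(verb, subj)
# merged["SUBJ"] is {"CASE": "nom", "NUM": "sg"}: information accumulates,
# and a CASE clash (e.g. "acc" vs. "nom") would raise UnificationFailure.
```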


2021 ◽  
pp. 1-48
Author(s):  
Zuchao Li ◽  
Hai Zhao ◽  
Shexia He ◽  
Jiaxun Cai

Abstract Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies with traditional models have shown that syntactic information can make remarkable contributions to SRL performance; however, the necessity of syntactic information was challenged by a few recent neural SRL studies that demonstrate impressive performance without syntactic backbones and suggest that syntax becomes much less important for neural semantic role labeling, especially when paired with recent deep neural networks and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and full investigation of the relevance of syntactic information in SRL, for both dependency- and span-based SRL and in both monolingual and multilingual settings. This paper intends to quantify the importance of syntactic information for neural SRL in the deep learning framework. We introduce three typical SRL frameworks (baselines): sequence-based, tree-based, and graph-based, which are accompanied by two categories of exploiting syntactic information: syntax pruning-based and syntax feature-based. Experiments are conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all available languages, and the results show that neural SRL models can still benefit from syntactic information under certain conditions. Furthermore, we show the quantitative significance of syntax to neural SRL models together with a thorough empirical survey using existing models.
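Of the two categories of exploiting syntax mentioned above, syntax pruning is the easier to illustrate. A classic pruning heuristic (in the spirit of Xue and Palmer's candidate pruning, here adapted to dependency trees) restricts argument candidates to the dependents of the predicate and of each of its ancestors. The sketch below is a hedged, minimal illustration, not any of the paper's three frameworks.

```python
def prune_candidates(heads, predicate):
    """Return argument-candidate token indices for a given predicate.
    heads[i] is the index of token i's syntactic head (-1 for the root)."""
    children = {}
    for tok, head in enumerate(heads):
        children.setdefault(head, []).append(tok)

    candidates, node = set(), predicate
    while node != -1:
        # Collect dependents of the predicate and of each of its ancestors.
        candidates.update(children.get(node, []))
        node = heads[node]
    candidates.discard(predicate)
    return sorted(candidates)

# "The cat chased the mouse": The->cat, cat->chased, the->mouse, mouse->chased
heads = [1, 2, -1, 4, 2]          # token 2 ("chased") is the root
cands = prune_candidates(heads, 2)  # [1, 4]: "cat" and "mouse"
```

For this toy tree the heuristic keeps exactly the two true arguments and discards the determiners, which is the point of pruning: the downstream neural classifier only scores syntactically plausible candidates.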


2010 ◽  
Vol 17 (1) ◽  
pp. 141-159
Author(s):  
Mamoru Komachi ◽  
Ryu Iida ◽  
Kentaro Inui ◽  
Yuji Matsumoto

2020 ◽  
Vol 2 (3) ◽  
pp. p43
Author(s):  
Longxing Wei

There have been numerous studies of first language (L1) transfer in second language (L2) learning. Various models have been proposed to explore the sources of language transfer, and these have also caused many controversies over the nature of language transfer and its effects on interlanguage. Unlike most previous studies, which remain at a surface level of observation, this study proposes an abstract approach: abstract because it goes beyond superficial observation and description by exploring the nature and activity of the bilingual mental lexicon in L2 learning. The approach adopts the Bilingual Lemma Activation Model (BLAM) (Wei, 2006a, 2006b) and tests its crucial assumptions and claims: the bilingual mental lexicon does not simply contain lexemes but abstract entries about them, called "lemmas"; lemmas in the bilingual mental lexicon are language-specific; language-specific lemmas in the bilingual mental lexicon come into contact in L2 learning; and lemmas underlying L1 abstract lexical structure may replace those underlying L2 abstract lexical structure. Lemmas in the bilingual mental lexicon concern three levels of abstract lexical structure: lexical-conceptual structure, predicate-argument structure, and morphological realization patterns. Typical instances of L1 lemma transfer in L2 learning are discussed and explained in support of the BLAM.

