Large-Scale Information Extraction from Textual Definitions through Deep Syntactic and Semantic Analysis

Author(s): Claudio Delli Bovi, Luca Telesca, Roberto Navigli

We present DefIE, an approach to large-scale Information Extraction (IE) based on a syntactic-semantic analysis of textual definitions. Given a large corpus of definitions, we first leverage syntactic dependencies to reduce data sparsity, then disambiguate the arguments and content words of the relation strings, and finally exploit the resulting information to organize the acquired relations hierarchically. The output of DefIE is a high-quality knowledge base consisting of several million automatically acquired semantic relations.
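To make the idea concrete, here is a minimal sketch (not the authors' DefIE pipeline) of how a dependency parse can turn a copular definition into a relation triple. It assumes spaCy with the en_core_web_sm model; dependency labels may vary across model versions.

```python
# A minimal sketch (not the authors' DefIE pipeline): use the dependency
# parse of a copular definition to extract a (subject, relation, object)
# triple. Requires spaCy and its small English model:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triple(definition: str):
    """Extract a hypernym-style triple from definitions like 'X is a Y'."""
    doc = nlp(definition)
    for token in doc:
        # In "A guitar is a string instrument", the genus term attaches
        # to the copula as "attr" and the definiendum as "nsubj".
        if token.dep_ == "attr" and token.head.lemma_ == "be":
            subjects = [t for t in token.head.lefts if t.dep_ == "nsubj"]
            if subjects:
                return (subjects[0].text, token.head.lemma_, token.text)
    return None

print(extract_triple("A guitar is a string instrument."))
# -> ('guitar', 'be', 'instrument')
```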

Author(s): Bonan Min, Shuming Shi, Ralph Grishman, Chin-Yew Lin

The Web brings an open-ended set of semantic relations, and discovering the significant types is very challenging. Unsupervised algorithms have been developed to extract relations from a corpus without knowing the relation types in advance, but most rely on tagging arguments of predefined types. One recently reported system is able to jointly extract relations and their argument semantic classes, taking a set of relation instances extracted by an open IE (Information Extraction) algorithm as input. However, it cannot handle polysemy of relation phrases, and it fails to group many similar ("synonymous") relation instances because of the sparseness of features. In this paper, the authors present a novel unsupervised algorithm that provides a more general treatment of the polysemy and synonymy problems. The algorithm incorporates various knowledge sources, which they show to be very effective for unsupervised relation extraction, and it explicitly disambiguates polysemous relation phrases and groups synonymous ones. While maintaining approximately the same precision, the algorithm achieves a significant improvement in recall compared to the previous method. It is also very efficient: experiments on a real-world dataset show that it can handle 14.7 million relation instances and extract a very large set of relations from the Web.
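For illustration, here is a toy sketch of the clustering intuition only, not the authors' algorithm: each open-IE instance is represented by its relation phrase plus coarse argument-class features, so synonymous phrases can group together while a polysemous phrase used with different argument types can split apart. The instances, argument classes, and cluster count below are invented.

```python
# A toy sketch (not the authors' system) of grouping open-IE relation
# instances: represent each (arg1, phrase, arg2) triple by its relation
# phrase plus coarse argument classes, then cluster. Real systems add far
# richer knowledge sources; all data here is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

instances = [
    ("Paris", "is the capital of", "France"),
    ("Berlin", "is the capital of", "Germany"),
    ("Apple", "is headquartered in", "Cupertino"),
    ("Google", "is based in", "Mountain View"),
]
arg_class = {"Paris": "CITY", "Berlin": "CITY", "France": "COUNTRY",
             "Germany": "COUNTRY", "Apple": "ORG", "Google": "ORG",
             "Cupertino": "CITY", "Mountain View": "CITY"}

# Encode each instance as its phrase tokens plus argument-class features,
# so the same phrase used with different argument types can split apart.
docs = [f"{p} A1_{arg_class[a1]} A2_{arg_class[a2]}" for a1, p, a2 in instances]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for inst, lab in zip(instances, labels):
    print(lab, inst)  # "capital of" instances land in one cluster
```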


Names, 2021, Vol 69 (3), pp. 16-27
Author(s): Rogelio Nazar, Irene Renau, Nicolas Acosta, Hernan Robledo, Maha Soliman, ...

This paper presents a series of methods for automatically determining the gender of proper names, based on their co-occurrence with words and grammatical features in a large corpus. Although the results obtained were for Spanish given names, the method presented here can be easily replicated and used for names in other languages. Most methods reported in the literature use pre-existing lists of first names that require costly manual processing and tend to become quickly outdated. Instead, we propose using corpora, which offer the possibility of obtaining real and up-to-date name-gender links. To test the effectiveness of our method, we explored various machine-learning methods as well as a method based on simple frequency of co-occurrence. The latter produced the best results: 93% precision and 88% recall on a database of ca. 10,000 mixed names. Our method can be applied to a variety of natural language processing tasks such as information extraction, machine translation, anaphora resolution, or the large-scale delivery of email correspondence, among others.
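A minimal sketch of the frequency-of-co-occurrence idea, under stated assumptions: a plain-text corpus and a small seed list of gendered Spanish context words. The paper's feature inventory, which also includes grammatical features, is far richer.

```python
# A minimal sketch of the frequency-of-co-occurrence idea: count gendered
# context words within a window around each mention of the name and take
# the majority. Seed word lists and window size are illustrative.
from collections import Counter
import re

FEM = {"ella", "señora", "hija", "actriz"}
MASC = {"él", "señor", "hijo", "actor"}

def guess_gender(name: str, corpus: str, window: int = 5) -> str:
    tokens = re.findall(r"\w+", corpus.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == name.lower():
            context = tokens[max(0, i - window): i + window + 1]
            counts["f"] += sum(w in FEM for w in context)
            counts["m"] += sum(w in MASC for w in context)
    if counts["f"] == 0 and counts["m"] == 0:
        return "unknown"
    return "feminine" if counts["f"] >= counts["m"] else "masculine"

corpus = "La señora María habló con su hija. Él saludó al señor Juan."
print(guess_gender("María", corpus))  # feminine
print(guess_gender("Juan", corpus))   # masculine
```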


2018, Vol 14 (3), pp. 37-48
Author(s): D. S. Botov, J. D. Klenin, I. E. Nikolaev

In this article we discuss an approach to information extraction (IE) using neural language models, and we provide a detailed overview of modern IE methods, both supervised and unsupervised. The proposed method achieves a high-quality solution to the problem of analyzing relevant labor-market requirements without the need for a time-consuming labeling procedure. In this experiment, professional standards act as a knowledge base for the labor domain. By comparing the descriptions of work actions and requirements from professional standards with the elements of job listings, we extract four entity types. The approach is based on the classification of vector representations of texts, generated using various neural language models: averaged word2vec, SIF-weighted averaged word2vec, TF-IDF-weighted averaged word2vec, and paragraph2vec. Experimentally, the best quality was achieved by the averaged word2vec (CBOW) model.
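As a rough illustration of the best-performing setup described (averaged CBOW word2vec vectors fed to a classifier), here is a short sketch using gensim and scikit-learn. The training texts and labels are invented; the paper matches job-listing fragments against professional-standard entries.

```python
# A brief sketch of the averaged-word2vec setup: average the CBOW vectors
# of a text's tokens and feed the result to a classifier. Data and label
# semantics below are illustrative only.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

texts = [["develop", "python", "services"], ["negotiate", "with", "clients"],
         ["maintain", "sql", "databases"], ["manage", "sales", "team"]]
labels = [0, 1, 0, 1]  # e.g. 0 = technical requirement, 1 = business requirement

w2v = Word2Vec(texts, vector_size=50, sg=0, min_count=1, seed=1)  # sg=0 -> CBOW

def avg_vec(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.vstack([avg_vec(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([avg_vec(["administer", "sql", "server"])]))
```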


2002, Vol 8 (2-3), pp. 209-233
Author(s): Olivier Ferret, Brigitte Grau

Topic analysis is important for many applications dealing with texts, such as text summarization or information extraction. However, it can be performed with high precision only if it relies on structured knowledge, which is difficult to produce on a large scale. In this paper, we propose using bootstrapping to solve this problem: a first topic analysis based on a weakly structured source of knowledge, a collocation network, is used to learn explicit topic representations that then support a more precise and reliable topic analysis.
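For concreteness, here is a small sketch of the weakly structured knowledge source the paper starts from: a collocation network, i.e. a graph of words weighted by windowed co-occurrence counts. The window size and data are illustrative, and the bootstrapping step that learns explicit topic representations is omitted.

```python
# A small sketch of a collocation network: edges between words weighted
# by how often they co-occur within a fixed window. Window size and the
# toy sentences are illustrative.
from collections import Counter

def collocation_network(sentences, window=4):
    edges = Counter()
    for sent in sentences:
        for i in range(len(sent)):
            for j in range(i + 1, min(i + window, len(sent))):
                edges[tuple(sorted((sent[i], sent[j])))] += 1
    return edges

sents = [["stock", "market", "prices", "fell"],
         ["market", "prices", "rose", "sharply"]]
net = collocation_network(sents)
print(net[("market", "prices")])  # co-occurrence weight: 2
```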


2021, Vol 15 (5), pp. 1-52
Author(s): Lorenzo De Stefani, Erisa Terolli, Eli Upfal

We introduce Tiered Sampling, a novel technique for estimating the count of sparse motifs in massive graphs whose edges are observed in a stream. Our technique requires only a single pass over the data and uses a memory of fixed size M, which can be orders of magnitude smaller than the number of edges. Our methods address the challenging task of counting sparse motifs—sub-graph patterns—that have a low probability of appearing in a sample of M edges, the maximum amount of data available to the algorithms at each step. To obtain an unbiased and low-variance estimate of the count, we partition the available memory into tiers (layers) of reservoir samples. While the base layer is a standard reservoir sample of edges, the other layers are reservoir samples of sub-structures of the desired motif. By storing the more frequent sub-structures of the motif, we increase the probability of detecting an occurrence of the sparse motif being counted, thus decreasing the variance and error of the estimate. While we focus on the design and analysis of algorithms for counting 4-cliques, we present a method for generalizing Tiered Sampling to obtain high-quality estimates of the number of occurrences of any sub-graph of interest, with the analysis effort reduced by exploiting specific properties of the pattern of interest. We present a complete theoretical analysis and an extensive experimental evaluation of our proposed method using both synthetic and real-world data. Our results demonstrate the advantage of our method in obtaining high-quality approximations of the number of 4- and 5-cliques in large graphs using a very limited amount of memory, significantly outperforming the single-edge-sample approach for counting sparse motifs in large-scale graphs.
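As a point of reference, here is a compact sketch of the base layer only: a standard reservoir sample of edges from a stream (Algorithm R). Tiered Sampling additionally maintains reservoirs of motif sub-structures (e.g., triangles when counting 4-cliques), which is omitted here.

```python
# A compact sketch of the base tier only: a uniform reservoir sample of
# M edges from a stream, in one pass. The higher tiers of the technique
# (reservoirs of motif sub-structures) are not shown.
import random

def reservoir_edge_sample(edge_stream, M, seed=0):
    rng = random.Random(seed)
    sample = []
    for t, edge in enumerate(edge_stream, start=1):
        if t <= M:
            sample.append(edge)        # fill the reservoir first
        else:
            k = rng.randrange(t)       # uniform in [0, t)
            if k < M:                  # keep each edge with prob M/t
                sample[k] = edge
    return sample

stream = [(i, i + 1) for i in range(10_000)]
print(len(reservoir_edge_sample(stream, M=100)))  # 100
```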


Geosciences, 2021, Vol 11 (4), pp. 174
Author(s): Marco Emanuele Discenza, Carlo Esposito, Goro Komatsu, Enrico Miccadei

The availability of high-quality surface data acquired by recent Mars missions and the development of increasingly accurate methods of analysis have made it possible to identify, describe, and analyze many geological and geomorphological processes previously unknown or unstudied on Mars. Among these, the slow, large-scale slope deformational phenomena generally known as Deep-Seated Gravitational Slope Deformations (DSGSDs) are of particular interest. Since the early 2000s, several studies have been conducted to identify and analyze Martian large-scale gravitational processes. Similar to what happens on Earth, these phenomena apparently occur in diverse morpho-structural conditions on Mars. Nevertheless, the difficulty of directly studying the geological, structural, and geomorphological characteristics of the planet makes the analysis of these phenomena particularly complex, leaving numerous questions to be answered. This paper reports a synthesis of all the known studies conducted on large-scale deformational processes on Mars to date, in order to provide a complete and exhaustive picture of the phenomena. After this synthesis of the literature, the specific characteristics of the phenomena are analyzed, and the main remaining open issues are described.


2020, Vol 8 (Suppl 3), pp. A62-A62
Author(s): Dattatreya Mellacheruvu, Rachel Pyke, Charles Abbott, Nick Phillips, Sejal Desai, ...

Background: Accurately identified neoantigens can be effective therapeutic agents in both adjuvant and neoadjuvant settings. A key challenge for neoantigen discovery has been the availability of accurate prediction models for MHC peptide presentation. We have shown previously that our proprietary model, based on (i) large-scale, in-house mono-allelic data, (ii) custom features that model antigen processing, and (iii) advanced machine learning algorithms, has strong performance. We have extended this work by systematically integrating large quantities of high-quality, publicly available data, implementing new modelling algorithms, and rigorously testing our models. These extensions lead to substantial improvements in performance and generalizability. Our algorithm, named Systematic HLA Epitope Ranking Pan Algorithm (SHERPA™), is integrated into the ImmunoID NeXT Platform®, our immuno-genomics and transcriptomics platform specifically designed to enable the development of immunotherapies.

Methods: In-house immunopeptidomic data were generated using stably transfected HLA-null K562 cell lines that each express a single HLA allele of interest, followed by immunoprecipitation with the W6/32 antibody and LC-MS/MS. Public immunopeptidomics data were downloaded from repositories such as MassIVE and processed uniformly with in-house pipelines to generate peptide lists filtered at a 1% false discovery rate. Other metrics (features) were either extracted from the source data or generated internally by re-processing samples on the ImmunoID NeXT Platform.

Results: We generated large-scale, high-quality immunopeptidomics data from approximately 60 mono-allelic cell lines that unambiguously assign peptides to their presenting alleles, and used these data to create our primary models. Briefly, our primary 'binding' model captures MHC-peptide binding using the peptide and binding pockets, while our primary 'presentation' model uses additional features to model antigen processing and presentation. Both primary models show significantly higher precision across all recall values on multiple test data sets, including mono-allelic cell lines and multi-allelic tissue samples. To further improve performance, we expanded the diversity of our training set with high-quality, publicly available mono-allelic immunopeptidomics data, and we integrated multi-allelic data by resolving peptide-to-allele mappings with our primary models. We then trained a new model on the expanded training data using a new composite machine learning architecture. The resulting secondary model further improves performance and generalizability across several tissue samples.

Conclusions: Improving technologies for neoantigen discovery is critical for many therapeutic applications, including personalized neoantigen vaccines and neoantigen-based biomarkers for immunotherapies. Our new and improved algorithm (SHERPA) has significantly higher performance than a state-of-the-art public algorithm and furthers this objective.
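To give a sense of the general framing only (nothing like the proprietary SHERPA model), MHC peptide presentation prediction can be cast as binary classification over peptide features. The sketch below one-hot encodes 9-mer peptides and fits a linear model; the peptides and labels are made up.

```python
# A heavily simplified sketch of presentation prediction as binary
# classification: one-hot encode each position of a 9-mer peptide and fit
# a linear model on presented vs. non-presented examples. All data here
# is invented; real models use far richer features and architectures.
import numpy as np
from sklearn.linear_model import LogisticRegression

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def encode(peptide):
    """One-hot encoding: 9 positions x 20 amino acids."""
    x = np.zeros(9 * len(AA))
    for i, aa in enumerate(peptide):
        x[i * len(AA) + AA.index(aa)] = 1.0
    return x

peptides = ["SIINFEKLY", "GILGFVFTL", "AAAAAAAAA", "KKKKKKKKK"]
presented = [1, 1, 0, 0]  # illustrative labels only
X = np.vstack([encode(p) for p in peptides])
clf = LogisticRegression().fit(X, presented)
print(clf.predict([encode("SIINFEKLV")]))
```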


Toxins, 2021, Vol 13 (6), pp. 420
Author(s): Yi Ma, Liu Cui, Meng Wang, Qiuli Sun, Kaisheng Liu, ...

Bacterial ghosts (BGs) are empty cell envelopes that retain native extracellular structures but lack a cytoplasm and genetic material. BGs are proposed to have significant prospects in biomedical research as vaccines or delivery carriers, but their applications are often limited by inefficient bacterial lysis and low yield. To address these problems, we compared the lysis efficiency of the wild-type protein E (EW) from phage ΦX174 with that of a screened mutant protein E (EM) in the Escherichia coli BL21(DE3) strain. The results show that lysis mediated by protein EM was more efficient. Introducing the pLysS plasmid allowed nearly 100% lysis efficiency at initial cell densities as high as OD600 = 2.0, exceeding what the commonly used BG preparation method achieves. Western blot and immunofluorescence analyses indicate that the expression level of protein EM was significantly higher than in the strain without the pLysS plasmid. High-quality BGs were observed by SEM and TEM. To verify the applicability of this method to other bacteria, the T7 RNA polymerase expression system was successfully constructed in Salmonella enterica (S. enterica, SE), and a pET vector carrying EM together with pLysS was introduced to obtain high-quality SE ghosts, which could provide efficient protection for humans and animals. This paper describes, for the first time, a novel and generally applicable method to produce high-quality BGs on a large scale.


2013, Vol 3 (1), pp. 77-99
Author(s): Aletta G. Dorst, W. Gudrun Reijnierse, Gemma Venhuizen

The manual annotation of large corpora is time-consuming and brings about issues of consistency. This paper aims to demonstrate how general rules for determining basic meanings can be formulated in large-scale projects involving multiple analysts applying MIP(VU) to authentic data. Three sets of problematic lexical units — chemical processes, colours, and sharp objects — are discussed in relation to the question of how the basic meaning of a lexical unit can be determined when human and non-human senses compete as candidates; these analyses can therefore be considered a detailed case study of problems encountered during step 3.b of MIP(VU). The analyses show how these problematic cases were tackled in a large corpus clean-up project in order to streamline the annotations and ensure greater consistency of the corpus. In addition, this paper points out how the formulation of general identification rules and guidelines could provide a first step towards the automatic detection of linguistic metaphors in natural discourse.
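As a toy illustration of what such a general rule might look like once made explicit, the sketch below encodes a single project-wide tie-breaking rule for competing human and non-human senses. The sense inventory and the preference order are invented for illustration; they are not the paper's actual rules.

```python
# A toy encoding of a project-wide basic-meaning rule: when a lexical
# unit has competing human and non-human senses, one uniform preference
# decides the basic meaning, instead of each analyst choosing ad hoc.
# The sense data and the "non-human first" preference are invented.
SENSES = {
    "bright": [
        {"gloss": "giving out much light", "domain": "non-human"},
        {"gloss": "intelligent", "domain": "human"},
    ],
}

def basic_meaning(lemma, prefer="non-human"):
    """Apply one uniform tie-breaking rule across all annotators."""
    senses = SENSES[lemma]
    for sense in senses:
        if sense["domain"] == prefer:
            return sense["gloss"]
    return senses[0]["gloss"]  # fall back to the first listed sense

print(basic_meaning("bright"))  # 'giving out much light'
```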

