Robustness beyond shallowness: incremental deep parsing

2002 · Vol 8 (2-3) · pp. 121-144
Author(s): S. Aït-Mokhtar, J.-P. Chanod, C. Roux

Robustness is a key issue for natural language processing in general and parsing in particular, and many approaches have been explored over the last decade for the design of robust parsing systems. Among them is shallow or partial parsing, which produces minimal and incomplete syntactic structures, often in an incremental way. We argue that with a systematic incremental methodology one can go beyond shallow parsing to deeper language analysis while preserving robustness. We describe a generic system based on such a methodology and designed for building robust analyzers that tackle deeper linguistic phenomena than those traditionally handled by the now widespread shallow parsers. The rule formalism allows the recognition of n-ary linguistic relations between words or constituents on the basis of global or local structural, topological and/or lexical conditions. It accepts various types of input, ranging from raw to chunked or constituent-marked texts; for instance, it can be used to process existing annotated corpora or to perform deeper analysis on the output of an existing shallow parser. It has been successfully used, in a modular way, to build a deep functional dependency parser and to perform co-reference resolution.
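
To make the incremental, layered idea concrete, here is a minimal sketch assuming a toy chunked input and two invented rule layers; the paper's actual rule formalism is a dedicated language, not Python, and these rules are purely illustrative.

```python
# Minimal sketch of incremental rule layers over chunked input: ordered layers
# add named relations without destroying decisions made by earlier layers.
from typing import Callable

Chunk = tuple[str, str]          # (label, text), e.g. ("NP", "the parser")
Relation = tuple[str, int, int]  # (name, head_index, dependent_index)

def subj_rule(chunks: list[Chunk]) -> list[Relation]:
    """Toy layer: an NP immediately before a VP is its SUBJ."""
    return [("SUBJ", i + 1, i)
            for i in range(len(chunks) - 1)
            if chunks[i][0] == "NP" and chunks[i + 1][0] == "VP"]

def obj_rule(chunks: list[Chunk]) -> list[Relation]:
    """Toy layer: an NP immediately after a VP is its OBJ."""
    return [("OBJ", i, i + 1)
            for i in range(len(chunks) - 1)
            if chunks[i][0] == "VP" and chunks[i + 1][0] == "NP"]

def parse(chunks: list[Chunk],
          layers: list[Callable[[list[Chunk]], list[Relation]]]) -> list[Relation]:
    relations: list[Relation] = []
    for layer in layers:          # incremental: layers apply in a fixed order
        relations.extend(layer(chunks))
    return relations

chunks = [("NP", "the parser"), ("VP", "builds"), ("NP", "dependencies")]
print(parse(chunks, [subj_rule, obj_rule]))
# [('SUBJ', 1, 0), ('OBJ', 1, 2)]
```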

Author(s): Tian-Shun Yao

Based on a word-centered theory of natural language processing, a word-based Chinese language understanding system has been developed. Drawing on psycholinguistic analysis and on the features of the Chinese language, the theory is presented together with a description of the computer programs that implement it. The heart of the system is a Total Information Dictionary and the World Knowledge Source it uses. The purpose of this research is to develop a system that can understand not only individual Chinese sentences but also whole texts.
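
As a rough illustration of such a dictionary-centered design, the sketch below bundles syntactic, semantic, and world-knowledge information into a single lexical entry; all field names are hypothetical, since the abstract does not specify the dictionary's actual schema.

```python
# Hypothetical shape of a "total information" lexical entry: one record joins
# grammar, meaning, and hooks into a world-knowledge source.
from dataclasses import dataclass, field

@dataclass
class DictEntry:
    word: str
    pos: str                                            # part of speech
    senses: list[str]                                   # word senses
    valency: list[str] = field(default_factory=list)    # argument roles
    world_links: list[str] = field(default_factory=list)  # knowledge-source hooks

lexicon = {
    "吃": DictEntry("吃", "V", ["eat"], valency=["agent", "patient"],
                    world_links=["event:ingestion"]),
}
print(lexicon["吃"])
```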


Author(s): S. S. Vasiliev, D. M. Korobkin, S. A. Fomenkov

To provide information support for the synthesis of new technical solutions, a method of extracting structured data from an array of Russian-language patents is presented. The key features of an invention, such as the structural elements of the technical object and the relations between them, serve as this information support. The data source is the main claim of each device patent. The unit of extraction is the semantic structure Subject-Action-Object (SAO), which semantically describes the structural elements. The extraction method is based on shallow parsing and claim segmentation that take the specifics of patent writing into account: the excessive length of claim sentences and the peculiarities of patent language often make off-the-shelf data-extraction tools ineffective. Processing comprises four stages: segmentation of the claim sentences; extraction of primary SAO structures; construction of a graph of the structural elements of the invention; and integration of the data into the domain ontology. This article deals with the first two stages. Segmentation is carried out according to a number of heuristic rules, and several natural language processing tools are combined to reduce analysis errors. The primary SAO elements are extracted using the valences of a predefined semantic group of verbs, as well as information about the type of the processed segment. The result of the work is a domain ontology that can be used to find alternative designs for the nodes of a technical object. The second part of the article covers the algorithm for constructing the graph of structural elements of an individual technical object, the organization of the ontology, and an assessment of the system's effectiveness.
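
A minimal sketch of the first two stages, with English stand-ins for the Russian claim text; the segmentation patterns and the verb group below are illustrative assumptions, not the authors' actual heuristics.

```python
# Toy claim segmentation plus SAO extraction: split at clause boundaries
# typical of patent claims, then read off Subject-Action-Object triples
# around verbs from an assumed semantic group.
import re

RELATION_VERBS = {"comprises", "contains", "connected", "mounted"}  # assumed group

def segment_claim(claim: str) -> list[str]:
    # Heuristic: claims chain clauses with ';' and ', wherein'
    parts = re.split(r";|,\s*wherein", claim)
    return [p.strip() for p in parts if p.strip()]

def extract_sao(segment: str) -> list[tuple[str, str, str]]:
    triples = []
    tokens = segment.split()
    for i, tok in enumerate(tokens):
        if tok.lower().strip(",.") in RELATION_VERBS and 0 < i < len(tokens) - 1:
            triples.append((" ".join(tokens[:i]), tok, " ".join(tokens[i + 1:])))
    return triples

claim = ("A device comprises a housing; the housing contains a rotor, "
         "wherein the rotor connected to a shaft")
for seg in segment_claim(claim):
    print(extract_sao(seg))
```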


2020 · Vol 46 (2) · pp. 1059-1083
Author(s): Irena Srdanović

This paper presents two approaches used to create specialized web corpora of Croatian tourism in Japanese for use in building a specialized learners' dictionary. Both approaches use WebBootCat technology (Baroni et al. 2006, Kilgarriff et al. 2014) to create specialized web corpora automatically. The first approach builds the corpora from selected seed words most relevant to the topic. The second approach specifies a number of web pages covering tourism-oriented information on particular regions, cities, and sites in Croatia available in Japanese, which are then used for corpus creation inside the Sketch Engine platform. Both approaches yield specialized web corpora that are small in size but quite useful for lexical profiling in the specific field of tourism. In the process of dictionary creation, the second approach proved especially useful for the selection of lexical items, while both approaches proved highly useful for the exploration and selection of authentic examples from the corpora. The research exposes some shortcomings in Japanese language processing, such as errors in the lemmatization of some culturally specific terms, and indicates the need to refine existing Japanese language processing tools. The Japanese-Croatian bilingual learners' dictionary (Srdanović 2018) is currently in the pilot phase and is being used and built by learners and teachers through the open-source dictionary platform Lexonomy (Mechura 2017). Work on the bilingual dictionary is useful both for training students in language analysis and description using modern technologies (e.g. corpora, corpus query systems, a dictionary editing platform) and for educating new personnel capable of working in tourism using the Japanese language, for which there is strong demand. In the future, the same approach could be used to create specialized corpora and dictionaries for Japanese paired with other languages.
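
The seed-word approach follows the general BootCaT procedure: random tuples of seed terms become web-search queries, and the retrieved pages form the corpus. A minimal sketch under that assumption, with the search, fetch, and cleaning steps left as injected placeholders (the paper itself uses WebBootCat inside Sketch Engine, and these Japanese seed words are merely plausible examples):

```python
# BootCaT-style bootstrapping: sample seed-word tuples as queries, collect
# pages, clean them into corpus documents.
import itertools
import random

seeds = ["観光", "クロアチア", "世界遺産", "アドリア海", "ドゥブロヴニク"]  # illustrative tourism seeds

def seed_tuples(seeds: list[str], k: int = 3, n: int = 5) -> list[tuple[str, ...]]:
    """Sample n random k-word combinations to use as web-search queries."""
    combos = list(itertools.combinations(seeds, k))
    return random.sample(combos, min(n, len(combos)))

def build_corpus(seeds, search, fetch, clean) -> list[str]:
    """search/fetch/clean are injected: any engine, HTTP client, and
    boilerplate stripper can be plugged in."""
    docs = []
    for query in seed_tuples(seeds):
        for url in search(" ".join(query)):
            docs.append(clean(fetch(url)))
    return docs
```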


2020
Author(s): Joshua Conrad Jackson, Joseph Watts, Johann-Mattis List, Ryan Drabble, Kristen Lindquist

Humans have been using language for thousands of years, but psychologists seldom consider what natural language can tell us about the mind. Here we propose that language offers a unique window into human cognition. After briefly summarizing the legacy of language analysis in psychological science, we show how methodological advances have made these analyses more feasible and insightful than ever before. In particular, we describe how two forms of language analysis, comparative linguistics and natural language processing, are already contributing to our understanding of emotion, creativity, and religion, and are overcoming methodological obstacles related to statistical power and culturally diverse samples. We summarize resources for learning both of these methods and highlight the best ways to combine language analysis techniques with behavioral paradigms. Applying language analysis to large-scale and cross-cultural datasets promises major breakthroughs in psychological science.


Author(s): Yan Huang, Akira Murakami, Theodora Alexopoulou, Anna Korhonen

As large-scale learner corpora become increasingly available, it is vital that natural language processing (NLP) technology be developed to provide the rich linguistic annotations necessary for second language (L2) research. We present a system for automatically analyzing subcategorization frames (SCFs) in learner English. SCFs link lexis with morphosyntax, shedding light on the interplay between lexical and structural information in learner language. SCFs are also crucial to the study of a wide range of phenomena, including individual verbs, verb classes, and varying syntactic structures. To illustrate the usefulness of our system for learner corpus research and second language acquisition (SLA), we investigate how L2 learners diversify their use of SCFs in text and how this diversity changes with L2 proficiency.
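
A naive approximation of SCF extraction can be sketched with an off-the-shelf dependency parser: each verb's frame is read off the dependency labels of its complements, and diversity is the number of distinct frames per verb. This is only an illustration, not the authors' system; the complement-label set is an assumption, and it presumes spaCy with the en_core_web_sm model installed.

```python
# Approximate an SCF as the sorted complement labels attached to a verb, then
# count distinct frames per verb lemma as a crude diversity measure.
from collections import defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")
COMPLEMENTS = {"nsubj", "dobj", "dative", "ccomp", "xcomp", "prep", "prt"}  # assumed set

def scf(verb_token) -> tuple[str, ...]:
    return tuple(sorted(c.dep_ for c in verb_token.children
                        if c.dep_ in COMPLEMENTS))

def frame_inventory(text: str) -> dict[str, int]:
    frames = defaultdict(set)
    for token in nlp(text):
        if token.pos_ == "VERB":
            frames[token.lemma_].add(scf(token))
    return {verb: len(fs) for verb, fs in frames.items()}  # distinct frames per verb

print(frame_inventory("She gave him a book. She gave generously. He read it."))
```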


2015 · Vol 3 · pp. 359-373
Author(s): Wolfgang Seeker, Özlem Çetinoğlu

Space-delimited words in Turkish and Hebrew text can be further segmented into meaningful units, but syntactic and semantic context is necessary to predict the segmentation. At the same time, predicting correct syntactic structures relies on correct segmentation. We present a graph-based lattice dependency parser that operates on morphological lattices representing the different segmentations and morphological analyses of a given input sentence. The lattice parser predicts a dependency tree over a path in the lattice and thus solves the joint task of segmentation, morphological analysis, and syntactic parsing. We conduct experiments on the Turkish and Hebrew treebanks and show that the joint model outperforms three state-of-the-art pipeline systems on both data sets. Our work corroborates findings from constituency lattice parsing for Hebrew and presents the first results for full lattice parsing of Turkish.
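
A minimal sketch of the lattice representation, using a classic Turkish ambiguity ("evde" = "in the house" vs. "ev de" = "the house too"); the toy analyses are illustrative, and the sketch only enumerates segmentation paths, whereas the paper's parser jointly scores a dependency tree over each path.

```python
# A morphological lattice as a DAG: arcs between character positions carry
# candidate segments with analyses; every full path is one segmentation
# hypothesis for the joint model to score.
from collections import defaultdict

# Toy lattice: arcs are (start, end, surface, analysis).
arcs = [(0, 4, "evde", "ev+Noun+Loc"),
        (0, 2, "ev", "ev+Noun"),
        (2, 4, "de", "de+Conj")]

def paths(arcs, start, goal):
    out = defaultdict(list)
    for a in arcs:
        out[a[0]].append(a)
    def walk(pos, acc):
        if pos == goal:
            yield acc
        for a in out[pos]:
            yield from walk(a[1], acc + [a])
    return list(walk(start, []))

for p in paths(arcs, 0, 4):
    print([f"{surface}/{analysis}" for _, _, surface, analysis in p])
# ['evde/ev+Noun+Loc']
# ['ev/ev+Noun', 'de/de+Conj']
```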


2017 · Vol 6 (2) · pp. 26
Author(s): Hulin Ren

The connectionist approach to language processing has become popular in second language (L2) research in recent years. This paper takes a connectionist perspective on Chinese learners' individual differences in the comprehension of certain ambiguous English sentences. Comprehension-accuracy and grammaticality-judgment tasks were administered to three groups with different backgrounds of language experience, namely well-experienced native English speakers (group 1), well-experienced non-native English learners (group 2), and semi-experienced non-native English learners (group 3), using four types of ambiguous English sentences such as "The polite actor thanked the old man who carried the black umbrella." The results of the study are discussed, and a number of conclusions are drawn regarding L2 learners' differences in comprehending ambiguous syntactic structures.


Entropy · 2020 · Vol 22 (4) · pp. 446
Author(s): Yair Lakretz, Stanislas Dehaene, Jean-Rémi King

Sentence comprehension requires inferring, from a sequence of words, the structure of the syntactic relationships that bind these words into a semantic representation. Our limited ability to build some specific syntactic structures, such as nested center-embedded clauses (e.g., “The dog that the cat that the mouse bit chased ran away”), suggests a striking capacity limit in sentence processing and thus offers a window onto how the human brain processes sentences. Here, we review the main hypotheses proposed in psycholinguistics to explain this capacity limit. We then introduce an alternative approach, derived from our recent work on artificial neural networks optimized for language modeling, and predict that the capacity limit derives from the emergence of sparse and feature-specific syntactic units. Unlike psycholinguistic theories, our neural network-based framework provides precise capacity-limit predictions without making any a priori assumptions about the form of the grammar or parser. Finally, we discuss how our framework may clarify the mechanistic underpinnings of language processing and its limitations in the human brain.
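
Stimuli of the kind discussed above are easy to generate programmatically; the sketch below produces nested center-embedded sentences of increasing depth (depth 2 reproduces the example sentence), as one might when probing a language model's capacity limit. The word lists are illustrative.

```python
# Generate center-embedded sentences: N1 that N2 that ... Nk Vk ... V2 V1,
# where verbs close the relative clauses innermost-first.
NOUNS = ["dog", "cat", "mouse", "bird"]
VERBS = ["ran away", "chased", "bit", "saw"]  # VERBS[i] pairs with NOUNS[i]

def center_embedded(depth: int) -> str:
    nouns = NOUNS[:depth + 1]
    verbs = VERBS[:depth + 1]
    np_chain = " that the ".join(nouns)   # dog that the cat that the mouse
    vp_chain = " ".join(reversed(verbs))  # bit chased ran away
    return f"The {np_chain} {vp_chain}."

for d in range(3):
    print(center_embedded(d))
# The dog ran away.
# The dog that the cat chased ran away.
# The dog that the cat that the mouse bit chased ran away.
```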


Author(s): Lauri Karttunen

The article introduces the basic concepts of finite-state language processing: regular languages and relations, finite-state automata, and regular expressions. Many basic steps in language processing, ranging from tokenization to phonological and morphological analysis, disambiguation, spelling correction, and shallow parsing, can be performed efficiently by means of finite-state transducers. The article discusses examples of finite-state languages and relations. Finite-state networks can represent only a subset of all possible languages and relations; that is, only some languages are finite-state languages. The article also introduces two types of complex regular expressions with many linguistic applications: restriction and replacement. Finally, it discusses the properties of finite-state automata. Three important properties of networks are that they are epsilon-free, deterministic, and minimal. If a network encoding a regular language is epsilon-free, deterministic, and minimal, it is guaranteed to be the best encoding for that language.
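
As a concrete instance of those three properties, the sketch below hard-codes an epsilon-free, deterministic, and minimal automaton for a classic regular language: strings over {a, b} containing an even number of a's. Two states suffice, and determinism means each (state, symbol) pair has exactly one successor.

```python
# Minimal DFA for "even number of a's": epsilon-free (no empty transitions),
# deterministic (a total transition function), and minimal (no smaller
# deterministic machine recognizes this language).
DELTA = {("even", "a"): "odd", ("even", "b"): "even",
         ("odd", "a"): "even", ("odd", "b"): "odd"}
START, ACCEPT = "even", {"even"}

def accepts(s: str) -> bool:
    state = START
    for ch in s:
        state = DELTA[(state, ch)]  # exactly one next state per (state, symbol)
    return state in ACCEPT

print(accepts("abba"), accepts("ab"))  # True False
```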

