syntactic parsing
Recently Published Documents


TOTAL DOCUMENTS

146
(FIVE YEARS 29)

H-INDEX

16
(FIVE YEARS 2)

2021 ◽  
Vol 14 (3) ◽  
pp. 252-268
Author(s):  
Vanessa Joosen

Children's literature studies has been relatively slow to adopt techniques from the digital humanities. This article explains a method for digitising, annotating, and analysing texts in XML to investigate the implicit age norms that children's books convey. The case studies are seventeen books by Bart Moeyaert and La Belle Sauvage by Philip Pullman. Digital analyses of speech distribution, topic modelling, syntactic parsing, and lexis add information about implicit age norms that can support and inspire close-reading narrative analysis.
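One way the XML annotation described above can feed quantitative analysis is by counting direct speech per age group. The sketch below is a minimal illustration, assuming a hypothetical `<said who="…" age="…">` markup loosely modelled on TEI dialogue encoding; the study's actual annotation scheme may differ.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical schema: <said who="..." age="child|adult">...</said>.
# This is an invented example, not the markup used in the study.
SAMPLE = """
<text>
  <said who="Bianca" age="child">Where are we going?</said>
  <said who="Mother" age="adult">Home, before dark.</said>
  <said who="Bianca" age="child">I want to stay.</said>
</text>
"""

def speech_distribution(xml_string):
    """Count words of direct speech per age category."""
    root = ET.fromstring(xml_string)
    words = Counter()
    for said in root.iter("said"):
        words[said.get("age")] += len(said.text.split())
    return dict(words)

print(speech_distribution(SAMPLE))  # {'child': 8, 'adult': 3}
```

Aggregated over a full corpus, such counts show which age groups are given a voice and how much, before any close reading begins.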


2021 ◽  
Author(s):  
Yida Xin ◽  
Henry Lieberman ◽  
Peter Chin

Syntactic parsing technologies have become significantly more robust thanks to advancements in their underlying statistical and Deep Neural Network (DNN) techniques: most modern syntactic parsers can produce a syntactic parse tree for almost any sentence, including ones that may not be strictly grammatical. Despite improved robustness, such parsers still do not reflect the alternative analyses inherent in syntactic ambiguities. The two most notable such ambiguities are prepositional phrase (PP) attachment ambiguities and pronoun coreference ambiguities. In this paper, we discuss PatchComm, which uses commonsense knowledge to help resolve both kinds of ambiguities. To the best of our knowledge, we are the first to propose the general-purpose approach of using external commonsense knowledge bases to guide syntactic parsers. We evaluated PatchComm against the state-of-the-art (SOTA) spaCy parser on a PP attachment task and against the SOTA NeuralCoref module on a coreference task. Results show that PatchComm is successful at detecting syntactic ambiguities and using commonsense knowledge to help resolve them.
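The core idea, choosing a PP attachment site by asking a commonsense resource which pairing is more plausible, can be sketched in a few lines. The knowledge base and scores below are invented for illustration; PatchComm itself consults external commonsense resources and operates on real spaCy parse trees rather than a toy lookup.

```python
# Toy illustration: "saw the man with a telescope" vs "saw the man with
# a hat". The PP can attach to the verb or to the object noun; a
# plausibility score for each (head, PP-noun) pair decides.
# All scores here are made up for the example.
COMMONSENSE = {
    ("see", "telescope"): 0.9,   # telescopes are instruments for seeing
    ("man", "telescope"): 0.3,
    ("see", "hat"): 0.1,
    ("man", "hat"): 0.8,         # hats are things people wear
}

def attach_pp(verb, obj_noun, pp_noun):
    """Return the preferred attachment site ('verb' or 'noun') for the PP."""
    verb_score = COMMONSENSE.get((verb, pp_noun), 0.5)
    noun_score = COMMONSENSE.get((obj_noun, pp_noun), 0.5)
    return "verb" if verb_score >= noun_score else "noun"

print(attach_pp("see", "man", "telescope"))  # verb: "saw ... with a telescope"
print(attach_pp("see", "man", "hat"))        # noun: "the man with a hat"
```

The point is that the decision is external to the parser: the parser proposes the ambiguity, and the knowledge base arbitrates.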


2021 ◽  
Vol 1 (1) ◽  
pp. 01-07
Author(s):  
Karisma Erikson Tarigan ◽  
Margaret Stevani

This study characterised complex predicates and multiple events, in which a multi-verb single clause realises a single event in syntax, and examined complex sentences containing multiple verbal predicates. It used a descriptive qualitative method. The data were sentences containing Karo clauses, classified according to the elements of complex predicates in tree diagrams and the Role and Reference Grammar (RRG) account of nexus-juncture relations (Nolan, 2005; Van Valin, 2005). The findings showed that event, argument, and semantic structure can be realised in syntax to reveal complex predicates. The tightest syntactic linkages embody the closest semantic relations, and this is signalled by word order. Most complex predicates in the Karo language have an embedded object. The nucleus may contain not just one core but two or more complex predicates, followed by an argument, in the form V+V+N. A single argument (participant/actor) may involve one core: there may be one participant in two events, and two participants in one event. All Karo sentences have at least one NP and one VP, and they consist of more than one complex predicate.


2021 ◽  
Vol 15 ◽  
Author(s):  
Laura Jiménez-Ortega ◽  
Esperanza Badaya ◽  
Pilar Casado ◽  
Sabela Fondevila ◽  
David Hernández-Gutiérrez ◽  
...  

Syntactic processing has often been considered a paradigmatic example of unconscious automatic processing. Along these lines, it has been demonstrated that masked words containing syntactic anomalies are processed by the brain, triggering event-related potential (ERP) components similar to those triggered by consciously perceived syntactic anomalies, thus supporting the automatic nature of syntactic processing. Conversely, recent evidence indicates that, regardless of the level of awareness, emotional information and other relevant extralinguistic information modulate conscious syntactic processing too. These results are in line with suggestions that, under certain circumstances, syntactic processing can also be flexible and context-dependent. However, studies of this concomitantly automatic yet flexible conception of syntactic parsing are scarce. Hence, we examined whether and how masked emotional words (positive, negative, and neutral masked adjectives), containing morphosyntactic anomalies in half of the cases, affect comprehension of an ongoing unmasked sentence that can itself contain a number-agreement anomaly between noun and verb. ERP components were observed in response to emotional information (EPN), masked anomalies (LAN and a weak P600), and unmasked anomalies (LAN/N400 and P600). Furthermore, interactions were detected between the processing of conscious and unconscious morphosyntactic anomalies, and between unconscious emotional information and conscious anomalies. The findings support, on the one hand, the automatic nature of syntax, given that the syntactic components LAN and P600 were observed in response to unconscious anomalies. On the other hand, they also support the flexible, permeable, and context-dependent nature of syntactic processing, since unconscious information modulated conscious syntactic components.
This dual nature of syntactic processing is in line with theories of automaticity suggesting that even unconscious/automatic syntactic processing is flexible, adaptable, and context-dependent.


Author(s):  
Shumin Shi ◽  
Dan Luo ◽  
Xing Wu ◽  
Congjun Long ◽  
Heyan Huang

Dependency parsing is an important task in Natural Language Processing (NLP). However, a mature parser requires a large treebank for training, which is still extremely costly to create. Tibetan is an extremely low-resource language for NLP: no Tibetan dependency treebank is available, and such resources are currently obtained by manual annotation. Furthermore, there is little research on treebank construction. We propose a novel method of multi-level chunk-based syntactic parsing to perform constituent-to-dependency treebank conversion for Tibetan under these scarce conditions. Our method mines more dependencies from Tibetan sentences, builds a high-quality Tibetan dependency tree corpus, and makes fuller use of the inherent regularities of the language itself. We train dependency parsing models on the treebank obtained by this preliminary conversion. The model achieves 86.5% accuracy, 96% LAS, and 97.85% UAS, exceeding the best results of existing conversion methods. The experimental results show that our method is viable in a low-resource setting: it not only addresses the scarcity of Tibetan dependency treebanks but also avoids needless manual annotation. The method exemplifies strongly knowledge-guided linguistic analysis, which is of great significance for advancing Tibetan information processing research.
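The classical core of constituent-to-dependency conversion is head finding: rules pick the head child of each phrase, and every non-head child's lexical head is attached to the phrase head. The sketch below shows that idea on a verb-final toy sentence; the labels and head rules are invented placeholders, not the actual chunk-level rules the authors use for Tibetan.

```python
# Minimal head-percolation conversion from a constituency tree to
# unlabelled dependencies. Trees are (label, children) with leaves
# (label, (word, index)). Rules and example are illustrative only.
HEAD_RULES = {"S": "VP", "VP": "V", "NP": "N"}  # which child heads each phrase

def head_of(tree):
    """Return the lexical head (word, index) of a subtree."""
    label, children = tree
    if isinstance(children, tuple):         # leaf node
        return children
    wanted = HEAD_RULES.get(label)
    for child in children:
        if child[0] == wanted:
            return head_of(child)
    return head_of(children[-1])            # fallback: rightmost child

def to_dependencies(tree, deps=None):
    """Attach each non-head child's head word to its phrase's head word."""
    if deps is None:
        deps = []
    label, children = tree
    if isinstance(children, tuple):
        return deps
    phrase_head = head_of(tree)
    for child in children:
        if head_of(child) != phrase_head:
            deps.append((head_of(child)[0], phrase_head[0]))  # (dependent, head)
        to_dependencies(child, deps=deps)
    return deps

# Verb-final clause, schematically like Tibetan word order:
tree = ("S", [("NP", [("N", ("dog", 0))]),
              ("VP", [("NP", [("N", ("bone", 1))]), ("V", ("ate", 2))])])
print(to_dependencies(tree))  # [('dog', 'ate'), ('bone', 'ate')]
```

The paper's multi-level chunk-based method refines this basic scheme with chunk-level analysis; the sketch only shows the underlying conversion principle.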


2021 ◽  
Vol 6 (1) ◽  
pp. 10
Author(s):  
Hardian Zudianto ◽  
Ashadi Ashadi

Theories and practices in second language reading pedagogy often overlook the description of sentence processing offered by psycholinguistics. Second language reading comprehension is readily associated with vocabulary learning or discourse strategy, yet such activities can lead to unnatural ways of reading, such as translating individual words or hunting for required information. Authentic reading should instead encourage a natural stream of ideas to be interpreted from sentence to sentence. As the psycholinguistic notion of sentence processing suggests, syntax appears to be the key to effective and authentic reading, as opposed to the general belief that semantic or discourse information is the primary concern. This article argues that understanding the architecture of sentence processing, with syntactic parsing at the core of the underlying mechanism, can offer insights into second language reading pedagogy. The concepts of syntactic parsing, reanalysis, and sentence processing models are described to illustrate how sentence processing works. Additionally, a critical review of the differences between L1 and L2 sentence processing is presented, considering the recent debate on individual differences as significant indicators of nativelike L2 sentence processing. Lastly, implications for L2 reading pedagogy and potential implementation in instructional settings are discussed.
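The parsing-and-reanalysis cycle mentioned above can be made concrete with the classic garden-path sentence "the horse raced past the barn fell": readers first commit to the main-verb reading of "raced" and must reanalyse it as a reduced relative when "fell" arrives. The sketch below is purely schematic, a toy commit-and-backtrack routine invented for this example, not any of the sentence processing models the article reviews.

```python
# Schematic garden-path reanalysis: commit to the preferred reading of
# an ambiguous word, then revise it when later input contradicts it.
PREFERRED = {"raced": "main-verb"}          # default first-pass commitment
REANALYSED = {"raced": "reduced-relative"}  # reading adopted after repair

def incremental_parse(words):
    """Return the final analysis of each ambiguous word, plus a trace."""
    analyses, trace = {}, []
    for w in words:
        if w in PREFERRED:
            analyses[w] = PREFERRED[w]
            trace.append(f"commit: '{w}' as {PREFERRED[w]}")
        elif w == "fell" and "raced" in analyses:
            # a second finite verb rules out the main-verb reading
            analyses["raced"] = REANALYSED["raced"]
            trace.append("reanalyse: 'raced' as reduced-relative")
    return analyses, trace

analyses, trace = incremental_parse("the horse raced past the barn fell".split())
print(analyses)  # {'raced': 'reduced-relative'}
```

For pedagogy, the relevant point is the cost of the backtracking step: L2 readers who parse syntactically, rather than word by word, experience and recover from such reanalyses the way L1 readers do.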


2021 ◽  
Vol 5 (1) ◽  
pp. 7
Author(s):  
J. Gerard Wolff

This paper describes how pattern recognition and scene analysis may with advantage be viewed from the perspective of the SP system (meaning the SP theory of intelligence and its realisation in the SP computer model (SPCM), both described in an appendix), and the strengths and potential of the system in those areas. In keeping with evidence for the importance of information compression (IC) in human learning, perception, and cognition, IC is central to the structure and workings of the SPCM. Most of that IC is achieved via the powerful concept of SP-multiple-alignment, which is largely responsible for the AI-related versatility of the system. With examples from the SPCM, the paper describes: how syntactic parsing and pattern recognition may be achieved, with corresponding potential for visual parsing and scene analysis; how those processes are robust in the face of errors in input data; how, in keeping with what people do, the SP system can “see” things in its data that are not objectively present; how the system can recognise things at multiple levels of abstraction, via part-whole hierarchies, and via an integration of the two; and how the system has potential for creating a 3D construct from pictures of a 3D object taken from different viewpoints, and for recognising 3D entities.


Author(s):  
Alexander Gelbukh ◽  
José A. Martínez F. ◽  
Andres Verastegui ◽  
Alberto Ochoa

In this chapter, an exhaustive parser is presented. The parser was developed for use in a natural language interface to databases (NLIDB) project. The chapter includes a brief description of state-of-the-art NLIDBs, covering the methods used and the performance of some interfaces, and explains some of the general problems faced by natural language interfaces to databases. Because the exhaustive parser was developed with the aim of improving the overall performance of the interface, the interface itself is also briefly described. The chapter also presents the drawbacks discovered during experimental tests of the parser, which show that it is unsuitable for improving NLIDB performance.
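An exhaustive parser, in contrast to a deterministic one, enumerates every parse tree the grammar licenses, which makes the cost of ambiguity explicit. The sketch below is a minimal illustration with a toy CNF grammar invented for the example; the actual NLIDB parser described in the chapter is far richer.

```python
# Exhaustive CYK-style enumeration of all parse trees for a sentence.
# The grammar below is a toy written for this example: the classic PP
# attachment ambiguity yields two parses for one sentence.
GRAMMAR = {  # CNF rules: parent -> (left, right) pairs or (terminal,) rules
    "S":  [("NP", "VP")],
    "VP": [("V", "NP"), ("VP", "PP")],
    "NP": [("NP", "PP"), ("stars",), ("telescopes",)],
    "PP": [("P", "NP")],
    "V":  [("saw",)],
    "P":  [("with",)],
}

def parses(symbol, words):
    """Yield every tree deriving `words` from `symbol`."""
    for rhs in GRAMMAR.get(symbol, []):
        if len(words) == 1 and rhs == (words[0],):
            yield (symbol, words[0])
        elif len(rhs) == 2:
            left, right = rhs
            for split in range(1, len(words)):
                for lt in parses(left, words[:split]):
                    for rt in parses(right, words[split:]):
                        yield (symbol, lt, rt)

trees = list(parses("S", "stars saw stars with telescopes".split()))
print(len(trees))  # 2: the PP attaches to the VP or to the object NP
```

The parse count grows rapidly with sentence length (Catalan-number growth in the worst case), which hints at why an exhaustive strategy can fail to improve, and may even hurt, overall NLIDB performance.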

