Verb similarity: Comparing corpus and psycholinguistic data

2018 · Vol 14 (2) · pp. 275-307
Author(s): Lara Gil-Vallejo, Marta Coll-Florit, Irene Castellón, Jordi Turmo

Abstract: Similarity, which plays a key role in fields like cognitive science, psycholinguistics and natural language processing, is a broad and multifaceted concept. In this work we analyse how two approaches that belong to different perspectives, the corpus view and the psycholinguistic view, articulate similarity between verb senses in Spanish. Specifically, we compare the similarity between verb senses based on their argument structure, which is captured through semantic roles, with their similarity defined by word associations. We address the question of whether verb argument structure, which reflects the expression of events, and word associations, which are related to the speakers’ organization of the mental lexicon, shape similarity between verbs in a congruent manner, a topic that has not been explored previously. While we find significant correlations between the verb sense similarities obtained from these two approaches, our findings also highlight some discrepancies between them, as well as the importance of the degree of abstraction in the corpus annotation and the psycholinguistic representations.
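A minimal sketch of this kind of comparison (my own illustration, not the authors' code; the verb senses, role profiles, and association scores below are invented) could represent each verb sense as a vector of semantic-role frequencies, compute cosine similarities between those vectors, and correlate them with association-based similarities using Spearman's rank correlation:

```python
# Illustrative sketch only: compare role-based and association-based verb similarity.
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics.pairwise import cosine_similarity

# Invented semantic-role frequency profiles (Agent, Patient, Instrument)
# for four hypothetical Spanish verb senses.
role_profiles = {
    "abrir_1":  [40, 35, 5],
    "cortar_1": [38, 30, 12],
    "pensar_1": [45, 2, 0],
    "creer_1":  [44, 3, 0],
}

# Invented pairwise similarities derived from word-association norms.
assoc_sim = {
    ("abrir_1", "cortar_1"): 0.55,
    ("abrir_1", "pensar_1"): 0.10,
    ("abrir_1", "creer_1"): 0.08,
    ("cortar_1", "pensar_1"): 0.12,
    ("cortar_1", "creer_1"): 0.09,
    ("pensar_1", "creer_1"): 0.70,
}

verbs = list(role_profiles)
corpus_matrix = cosine_similarity(np.array([role_profiles[v] for v in verbs]))

corpus_scores, assoc_scores = [], []
for (i, v1), (j, v2) in combinations(enumerate(verbs), 2):
    corpus_scores.append(corpus_matrix[i, j])
    assoc_scores.append(assoc_sim[(v1, v2)])

rho, p = spearmanr(corpus_scores, assoc_scores)
print(f"Spearman correlation between the two views: {rho:.2f} (p = {p:.3f})")
```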

2015 · Vol 48 · pp. 70-89
Author(s): Alba Luzondo-Oyón, Francisco J. Ruiz de Mendoza-Ibáñez

Author(s): Toyoaki Nishida

People are proficient at collaboratively forming and maintaining gatherings, thereby shaping and cultivating collective thought through fluent conversational interactions. A major challenge is to develop technology for augmenting the conversational environment so that people can conduct even better conversational interactions for collective intelligence and creation. Conversational informatics is a field of research that focuses on investigating conversational interactions and designing intelligent artifacts that can augment them. The field draws on foundations in artificial intelligence, natural language processing, speech and image processing, cognitive science, and conversation analysis. In this article, the author gives an overview of a methodology for developing augmented conversational environments and of the major achievements so far. The author also discusses the issues involved in making agents empathic so that they can induce sustained and constructive engagement with people.


2016 · Vol 39
Author(s): Carlos Gómez-Rodríguez

Abstract: Researchers, motivated by the need to improve the efficiency of natural language processing tools to handle web-scale data, have recently arrived at models that remarkably match the expected features of human language processing under the Now-or-Never bottleneck framework. This provides additional support for said framework and highlights the research potential in the interaction between applied computational linguistics and cognitive science.


2021 · Vol 9 · pp. 226-242
Author(s): Zhaofeng Wu, Hao Peng, Noah A. Smith

Abstract: For natural language processing systems, two kinds of evidence support the use of text representations from neural language models “pretrained” on large unannotated corpora: performance on application-inspired benchmarks (Peters et al., 2018, inter alia), and the emergence of syntactic abstractions in those representations (Tenney et al., 2019, inter alia). On the other hand, the lack of grounded supervision calls into question how well these representations can ever capture meaning (Bender and Koller, 2020). We apply novel probes to recent language models—specifically focusing on predicate-argument structure as operationalized by semantic dependencies (Ivanova et al., 2012)—and find that, unlike syntax, semantics is not brought to the surface by today’s pretrained models. We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning, yielding benefits to natural language understanding (NLU) tasks in the GLUE benchmark. This approach demonstrates the potential for general-purpose (rather than task-specific) linguistic supervision, above and beyond conventional pretraining and finetuning. Several diagnostics help to localize the benefits of our approach.
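As a rough illustration of the general idea of mixing pretrained token representations along semantic dependency edges with a graph convolution (a minimal numpy sketch with invented data; the paper's actual encoders, parses, and finetuning setup are not reproduced here):

```python
# Illustrative sketch only: one graph-convolution step over a semantic dependency graph.
import numpy as np

rng = np.random.default_rng(0)

# Toy contextual embeddings for "the cat chased the mouse"
# (in practice these would come from a pretrained language model).
tokens = ["the", "cat", "chased", "the", "mouse"]
hidden = rng.normal(size=(len(tokens), 8))        # (n_tokens, d)

# Invented semantic dependency edges (predicate -> argument):
# chased -ARG1-> cat, chased -ARG2-> mouse.
edges = [(2, 1), (2, 4)]

# Build a symmetric adjacency matrix with self-loops and row-normalize it.
adj = np.eye(len(tokens))
for head, arg in edges:
    adj[head, arg] = adj[arg, head] = 1.0
adj /= adj.sum(axis=1, keepdims=True)

# One graph-convolution layer: aggregate neighbours, project, apply ReLU.
weight = rng.normal(scale=0.1, size=(8, 8))
graph_hidden = np.maximum(adj @ hidden @ weight, 0.0)

# A downstream task head could combine the plain and graph-enriched
# representations, e.g. by concatenation, during finetuning.
combined = np.concatenate([hidden, graph_hidden], axis=-1)
print(combined.shape)  # (5, 16)
```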


1997 · Vol 3 (4) · pp. 279-315
Author(s): Ken Barker, Terry Copeck, Stan Szpakowicz, Sylvain Delisle

Case systems abound in natural language processing. Almost any attempt to recognize and uniformly represent relationships within a clause – a unit at the centre of any linguistic system that goes beyond word-level statistics – must be based on semantic roles drawn from a small, closed set. The set of roles describing the relationships between a verb and its arguments within a clause is a case system. What is required of such a case system? How does a natural language practitioner build a system that is complete and detailed yet practical and natural? This paper chronicles the construction of a case system from its origin in English marker words to its successful application in the analysis of English text.
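To make the notion of a small, closed case system concrete, here is a hypothetical sketch (the role inventory and the marker-word mapping are illustrative inventions, not the system built in the paper):

```python
# Illustrative sketch only: a closed set of cases and marker words that suggest them.
from enum import Enum, auto

class Case(Enum):
    AGENT = auto()
    PATIENT = auto()
    INSTRUMENT = auto()
    BENEFICIARY = auto()
    LOCATION = auto()
    TIME = auto()

# Marker words narrow down which cases an argument may fill; ambiguous markers
# (e.g. "in" suggesting Location or Time) must be resolved in context.
MARKER_CASES = {
    "with": [Case.INSTRUMENT],
    "for": [Case.BENEFICIARY],
    "in": [Case.LOCATION, Case.TIME],
    "at": [Case.LOCATION, Case.TIME],
}

def candidate_cases(marker: str) -> list[Case]:
    """Return the candidate cases licensed by a marker word (empty if unknown)."""
    return MARKER_CASES.get(marker.lower(), [])

print(candidate_cases("with"))  # [<Case.INSTRUMENT: 3>]
```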


2020 · pp. 3-17
Author(s): Peter Nabende

Natural Language Processing for under-resourced languages is now a mainstream research area. However, there are few studies of Natural Language Processing applications for many indigenous East African languages. As a contribution towards closing this knowledge gap, this paper evaluates the application of well-established machine translation methods to one heavily under-resourced indigenous East African language, Lumasaaba. Specifically, we review the most common machine translation methods in the context of Lumasaaba, including both rule-based and data-driven methods. We then apply a state-of-the-art data-driven machine translation method to learn models for automating translation between Lumasaaba and English using a very limited data set of parallel sentences. Automatic evaluation results show that a transformer-based Neural Machine Translation model architecture leads to consistently better BLEU scores than the recurrent neural network-based models. Moreover, the automatically generated translations can be comprehended to a reasonable extent and are usually related to the source language input.
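For reference, the kind of automatic evaluation reported above is typically computed as corpus-level BLEU; a minimal sketch using the sacrebleu library (the example sentences are invented, and this is not the paper's evaluation script):

```python
# Illustrative sketch only: corpus-level BLEU scoring with sacrebleu.
import sacrebleu

# System translations (e.g. Lumasaaba -> English output of an NMT model).
hypotheses = [
    "the children are going to school",
    "she bought food at the market",
]
# One stream of reference translations, aligned sentence-by-sentence
# with the hypotheses.
references = [[
    "the children are going to school",
    "she bought some food at the market",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```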

