Distributional Semantics and Linguistic Theory

2020 ◽  
Vol 6 (1) ◽  
pp. 213-234 ◽  
Author(s):  
Gemma Boleda

Distributional semantics provides multidimensional, graded, empirically induced word representations that successfully capture many aspects of meaning in natural languages, as shown by a large body of research in computational linguistics; yet, its impact in theoretical linguistics has so far been limited. This review provides a critical discussion of the literature on distributional semantics, with an emphasis on methods and results that are relevant for theoretical linguistics, in three areas: semantic change, polysemy and composition, and the grammar–semantics interface (specifically, the interface of semantics with syntax and with derivational morphology). The goal of this review is to foster greater cross-fertilization of theoretical and computational approaches to language as a means to advance our collective knowledge of how it works.
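The multidimensional, graded, empirically induced representations the abstract describes can be illustrated with a minimal count-based sketch: each word's vector is its co-occurrence counts within a context window, and similarity of meaning is approximated by cosine similarity. The toy corpus and window size below are illustrative assumptions, not material from the article.

```python
# Count-based distributional word vectors from a toy corpus.
from collections import Counter, defaultdict
import math

corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "the mouse ate the cheese".split(),
]

WINDOW = 2  # symmetric context window
vectors = defaultdict(Counter)
for sent in corpus:
    for i, word in enumerate(sent):
        for j in range(max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)):
            if i != j:
                vectors[word][sent[j]] += 1

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda w: math.sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

# "cat" and "dog" share chasing contexts, so they come out more
# similar to each other than "cat" is to "cheese".
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["cheese"]))
```

Real distributional models are trained on corpora of millions of tokens and usually apply reweighting (e.g. PPMI) or dimensionality reduction, but the graded, multidimensional character of the representations is already visible in this sketch.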

2005 ◽  
Vol 14 (2) ◽  
pp. 112-123 ◽  
Author(s):  
Anthony T. Cacace ◽  
Dennis J. McFarland

Purpose: This article argues for the use of modality specificity as a unifying framework by which to conceptualize and diagnose central auditory processing disorder (CAPD). The intent is to generate dialogue and critical discussion in this area of study. Method: Research in the cognitive, behavioral, and neural sciences that relates to the concept of modality specificity was reviewed and synthesized. Results: Modality specificity has a long history as an organizing construct within a diverse collection of mainstream scientific disciplines. The principle of modality specificity was contrasted with the unimodal inclusive framework, which holds that auditory tests alone are sufficient to make the CAPD diagnosis. Evidence from a large body of data demonstrated that the unimodal framework was unable to delineate modality-specific processes from more generalized dysfunction; it lacked discriminant validity and resulted in an incomplete assessment. Consequently, any hypothetical model resulting from incomplete assessments, and any potential therapy based on an indeterminate diagnosis, is itself questionable, and caution should be used in application. Conclusions: Improving the specificity of diagnosis is a core imperative for the area of CAPD. Without specificity, the concept has little explanatory power. Because of serious flaws in concept and design, the unimodal inclusive framework should be abandoned in favor of a more valid approach that uses modality specificity.


2021 ◽  
Author(s):  
Victoria Yantseva

The goal of this work is to study the social construction of migrant categories and immigration discourse on Swedish Facebook in the last decade. I combine insights from computational linguistics and the distributional semantics approach with those from classical sociological theories in order to explore a corpus of more than 1M Facebook posts. This makes it possible to compare the intended meanings of various linguistic labels denoting the voluntary versus forced character of migration, as well as to distinguish the most salient themes that constitute the Facebook discourse. The study concludes that, although Facebook appears to have the greatest potential for promoting tolerance and support for migrants, its audience is nevertheless active in the discursive discrimination of those identified as "refugees" or "immigrants". The results of the study are then related to the technological design of new media and the overall social and political climate surrounding the Swedish immigration agenda.


2017 ◽  
Vol 3 (1) ◽  
pp. 11-24
Author(s):  
Martina A. Rodda ◽  
Marco S. G. Senaldi ◽  
Alessandro Lenci



2018 ◽  
Vol 24 (5) ◽  
pp. 649-676 ◽  
Author(s):  
XURI TANG

This paper reviews the state of the art of one emergent field in computational linguistics: semantic change computation. It summarizes the literature by proposing a framework that identifies five components in the field: diachronic corpus, diachronic word sense characterization, change modelling, evaluation and data visualization. Despite its potential, the review shows that current studies are mainly focused on testing hypotheses of semantic change from theoretical linguistics, and that several core issues remain to be tackled: the need for diachronic corpora for languages other than English, the comparison and development of approaches to diachronic word sense characterization and change modelling, the need for comprehensive evaluation data, and further exploration of data visualization techniques for hypothesis justification.
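Two of the components named above, diachronic word sense characterization and change modelling, can be sketched in a few lines: characterize a word's sense in each time slice by its context-count vector, then model change as the cosine distance between the slices. The two mini-corpora and the example word below are illustrative assumptions, not data from the paper.

```python
# Toy diachronic semantic change measurement via context vectors.
from collections import Counter
import math

def context_vector(target, corpus, window=2):
    """Count the words co-occurring with `target` within a window."""
    vec = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w != target:
                continue
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    vec[sent[j]] += 1
    return vec

def cosine_distance(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return 1.0 - dot / (nu * nv)

# "broadcast" once referred to sowing seed; later, to transmitting programmes.
early = ["farmers broadcast seed over the field".split(),
         "they broadcast seed by hand".split()]
late  = ["stations broadcast news every hour".split(),
         "they broadcast music on the radio".split()]

drift = cosine_distance(context_vector("broadcast", early),
                        context_vector("broadcast", late))
print(drift)  # large distance indicates a shift in dominant contexts
```

Published systems differ mainly in how they make the two slices' representations comparable (alignment of embedding spaces, shared vocabularies) and in how they separate genuine change from corpus noise, which is why the paper's evaluation component matters.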


2016 ◽  
Vol 42 (4) ◽  
pp. 619-635 ◽  
Author(s):  
Gemma Boleda ◽  
Aurélie Herbelot

Formal Semantics and Distributional Semantics are two very influential semantic frameworks in Computational Linguistics. Formal Semantics is based on a symbolic tradition and centered around the inferential properties of language. Distributional Semantics is statistical and data-driven, and focuses on aspects of meaning related to descriptive content. The two frameworks are complementary in their strengths, and this has motivated interest in combining them into an overarching semantic framework: a “Formal Distributional Semantics.” Given the fundamentally different natures of the two paradigms, however, building an integrative framework poses significant theoretical and engineering challenges. The present issue of Computational Linguistics advances the state of the art in Formal Distributional Semantics; this introductory article explains the motivation behind it and summarizes the contributions of previous work on the topic, providing the necessary background for the articles that follow.


2012 ◽  
pp. 2117-2124
Author(s):  
Ruket Çakici

Annotated data have recently become more important, and thus more abundant, in computational linguistics. They are used as training material for machine learning systems in a wide variety of applications, from parsing to machine translation (Quirk et al., 2005). Dependency representation is preferred for many languages because linguistic and semantic information is easier to retrieve from the more direct dependency representation. Dependencies are relations defined over words or smaller units, where sentences are divided into elements called heads and their arguments, e.g. verbs and their objects. Dependency parsing aims to predict these dependency relations between lexical units in order to retrieve information, mostly in the form of semantic interpretation or syntactic structure. Parsing is usually considered the first step of natural language processing (NLP). To train statistical parsers, a sample of data annotated with the necessary information is required. There are different views on how informative or functional the representation of natural language sentences should be, and different constraints on the design process, such as: 1) how intuitive (natural) it is, 2) how easy it is to extract information from, and 3) how appropriately and unambiguously it represents the phenomena that occur in natural languages. This article reviews statistical dependency parsing for different languages and discusses current challenges in designing dependency treebanks and in dependency parsing.
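The head-argument relations described above can be made concrete with a small sketch in the spirit of the tabular CoNLL treebank format: each token records its index, form, head index, and relation label. The sentence and its arcs are hand-annotated here for illustration; they are not taken from any treebank.

```python
# A minimal dependency analysis: each token points to exactly one head.
sentence = ["She", "reads", "books"]

# (index, form, head_index, relation); head index 0 denotes the root.
arcs = [
    (1, "She",   2, "nsubj"),  # "She" is the subject of "reads"
    (2, "reads", 0, "root"),   # "reads" heads the sentence
    (3, "books", 2, "obj"),    # "books" is the object of "reads"
]

def is_well_formed(arcs):
    """Every token has exactly one head, and exactly one token is the root."""
    heads = {idx: head for idx, _, head, _ in arcs}
    roots = [idx for idx, h in heads.items() if h == 0]
    return len(heads) == len(arcs) and len(roots) == 1

print(is_well_formed(arcs))
```

A statistical dependency parser is trained on many such annotated sentences and, at test time, predicts the head index and relation label for each token of an unseen sentence; the well-formedness constraint above is what makes the prediction a tree rather than an arbitrary graph.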


2020 ◽  
Author(s):  
Tony Seimon ◽  
Janeth Robinson

Computational linguistics and artificial intelligence are increasingly demanding more effective contributions from language studies to natural language processing. This has driven applied linguistics to produce knowledge that offers reliable models of linguistic production, models not based only on the formal rules of context-free grammars but which instead take natural language understanding as the processing parameter. In a complementary way, the scope of applied linguistics has widened with the need to implement natural language processing in human-computer interaction, incorporating the machine into its research and application practices. Among these demands, the search for models that go beyond the level of the clause stands out, in particular by turning to the structure of texts and, consequently, to textual genres. Situated in this context, this article aims to contribute solutions to these demands in relation to the study of conversational structures. It thus offers a linguistic model of the grammatical systems that realize the potential structures of conversation in various contexts. More specifically, it produces a model capable of describing how the system networks are built and, consequently, how this dynamic explains the organization of conversations.


Author(s):  
John Nerbonne

This article examines the application of natural language processing to computer-assisted language learning (CALL), including the history of work in this field over the last thirty-five years, and focuses on current developments and opportunities. CALL here refers to programs designed to help people learn foreign languages. CALL is a large field, much larger than computational linguistics. This article outlines the areas of CALL to which computational linguistics (CL) can be applied. CL programs process natural languages such as English and Spanish, and the techniques are therefore often referred to as natural language processing (NLP). NLP is enlisted in several ways in CALL: to provide lemmatized access to corpora for advanced learners seeking subtleties unavailable in grammars and dictionaries, to provide morphological analysis and subsequent dictionary access for words unknown to readers, and to parse user input and diagnose morphological and syntactic errors.
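The morphological support described above can be sketched as a tiny rule-based English lemmatizer that maps an inflected form from learner input back to a dictionary headword. The suffix rules and exception list below are illustrative assumptions, far smaller than any real morphological analyzer used in CALL systems.

```python
# A toy lemmatizer: exceptions first, then longest-suffix stripping rules.
EXCEPTIONS = {"went": "go", "mice": "mouse", "children": "child"}
SUFFIX_RULES = [("ies", "y"), ("sses", "ss"), ("ing", ""), ("ed", ""), ("s", "")]

def lemmatize(word):
    """Map an inflected English form to a plausible dictionary headword."""
    word = word.lower()
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    for suffix, repl in SUFFIX_RULES:
        # Require a reasonably long stem so "his" is not stripped to "hi".
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + repl
    return word

# Dictionary lookup for a reader's unknown word can then key on the lemma.
print(lemmatize("studies"))  # -> "study"
print(lemmatize("went"))     # -> "go"
```

A production system would use a full morphological analyzer with a lexicon, which is also what allows the error-diagnosis step: comparing the analysis of learner input against the forms the grammar licenses.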

