Linguistics fit for dialogue

2003 ◽  
Vol 26 (6) ◽  
pp. 678-678 ◽  
Author(s):  
Simon Garrod ◽  
Martin J. Pickering

Foundations of Language (Jackendoff 2002) sets out to reconcile generative accounts of language structure with psychological accounts of language processing. We argue that Jackendoff's “parallel architecture” is a particularly appropriate linguistic framework for the interactive alignment account of dialogue processing. It offers a helpful definition of linguistic levels of representation, it gives an interesting account of routine expressions, and it supports radical incrementality in processing.

2021 ◽  
Vol 1 ◽  
pp. 2691-2700
Author(s):  
Stefan Goetz ◽  
Dennis Horber ◽  
Benjamin Schleich ◽  
Sandro Wartzack

The success of complex product development projects strongly depends on the clear definition of target factors that allow a reliable statement about the fulfilment of the product requirements. In the context of tolerancing and robust design, Key Characteristics (KCs) have been established for this purpose and form the basis for all downstream activities. In order to integrate the activities related to KC definition into product development as early as possible, the often vaguely formulated requirements must be translated into quantifiable KCs. However, this is primarily a manual process, so the results depend strongly on the experience of the design engineer. To overcome this problem, a novel computer-aided approach is presented which automatically derives associated functions and KCs during the definition of product requirements. The approach uses natural language processing and formalized design knowledge to extract and provide implicit information from the requirements. This leads to a clear definition of the requirements and KCs and thus creates a sound basis for robustness evaluation at the beginning of the concept design stage. The approach is demonstrated using the example of a window lifter.
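As a rough illustration (not taken from the paper) of how natural language processing could turn a vaguely worded requirement into a quantifiable Key Characteristic, the Python sketch below pattern-matches a numeric constraint in a requirement sentence. The regular expression, the relation mapping, and the example sentence are assumptions made purely for illustration; the paper additionally draws on formalized design knowledge, which this sketch omits.

```python
import re
from dataclasses import dataclass

@dataclass
class KeyCharacteristic:
    quantity: str      # descriptive phrase the constraint applies to
    relation: str      # e.g. "<="
    value: float
    unit: str

# Hypothetical mapping from requirement phrasing to a relation; real formalized
# design knowledge would be far richer than this illustration.
RELATION_WORDS = {"within": "<=", "at most": "<=", "at least": ">=", "exactly": "=="}

PATTERN = re.compile(
    r"(?P<quantity>[\w\s]+?)\s+(?P<word>within|at most|at least|exactly)\s+"
    r"(?P<value>\d+(?:\.\d+)?)\s*(?P<unit>\w+)",
    re.IGNORECASE)

def extract_kcs(requirement: str) -> list[KeyCharacteristic]:
    """Derive candidate quantifiable KCs from a single requirement sentence."""
    kcs = []
    for m in PATTERN.finditer(requirement):
        kcs.append(KeyCharacteristic(
            quantity=m.group("quantity").strip(),
            relation=RELATION_WORDS[m.group("word").lower()],
            value=float(m.group("value")),
            unit=m.group("unit")))
    return kcs

# Prints one candidate KC with relation '<=', value 4.0 and unit 'seconds'.
print(extract_kcs("The window shall close fully within 4 seconds."))
```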


2010 ◽  
Vol 17 (2) ◽  
pp. 207-241 ◽  
Author(s):  
Francis Cornish

The traditional definition of anaphora in purely co-textual terms as a relation between two co-occurring expressions is in wide currency in theoretical and descriptive studies of the phenomenon. Indeed, it is currently adopted in on-line psycholinguistic experiments on the interpretation of anaphors, and is the basis for all computational approaches to automatic anaphor resolution (see Mitkov 2002). Under this conception, the anaphor, a referentially-dependent expression type, requires “saturation” by an appropriate referentially-autonomous, lexically-based expression — the antecedent — in order to achieve full sense and reference. However, this definition needs to be re-examined in the light of the ways in which real texts operate and are understood, where the resulting picture is rather different. The article aims to show that the co-textual conception is misconceived, and that anaphora is essentially an integrative, discourse-creating procedure involving a three-way relationship between an “antecedent trigger”, an anaphoric predication, and a salient discourse representation. It is shown that it is only in terms of a dynamic interaction amongst the interdependent dimensions of text and discourse, as well as context, that the true complexity of anaphoric reference may be satisfactorily described. The article is intended as a contribution to the broader debate within the pages of this journal and elsewhere between the formalist and the functionalist accounts of language structure and use.


2008 ◽  
Vol 34 (4) ◽  
pp. 597-614 ◽  
Author(s):  
Trevor Cohn ◽  
Chris Callison-Burch ◽  
Mirella Lapata

Automatic paraphrasing is an important component in many natural language processing tasks. In this article we present a new parallel corpus with paraphrase annotations. We adopt a definition of paraphrase based on word alignments and show that it yields high inter-annotator agreement. As Kappa is suited only to nominal data, we instead employ an alternative agreement statistic appropriate for structured alignment tasks. We discuss how the corpus can be usefully employed in evaluating paraphrase systems automatically (e.g., by measuring precision, recall, and F1) and also in developing linguistically rich paraphrase models based on syntactic structure.
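For readers unfamiliar with alignment-based evaluation, here is a generic Python sketch (not the authors' code) of how precision, recall, and F1 can be computed over predicted versus gold word-alignment pairs; the toy alignments are invented for illustration.

```python
def alignment_prf(predicted: set[tuple[int, int]],
                  gold: set[tuple[int, int]]) -> tuple[float, float, float]:
    """Precision, recall and F1 of predicted alignment pairs against gold pairs."""
    if not predicted or not gold:
        return 0.0, 0.0, 0.0
    true_pos = len(predicted & gold)
    precision = true_pos / len(predicted)
    recall = true_pos / len(gold)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: alignments are (source_index, target_index) pairs.
gold = {(0, 0), (1, 2), (2, 1)}
pred = {(0, 0), (1, 2), (2, 2)}
print(alignment_prf(pred, gold))  # (0.666..., 0.666..., 0.666...)
```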


1993 ◽  
Vol 15 (4) ◽  
pp. 380-395 ◽  
Author(s):  
Margaret L. Placier

Definitions of key policy terms are important elements in policy construction. Accordingly, the power to define such terms is a linguistic marker of relationships among players in the policy process. Combining a linguistic framework with the cultural framework of Marshall, Mitchell, and Wirt (1989), this article traces the definition of the term "at risk" in the context of one state, Arizona. Researchers in the Department of Education used the definition process as an opportunity to enhance the department's prestige and power in relation to other policy-making bodies.


2005 ◽  
Vol 31 (4) ◽  
pp. 439-475 ◽  
Author(s):  
Julie Weeds ◽  
David Weir

Techniques that exploit knowledge of distributional similarity between words have been proposed in many areas of Natural Language Processing. For example, in language modeling, the sparse data problem can be alleviated by estimating the probabilities of unseen co-occurrences of events from the probabilities of seen co-occurrences of similar events. In other applications, distributional similarity is taken to be an approximation to semantic similarity. However, due to the wide range of potential applications and the lack of a strict definition of the concept of distributional similarity, many methods of calculating distributional similarity have been proposed or adopted. In this work, a flexible, parameterized framework for calculating distributional similarity is proposed. Within this framework, the problem of finding distributionally similar words is cast as one of co-occurrence retrieval (CR) for which precision and recall can be measured by analogy with the way they are measured in document retrieval. As will be shown, a number of popular existing measures of distributional similarity are simulated with parameter settings within the CR framework. In this article, the CR framework is then used to systematically investigate three fundamental questions concerning distributional similarity. First, is the relationship of lexical similarity necessarily symmetric, or are there advantages to be gained from considering it as an asymmetric relationship? Second, are some co-occurrences inherently more salient than others in the calculation of distributional similarity? Third, is it necessary to consider the difference in the extent to which each word occurs in each co-occurrence type? Two application-based tasks are used for evaluation: automatic thesaurus generation and pseudo-disambiguation. It is possible to achieve significantly better results on both these tasks by varying the parameters within the CR framework rather than using other existing distributional similarity measures; it will also be shown that any single unparameterized measure is unlikely to be able to do better on both tasks. This is due to an inherent asymmetry in lexical substitutability and therefore also in lexical distributional similarity.
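A simplified, unweighted Python sketch of the retrieval analogy may help: the candidate neighbour's co-occurrence types play the role of retrieved items, and the target word's play the role of relevant ones. The feature sets below are invented, and the article's actual framework weights and parameterizes these sets rather than treating them as plain sets.

```python
def cr_precision_recall(target_feats: set[tuple[str, str]],
                        candidate_feats: set[tuple[str, str]]) -> tuple[float, float]:
    """Co-occurrence retrieval by analogy with document retrieval:
    the candidate's co-occurrences are 'retrieved', the target's are 'relevant'."""
    shared = target_feats & candidate_feats
    precision = len(shared) / len(candidate_feats) if candidate_feats else 0.0
    recall = len(shared) / len(target_feats) if target_feats else 0.0
    return precision, recall

# Toy co-occurrence types as (grammatical relation, co-occurring word) pairs.
dog = {("obj-of", "walk"), ("obj-of", "feed"), ("subj-of", "bark")}
cat = {("obj-of", "feed"), ("subj-of", "purr")}
print(cr_precision_recall(dog, cat))  # precision 0.5, recall ~0.33
```

Swapping the two arguments changes precision and recall, which is exactly the asymmetry in lexical substitutability that the article investigates.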


Author(s):  
Jana Papcunová ◽  
Marcel Martončik ◽  
Denisa Fedáková ◽  
Michal Kentoš ◽  
Miroslava Bozogáňová ◽  
...  

Hate speech should be tackled and prosecuted based on how it is operationalized. However, the existing theoretical definitions of hate speech are not sufficiently fleshed out or easily operable. To overcome this inadequacy, and with the help of interdisciplinary experts, we propose an empirical definition of hate speech by providing a list of 10 hate speech indicators and the rationale behind them (the indicators refer to specific, observable, and measurable characteristics that offer a practical definition of hate speech). A preliminary exploratory examination of the structure of hate speech, focusing on comments related to migrants (one of the most frequently reported grounds of hate speech), revealed that two indicators in particular, denial of human rights and promoting violent behavior, occupy a central role in the network of indicators. Furthermore, we discuss the practical implications of the proposed hate speech indicators, especially (semi-)automatic detection using the latest natural language processing (NLP) and machine learning (ML) methods. Having a set of quantifiable indicators could benefit researchers, human rights activists, educators, analysts, and regulators by providing them with a pragmatic approach to hate speech assessment and detection.
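As a minimal illustration of the (semi-)automatic detection mentioned above, a single indicator can be cast as a supervised text-classification task. The sketch below uses a standard scikit-learn pipeline; the tiny inline comment set, its labels, and the choice of model are purely hypothetical and not the authors' setup.

```python
# Minimal sketch: detecting one hypothetical indicator ("promoting violent
# behavior") as a binary text classifier over annotated comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "They should all be thrown out of the country by force",
    "We need better language courses for newcomers",
    "Someone ought to teach them a lesson with their fists",
    "The new integration centre opens next month",
]
labels = [1, 0, 1, 0]  # 1 = indicator present, 0 = absent (hypothetical annotations)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)

print(clf.predict(["Throw them out by force"]))  # predicted indicator label
```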


2019 ◽  
Author(s):  
Nick Papoulias

Background. Context-free grammars (CFGs) and Parsing-expression Grammars (PEGs) are the two main formalisms used by formal specifications and parsing frameworks to describe programming languages. They mainly differ in the definition of the choice operator, describing language alternatives. CFGs support the use of non-deterministic choice (i.e., unordered choice), where all alternatives are equally explored. PEGs support a deterministic choice (i.e., ordered choice), where alternatives are explored in strict succession. In practice, the two formalisms are used through concrete classes of parsing algorithms (such as Left-to-right, rightmost derivation (LR) for CFGs and Packrat parsing for PEGs) that follow the semantics of the formal operators. Problem Statement. Neither of the two formalisms nor the accompanying algorithms is sufficient for a complete description of common cases arising in language design. In order to properly handle ambiguity, recursion, precedence, or associativity, parsing frameworks either introduce implementation-specific directives or ask users to refactor their grammars to fit the needs of the framework/algorithm/formalism combination. This introduces significant complexity even in simple cases and results in incompatible grammar specifications. Our Proposal. We introduce Multi-Ordered Grammars (MOGs) as an alternative to the CFG and PEG formalisms. MOGs aim for a better exploration of ambiguity, ordering, recursion and associativity during language design. This is achieved by (a) allowing both deterministic and non-deterministic choices to co-exist, and (b) introducing a form of recursive and scoped ordering. The formalism is accompanied by a new parsing algorithm (Gray) that extends chart parsing (normally used for Natural Language Processing) with the proposed MOG operators. Results. We conduct two case studies to assess the expressiveness of MOGs compared to CFGs and PEGs. The first consists of two idealized examples from the literature (an expression grammar and a simple procedural language). The second examines a real-world case (the entire Smalltalk grammar and eleven new Smalltalk extensions), probing the complexities of practical needs. We show that, in comparison, MOGs are able to reduce complexity and naturally express language constructs without resorting to implementation-specific directives. Conclusion. We conclude that combining deterministic and non-deterministic choices in a single grammar specification is indeed not only possible but also beneficial. Moreover, augmented by operators for recursive and scoped ordering, the resulting multi-ordered formalism presents a viable alternative to both CFGs and PEGs. Concrete implementations of MOGs can be constructed by extending chart parsing with MOG operators for recursive and scoped ordering.
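The core distinction between the two choice operators can be made concrete with a toy parser-combinator sketch in Python. This illustrates ordered versus unordered choice only; it is not an implementation of MOGs or of the Gray algorithm.

```python
# Ordered choice (PEG '/') commits to the first matching alternative;
# unordered choice (CFG '|') keeps every alternative that matches.
from typing import Callable, Iterator

Parser = Callable[[str, int], Iterator[int]]  # yields positions after a match

def literal(tok: str) -> Parser:
    def parse(s: str, i: int) -> Iterator[int]:
        if s.startswith(tok, i):
            yield i + len(tok)
    return parse

def ordered_choice(*alts: Parser) -> Parser:
    def parse(s: str, i: int) -> Iterator[int]:
        for alt in alts:
            results = list(alt(s, i))
            if results:            # commit to the first alternative that matches
                yield from results
                return
    return parse

def unordered_choice(*alts: Parser) -> Parser:
    def parse(s: str, i: int) -> Iterator[int]:
        for alt in alts:           # keep all alternatives, hence all ambiguities
            yield from alt(s, i)
    return parse

a, ab = literal("a"), literal("ab")
print(list(ordered_choice(a, ab)("abc", 0)))    # [1]    -- 'ab' is never tried
print(list(unordered_choice(a, ab)("abc", 0)))  # [1, 2] -- both parses survive
```

In the ordered case the shorter alternative shadows the longer one, whereas the unordered case keeps both; MOGs let the two operators co-exist in one grammar and add recursive, scoped ordering on top.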


Author(s):  
Ray Jackendoff ◽  
Jenny Audring

The Texture of the Lexicon explores three interwoven themes: a morphological theory, the structure of the lexicon, and an integrated account of the language capacity and its place in the mind. These themes together constitute the theory of Relational Morphology (RM), extending the Parallel Architecture of Jackendoff’s groundbreaking Foundations of Language. Part I (chapters 1–3) situates morphology in the architecture of the language faculty, and introduces a novel formalism that unifies the treatment of morphological patterns, from totally productive to highly marginal. Two major points emerge. First, traditional word formation rules and realization rules should be replaced by declarative schemas, formulated in the same terms as words. Hence the grammar should really be thought of as part of the lexicon. Second, the traditional emphasis on productive patterns, to the detriment of nonproductive patterns, is misguided; linguistic theory can and should encompass them both. Part II (chapters 4–6) puts the theory to the test, applying it to a wide range of familiar and less familiar morphological phenomena. Part III (chapters 7–9) connects RM with language processing, language acquisition, and a broad selection of linguistic and nonlinguistic phenomena beyond morphology. The framework is therefore attractive not only for its ability to account insightfully for morphological phenomena, but equally for its contribution to the integration of linguistic theory, psycholinguistics, and human cognition.


2018 ◽  
Vol 12 (2) ◽  
pp. 372-404
Author(s):  
VERA FLOCKE

A definition of a property P is impredicative if it quantifies over a domain to which P belongs. Due to influential arguments by Ramsey and Gödel, impredicative mathematics is often thought to possess special metaphysical commitments. The reason is that an impredicative definition of a property P does not have its intended meaning unless P exists, suggesting that the existence of P cannot depend on its explicit definition. Carnap (1937 [1934], p. 164) argues, however, that accepting impredicative definitions amounts to choosing a “form of language” and is free from metaphysical implications. This article explains this view in its historical context. I discuss the development of Carnap’s thought on the foundations of mathematics from the mid-1920s to the mid-1930s, concluding with an account of Carnap’s (1937 [1934]) non-Platonistic defense of impredicativity. This discussion is also important for understanding Carnap’s influential views on ontology more generally, since Carnap’s (1937 [1934]) view, according to which accepting impredicative definitions amounts to choosing a “form of language”, is an early precursor of the view that Carnap presents in “Empiricism, Semantics and Ontology” (1956 [1950]), according to which referring to abstract entities amounts to accepting a “linguistic framework”.
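A standard textbook illustration of such a definition (not drawn from the article itself) is the least upper bound of a bounded set of reals, written here in LaTeX:

```latex
\operatorname{lub}(S) \;=\; \text{the unique } b \text{ such that }\;
(\forall s \in S)\; s \le b
\;\wedge\;
(\forall b')\,\bigl[(\forall s \in S)\; s \le b' \;\rightarrow\; b \le b'\bigr]
```

The definition quantifies over all upper bounds b' of S, a totality to which lub(S) itself belongs, which is what makes it impredicative.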

