Probability Distributions in Type Theory with Applications in Natural Language Syntax

Author(s):  
Krasimir Angelov
2007 · Vol 18 (2) · pp. 203-203
Author(s):
C. Fox · M. Fernandez · S. Lappin

Language · 2010 · Vol 86 (4) · pp. 945-948
Author(s):
Robert D. Borsley

2021
Author(s):
Elliot Murphy · Emma Holmes · Karl Friston

Natural language syntax yields an unbounded array of hierarchically structured expressions. We claim that these are used in the service of active inference, in accord with the free-energy principle (FEP). While conceptual advances, alongside modelling and simulation work, have attempted to connect speech segmentation and linguistic communication with the FEP, we extend this program to the underlying computations responsible for generating elementary syntactic objects. We argue that recently proposed principles of economy in language design, such as the "minimal search" and "least effort" criteria from theoretical syntax, adhere to the FEP. This grants the FEP a greater degree of explanatory power with respect to higher language functions, and gives linguists a grounding in first principles for notions pertaining to computability. More generally, we explore the possibility of migrating certain topics in linguistics, such as complex polysemy, over to the domain of fields that investigate the FEP. We aim to align the concerns of linguists with the normative model of organic self-organisation associated with the FEP, marshalling evidence from theoretical linguistics and psycholinguistics to ground core principles of efficient syntactic computation within active inference.


2004 · Vol 13 (02) · pp. 333-365
Author(s):
Manolis Maragoudakis · Aristomenis Thanopoulos · Kyriakos Sgarbas · Nikos Fakotakis

This paper introduces a statistical framework for extracting medical domain knowledge from heterogeneous corpora. The acquired information is incorporated into a natural language understanding agent and applied to DIKTIS, an existing web-based educational dialogue system for the chemotherapy of nosocomial and community-acquired pneumonia, with the aim of providing more intelligent natural language interaction. Unlike the majority of existing dialogue understanding engines, the presented system automatically encodes the semantic representation of a user's query using Bayesian networks. The structure of the networks is determined from annotated dialogue corpora using the Bayesian scoring method, thus eliminating the tedious and costly process of manually coding domain knowledge. The conditional probability distributions are estimated during a training phase using data obtained from the same set of dialogue acts. In order to cope with words absent from the restricted dialogue corpus, a separate offline module was incorporated, which estimates their semantic role from both medical and general raw text corpora, correlating them with known lexical-semantically similar words or predefined topics. Lexical similarity is identified on the basis of both contextual similarity and co-occurrence in conjunctive expressions. The platform was evaluated against the existing natural language understanding module of DIKTIS, whose architecture is based on manually embedded domain knowledge.
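The parameter-estimation step described above — learning conditional probability distributions for a Bayesian network node from annotated dialogue-act data — can be sketched in plain Python. This is a minimal illustration only: the dialogue acts, slot names, and the `estimate_cpt` helper are hypothetical, and the paper's actual system also learns the network *structure* via Bayesian scoring, which is not shown here. Add-alpha (Laplace) smoothing is assumed as one standard way to handle sparse counts.

```python
from collections import Counter, defaultdict

def estimate_cpt(pairs, alpha=1.0):
    """Estimate P(child | parent) from (parent, child) observations
    using add-alpha (Laplace) smoothing over the observed child values."""
    child_values = sorted({c for _, c in pairs})
    counts = defaultdict(Counter)
    for parent, child in pairs:
        counts[parent][child] += 1
    cpt = {}
    for parent, ctr in counts.items():
        total = sum(ctr.values()) + alpha * len(child_values)
        cpt[parent] = {c: (ctr[c] + alpha) / total for c in child_values}
    return cpt

# Toy annotated corpus (hypothetical): each pair maps a dialogue act
# to the semantic slot filled by the user's query.
data = [
    ("ask_dosage", "drug"), ("ask_dosage", "drug"),
    ("ask_dosage", "duration"),
    ("ask_pathogen", "disease"), ("ask_pathogen", "disease"),
]
cpt = estimate_cpt(data)
# e.g. cpt["ask_dosage"]["drug"] == (2 + 1) / (3 + 3) == 0.5
```

Each row of the resulting table is a smoothed distribution over slot values, so every conditional distribution sums to one even for slots never observed with a given dialogue act.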


2017 · Vol 10 (2) · pp. 193-207
Author(s):
Sofia Stroustrup · Mikkel Wallentin

Natural language syntax has previously been thought to reflect abstract processing rules independent of meaning construction. However, grammatical categories may serve a functional role by allocating attention towards recurrent topics in discourse. Here, we show that listeners incorporate grammatical category into imagery when producing stick figure drawings from heard sentences, supporting the latter view. Participants listened to sentences with transitive verbs that independently varied whether a male or a female character (1) was mentioned first, (2) was the agent or recipient of an action, and (3) was the grammatical subject or object of the sentence. Replicating previous findings, we show that the first named character as well as the agent of the sentence tends to be drawn to the left in the image, probably reflecting left-to-right reading direction. But we also find that the grammatical subject of the sentence has a propensity to be drawn to the left of the object. We interpret this to suggest that grammatical category carries discursive meaning as an attention allocator. Our findings also highlight how language influences processes hitherto thought to be non-linguistic.

