The Discriminatory Power of Lexical Context for Alternations: An Information-theoretic Exploration

Author(s): Stefan Th. Gries
2019, Vol. 30(3), pp. 157-168
Author(s): Helmut Hildebrandt, Jana Schill, Jana Bördgen, Andreas Kastrup, Paul Eling

Abstract. This article explores the possibility of differentiating between patients suffering from Alzheimer's disease (AD) and patients with other kinds of dementia by focusing on false alarms (FAs) on a picture recognition task (PRT). In Study 1, we compared AD and non-AD patients on the PRT and found that FAs discriminate well between these groups. Study 2 aimed to improve the discriminatory power of the FA score on the PRT by adding associated pairs. Here, too, the FA score differentiated well between AD and non-AD patients, though the discriminatory power did not improve. The findings suggest that AD patients show a liberal response bias. Taken together, these studies suggest that FAs in picture recognition are of major importance for the clinical diagnosis of AD.


Author(s): Ryan Ka Yau Lai, Youngah Do

This article explores a method of creating confidence bounds for information-theoretic measures in linguistics, such as entropy, Kullback-Leibler Divergence (KLD), and mutual information. We show that a useful measure of uncertainty can be derived from simple statistical principles, namely the asymptotic distribution of the maximum likelihood estimator (MLE) and the delta method. Three case studies from phonology and corpus linguistics are used to demonstrate how to apply it and examine its robustness against common violations of its assumptions in linguistics, such as insufficient sample size and non-independence of data points.
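As a minimal sketch of the kind of interval the MLE-plus-delta-method approach yields (not the authors' exact procedure), the snippet below computes a plug-in entropy estimate from multinomial counts and an approximate normal confidence interval. The `entropy_ci` helper and the toy counts are illustrative assumptions, not material from the article.

```python
import numpy as np
from scipy import stats

def entropy_ci(counts, alpha=0.05):
    """Plug-in (MLE) entropy in nats with a delta-method confidence interval.

    counts: category counts from a single multinomial sample.
    Returns (estimate, lower bound, upper bound).
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts / n
    nz = p > 0                       # treat 0 * log 0 as 0
    log_p = np.log(p[nz])

    h_hat = -np.sum(p[nz] * log_p)   # plug-in (MLE) entropy estimate

    # Delta method: the gradient of H with respect to p_i is -(log p_i + 1);
    # under the multinomial covariance (diag(p) - p p^T) / n the constant
    # part cancels, giving Var(H_hat) ~= (sum_i p_i (log p_i)^2 - H_hat^2) / n.
    var_hat = (np.sum(p[nz] * log_p ** 2) - h_hat ** 2) / n

    z = stats.norm.ppf(1 - alpha / 2)
    half_width = z * np.sqrt(max(var_hat, 0.0))
    return h_hat, h_hat - half_width, h_hat + half_width

# Toy usage: counts of four hypothetical categories from one corpus sample.
print(entropy_ci([120, 60, 15, 5]))
```

The same delta-method recipe extends to KLD and mutual information by swapping in the corresponding gradient; the case studies in the article probe how such intervals behave when the sample is small or the data points are not independent.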


2011, Vol. 30(4), pp. 801-804
Author(s): Xing-zai Lü, Zhen Wang, Jin-kang Zhu
