Is Subjective Duration a Signature of Coding Efficiency?

2010 ◽  
Vol 10 (7) ◽  
p. 1405
Author(s):  
D. Eagleman ◽  
V. Pariyadath

2014 ◽  
Vol 1 (1) ◽  
pp. 1-12 ◽  
Author(s):  
William J. Matthews ◽  
Devin B. Terhune ◽  
Hedderik van Rijn ◽  
David M. Eagleman ◽  
Marc A. Sommer ◽  
...  

2009 ◽  
Vol 364 (1525) ◽  
pp. 1841-1851 ◽  
Author(s):  
David M. Eagleman ◽  
Vani Pariyadath

Perceived duration is conventionally assumed to correspond with objective duration, but a growing literature suggests a more complex picture. For example, a repeated stimulus appears briefer in duration than a novel stimulus of equal physical duration. Such duration illusions appear to parallel the neural phenomenon of repetition suppression, and we marshal evidence for a new hypothesis: the experience of duration is a signature of the amount of energy expended in representing a stimulus, i.e., its coding efficiency. This hypothesis offers a unified explanation for almost a dozen illusions in the literature in which subjective duration is modulated by stimulus properties such as size, brightness, motion and rate of flicker.
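To make the hypothesis concrete, here is a minimal sketch, not the authors' model: all parameter values and function names are illustrative assumptions. Response energy decays with repetition, as in repetition suppression, and perceived duration is read out as proportional to that energy.

```python
# Toy illustration (not the authors' model): if perceived duration tracks
# the energy spent encoding a stimulus, and repetition suppresses that
# energy, a repeated stimulus should be judged briefer than a novel one.
# All parameter values below are illustrative assumptions.

def neural_energy(n_repetitions: int, base: float = 1.0,
                  suppression: float = 0.2, floor: float = 0.4) -> float:
    """Response energy decays with each repetition (repetition suppression)."""
    return max(floor, base * (1.0 - suppression) ** n_repetitions)

def perceived_duration(physical_ms: float, n_repetitions: int) -> float:
    """Hypothesis: subjective duration scales with coding energy."""
    return physical_ms * neural_energy(n_repetitions)

for n in range(4):
    print(f"presentation {n + 1}: a 500 ms stimulus feels like "
          f"{perceived_duration(500.0, n):.0f} ms")
```

Under these toy parameters the first presentation is perceived at face value, while later repetitions are compressed toward the energy floor, matching the qualitative pattern of the repetition illusion.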


1992 ◽  
Vol 139 (2) ◽  
p. 224 ◽
Author(s):  
A.B. Johannessen ◽  
R. Prasad ◽  
N.B.J. Weyland ◽  
J.H. Bons

Linguistics ◽  
2020 ◽  
Vol 59 (1) ◽  
pp. 123-174
Author(s):  
Martin Haspelmath

Argument coding splits such as differential (= split) object marking and split ergative marking have long been known to be universal tendencies, but the generalizations have not been formulated in their full generality before. In particular, ditransitive constructions have rarely been taken into account, and scenario splits have often been treated separately. Here I argue that all these patterns can be understood in terms of the usual association of role rank (highly ranked A and R, low-ranked P and T) and referential prominence (locuphoric person, animacy, definiteness, etc.). At the most general level, the role-reference association universal says that deviations from usual associations of role rank and referential prominence tend to be coded by longer grammatical forms. In other words, A and R tend to be referentially prominent in language use, while P and T are less prominent, and when less usual associations need to be expressed, languages often require special coding by means of additional flags (case-markers and adpositions) or additional verbal voice coding (e.g., inverse or passive markers). I argue that role-reference associations are an instance of the even more general pattern of form-frequency correspondences, and that the resulting coding asymmetries can all be explained by frequency-based predictability and coding efficiency.
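The frequency-based explanation in the final sentence can be illustrated with a short information-theoretic sketch. The pairings and probabilities below are invented for demonstration, not corpus data: under an efficient code, a pairing's ideal code length is its surprisal, so rarer role-reference associations warrant longer coding, just as the universal predicts.

```python
import math

# Illustrative sketch of the form-frequency logic. The pairings and
# probabilities below are invented for demonstration, not corpus data.
# Under an efficient code, ideal code length is surprisal (-log2 p),
# so rarer role-reference associations warrant longer grammatical coding.

usage_frequency = {
    ("A", "animate definite"): 0.45,  # usual: agents are prominent
    ("A", "inanimate"): 0.05,         # unusual: tends to need special coding
    ("P", "indefinite"): 0.40,        # usual: patients are less prominent
    ("P", "animate definite"): 0.10,  # unusual: differential object marking
}

for (role, reference), p in usage_frequency.items():
    bits = -math.log2(p)  # ideal code length for this pairing
    print(f"{role} + {reference}: p = {p:.2f} -> {bits:.2f} bits")
```

The rare pairings come out needing roughly two to four extra bits, mirroring the tendency for unusual associations to attract additional flags or voice marking.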


Biosystems ◽  
2001 ◽  
Vol 62 (1-3) ◽  
pp. 87-97 ◽  
Author(s):  
Peter N. Steinmetz ◽  
Amit Manwani ◽  
Christof Koch

Perception ◽  
10.1068/p2996 ◽  
2000 ◽  
Vol 29 (9) ◽  
pp. 1041-1055 ◽  
Author(s):  
Nuala Brady ◽  
David J Field

2015 ◽  
Vol 740 ◽  
pp. 652-655
Author(s):  
Qian Huang ◽  
Feng Xu

Interlaced scanning has been widely used as a trade-off between picture quality and transmission bandwidth since the invention of television. Over the past decades, various interlaced-to-progressive conversion algorithms have been proposed to improve subjective quality or coding efficiency. However, almost all of this research concentrates on the general case without making full use of specific application scenarios. Eliminating visual artifacts in subtitle and station-caption areas of interlaced sports and news video remains an unsolved problem, which this paper addresses. Firstly, motion estimation is performed between field pictures. Secondly, a text edge detection method is proposed for sports and news video. Finally, different processing strategies are applied to text regions and non-text regions. Experimental results show that the proposed method generates much better text content than existing algorithms while remaining stable in non-text regions.
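A structural sketch of the three-stage pipeline follows. The paper's concrete motion-estimation and text-detection algorithms are not reproduced here, so every stage below is a simplified placeholder under stated assumptions, not the authors' method.

```python
import numpy as np

# Structural sketch of the three-stage approach described above; each stage
# is a simplified placeholder, not the authors' method. A field is a
# half-height array of rows (top = even scan lines, bottom = odd scan lines).

def estimate_motion(top: np.ndarray, bottom: np.ndarray,
                    thresh: int = 30) -> np.ndarray:
    """Stage 1 (placeholder): flag positions whose inter-field difference
    exceeds a threshold as moving."""
    return np.abs(top.astype(np.int16) - bottom.astype(np.int16)) > thresh

def detect_text(field: np.ndarray, thresh: int = 60) -> np.ndarray:
    """Stage 2 (placeholder): subtitles and station captions show dense,
    high-contrast horizontal gradients; flag those positions as text."""
    grad = np.abs(np.diff(field.astype(np.int16), axis=1, prepend=0))
    return grad > thresh

def deinterlace(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Stage 3: weave static and text regions (keeps glyph edges sharp),
    bob moving non-text regions (avoids combing artifacts)."""
    woven = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
    woven[0::2], woven[1::2] = top, bottom  # interleave the two fields
    bobbed = np.repeat(top, 2, axis=0)      # line-double a single field
    moving = np.repeat(estimate_motion(top, bottom), 2, axis=0)
    text = np.repeat(detect_text(top), 2, axis=0)
    # Only moving non-text pixels fall back to the bobbed field; text and
    # static regions keep the full-resolution woven result.
    return np.where(moving & ~text, bobbed, woven)
```

Weaving keeps full vertical resolution where it is safe, while bobbing trades resolution for freedom from combing in moving regions; the text mask is what lets caption areas receive their own strategy, which is the core idea of the paper.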

