Time without clocks: Human time perception based on perceptual classification

2017 ◽  
Author(s):  
Warrick Roseboom ◽  
Zafeirios Fountas ◽  
Kyriacos Nikiforou ◽  
David Bhowmik ◽  
Murray Shanahan ◽  
...  

Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation for how human time perception might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we built an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and accumulation of salient changes in activation is used to estimate duration. Estimates produced by this system match human reports made about the same videos, replicating key qualitative biases, including differentiating between scenes of walking around a busy city or sitting in a cafe or office. Our approach provides a working model of duration perception from stimulus to estimation and presents a new direction for examining the foundations of this central aspect of human experience.
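The accumulation mechanism described in this abstract can be caricatured in a few lines. This is a toy sketch, not the authors' model: the `estimate_duration` function, the Euclidean change measure, the fixed `threshold`, and the `seconds_per_event` readout are all illustrative assumptions (the paper uses activations from a deep classification network with a learned mapping to duration).

```python
import numpy as np

def estimate_duration(activations, threshold=1.0, seconds_per_event=0.5):
    """Toy sketch: accumulate salient frame-to-frame changes in network
    activation and read the tally out as a duration estimate."""
    events = 0
    for prev, curr in zip(activations[:-1], activations[1:]):
        change = np.linalg.norm(curr - prev)  # size of the network-state change
        if change > threshold:                # only 'salient' changes accumulate
            events += 1
    return events * seconds_per_event         # map event count to seconds

# A busy scene (large activation changes) should yield a longer estimate
# than a quiet one, mirroring the city-vs-cafe bias described above.
rng = np.random.default_rng(0)
busy = np.cumsum(rng.normal(0, 1.0, size=(100, 64)), axis=0)
quiet = np.cumsum(rng.normal(0, 0.1, size=(100, 64)), axis=0)
```

Comparing `estimate_duration(busy)` with `estimate_duration(quiet)` reproduces, in miniature, the content-dependent bias the abstract reports.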

2017 ◽  
Author(s):  
Darren Rhodes

Time is a fundamental dimension of human perception, cognition and action, as the perception and cognition of temporal information is essential for everyday activities and survival. Innumerable studies have investigated the perception of time over the last 100 years, but the neural and computational bases for the processing of time remain unknown. First, we present a brief history of research and methods in time perception, then discuss the psychophysical approach to time and extant models of time perception, and highlight the inconsistencies between accounts that this review aims to bridge. Recent work has advocated a Bayesian approach to time perception. This framework has been applied to both duration and perceived timing, where prior expectations about when a stimulus might occur in the future (prior distribution) are combined with current sensory evidence (likelihood function) in order to generate the perception of temporal properties (posterior distribution). In general, these models predict that the brain uses temporal expectations to bias perception such that stimuli are ‘regularized’, i.e. stimuli look more like what has been seen before. Evidence for this framework has been found using human psychophysical testing (experimental methods to quantify behaviour in the perceptual system). Finally, an outlook for how these models can advance future research in temporal perception is discussed.
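For Gaussian priors and likelihoods, the prior-likelihood combination described above has a simple closed form: the posterior mean is a precision-weighted average. The sketch below is a generic textbook illustration, not a specific model from the review; the function name and the millisecond values are assumptions for the example.

```python
def bayes_timing(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior over event time with Gaussian sensory
    evidence; the posterior mean is a precision-weighted average."""
    w = prior_var / (prior_var + obs_var)            # weight on sensory evidence
    post_mean = (1 - w) * prior_mean + w * obs
    post_var = (prior_var * obs_var) / (prior_var + obs_var)
    return post_mean, post_var

# 'Regularization' toward the prior: a stimulus observed at 600 ms with
# noisy evidence is pulled toward the expected time of 500 ms.
mean, var = bayes_timing(500.0, 100.0, 600.0, 300.0)
```

Because the likelihood is noisier than the prior here, the posterior mean (525 ms) sits closer to the prior than to the observation, which is exactly the regularization bias these models predict.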


2008 ◽  
Vol 275 (1649) ◽  
pp. 2299-2308 ◽  
Author(s):  
M To ◽  
P.G Lovell ◽  
T Troscianko ◽  
D.J Tolhurst

Natural visual scenes are rich in information, and any neural system analysing them must piece together the many messages from large arrays of diverse feature detectors. It is known how threshold detection of compound visual stimuli (sinusoidal gratings) is determined by their components' thresholds. We investigate whether similar combination rules apply to the perception of the complex and suprathreshold visual elements in naturalistic visual images. Observers gave magnitude estimations (ratings) of the perceived differences between pairs of images made from photographs of natural scenes. Images in some pairs differed along one stimulus dimension such as object colour, location, size or blur. But, for other image pairs, there were composite differences along two dimensions (e.g. both colour and object-location might change). We examined whether the ratings for such composite pairs could be predicted from the two ratings for the respective pairs in which only one stimulus dimension had changed. We found a pooling relationship similar to that proposed for simple stimuli: Minkowski summation with exponent 2.84 yielded the best predictive power (r = 0.96), an exponent similar to that generally reported for compound grating detection. This suggests that theories based on detecting simple stimuli can encompass visual processing of complex, suprathreshold stimuli.
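The Minkowski pooling rule can be stated directly: a composite rating is predicted as the Minkowski norm of the single-dimension ratings. This is a minimal sketch using the exponent reported in the abstract; the function name and the example ratings are illustrative.

```python
def minkowski_sum(ratings, m=2.84):
    """Predict the rating for a composite change from the ratings of its
    single-dimension components, pooled with a Minkowski norm."""
    return sum(r ** m for r in ratings) ** (1.0 / m)

# e.g. a colour-only change rated 4 combined with a location-only change
# rated 3: the prediction lies between the larger component (winner-take-all,
# m -> infinity) and the linear sum (m = 1).
pred = minkowski_sum([4.0, 3.0])
```

With m = 2.84 the pooled prediction exceeds the larger component but falls well short of linear summation, the same intermediate behaviour reported for compound grating detection.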


2022 ◽  
Author(s):  
Yongrong Qiu ◽  
David A Klindt ◽  
Klaudia P Szatko ◽  
Dominic Gonschorek ◽  
Larissa Hoefling ◽  
...  

Neural system identification aims at learning the response function of neurons to arbitrary stimuli using experimentally recorded data, but typically does not leverage coding principles such as efficient coding of natural environments. Visual systems, however, have evolved to efficiently process input from the natural environment. Here, we present a normative network regularization for system identification models by incorporating, as a regularizer, the efficient coding hypothesis, which states that neural response properties of sensory representations are strongly shaped by the need to preserve most of the stimulus information with limited resources. Using this approach, we explored whether a system identification model can be improved by sharing its convolutional filters with those of an autoencoder which aims to efficiently encode natural stimuli. To this end, we built a hybrid model to predict the responses of retinal neurons to noise stimuli. This approach not only yielded higher performance than the stand-alone system identification model but also produced more biologically plausible filters. We found these results to be consistent for retinal responses to different stimuli and across model architectures. Moreover, our normatively regularized model performed particularly well in predicting responses of direction-of-motion sensitive retinal neurons. In summary, our results support the hypothesis that efficiently encoding environmental inputs can improve system identification models of early visual processing.
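The filter-sharing idea can be illustrated with a linear toy model: one shared filter bank feeds both a response-prediction readout (system identification) and a tied-weight reconstruction (the efficient-coding regularizer), trained on a combined loss. This is a drastically simplified stand-in for the convolutional hybrid model in the abstract; `W`, `readout`, the tied-weight decoder, and the trade-off `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_feat, n_cells = 64, 16, 8

W = rng.normal(0, 0.1, (n_feat, n_pix))        # shared 'filters'
readout = rng.normal(0, 0.1, (n_cells, n_feat))  # neuron-specific readout

def joint_loss(x, y, lam=0.5):
    """Hybrid objective: predict neural responses AND reconstruct the
    stimulus through the same shared filter bank; lam weights the
    efficient-coding (reconstruction) term against response prediction."""
    z = W @ x                # shared feature code used by both branches
    y_hat = readout @ z      # system-identification branch
    x_hat = W.T @ z          # tied-weight autoencoder branch
    return np.mean((y - y_hat) ** 2) + lam * np.mean((x - x_hat) ** 2)

x = rng.normal(size=n_pix)      # one stimulus frame
y = rng.normal(size=n_cells)    # recorded responses for that frame
loss = joint_loss(x, y)
```

Minimizing `joint_loss` over `W` and `readout` forces the shared filters to serve both tasks, which is the regularization effect the hybrid model exploits.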


2018 ◽  
Vol 6 (1) ◽  
pp. 90-123 ◽  
Author(s):  
Darren Rhodes

Time is a fundamental dimension of human perception, cognition and action, as the processing and cognition of temporal information is essential for everyday activities and survival. Innumerable studies have investigated the perception of time over the last 100 years, but the neural and computational bases for the processing of time remain unknown. Extant models of time perception are discussed before the proposition of a unified model of time perception that relates perceived event timing with perceived duration. The distinction between perceived event timing and perceived duration provides the current for navigating a river of contemporary approaches to time perception. Recent work has advocated a Bayesian approach to time perception. This framework has been applied to both duration and perceived timing, where prior expectations about when a stimulus might occur in the future (prior distribution) are combined with current sensory evidence (likelihood function) in order to generate the perception of temporal properties (posterior distribution). In general, these models predict that the brain uses temporal expectations to bias perception such that stimuli are ‘regularized’, i.e. stimuli look more like what has been seen before. As such, the synthesis of perceived timing and duration models is of theoretical importance for the field of timing and time perception.


2009 ◽  
Vol 26 (1) ◽  
pp. 35-49 ◽  
Author(s):  
THORSTEN HANSEN ◽  
KARL R. GEGENFURTNER

Abstract: Form vision is traditionally regarded as processing primarily achromatic information. Previous investigations into the statistics of color and luminance in natural scenes have claimed that luminance and chromatic edges are not independent of each other and that any chromatic edge most likely occurs together with a luminance edge of similar strength. Here we computed the joint statistics of luminance and chromatic edges in over 700 calibrated color images from natural scenes. We found that isoluminant edges exist in natural scenes and were not rarer than pure luminance edges. Most edges combined luminance and chromatic information but to varying degrees such that luminance and chromatic edges were statistically independent of each other. Independence increased along successive stages of visual processing from cones via postreceptoral color-opponent channels to edges. The results show that chromatic edge contrast is an independent source of information that can be linearly combined with other cues for the proper segmentation of objects in natural and artificial vision systems. Color vision may have evolved in response to the natural scene statistics to gain access to this independent information.


Author(s):  
Agnieszka Sowa

Sztuka czekania – percepcja czasu w powieści Mogador (2016) Martina Mosebacha [Art of Waiting – Perception of Time in Martin Mosebach's Novel Mogador (2016)] Martin Mosebach's novel Mogador confronts two cultures: the protagonist, a young, successful German bank employee, must spend several weeks in Morocco among the locals. He has to deal with foreign customs and a different rhythm of life among people who seem to have much more time and do not have to subject themselves to the pressure of the clock. The article focuses on depictions of time perception (e.g. during leisure time, meals, or waiting), which seems to be one of the most important differences between European and Moroccan culture. The article aims to describe the human longing for a dignified handling of time, for the so-called slow life, a yearning hidden beneath the anxiety and speed of the modern world.


2018 ◽  
Author(s):  
Niru Maheswaranathan ◽  
Lane T. McIntosh ◽  
Hidenori Tanaka ◽  
Satchel Grant ◽  
David B. Kastner ◽  
...  

Abstract: Understanding how the visual system encodes natural scenes is a fundamental goal of sensory neuroscience. We show here that a three-layer network model predicts the retinal response to natural scenes with an accuracy nearing the fundamental limits of predictability. The model’s internal structure is interpretable, in that model units are highly correlated with interneurons recorded separately and not used to fit the model. We further show the ethological relevance to natural visual processing of a diverse set of phenomena of complex motion encoding, adaptation and predictive coding. Our analysis uncovers a fast timescale of visual processing that is inaccessible directly from experimental data, showing unexpectedly that ganglion cells signal in distinct modes by rapidly (< 0.1 s) switching their selectivity for direction of motion, orientation, location and the sign of intensity. A new approach that decomposes ganglion cell responses into the contribution of interneurons reveals how the latent effects of parallel retinal circuits generate the response to any possible stimulus. These results reveal extremely flexible and rapid dynamics of the retinal code for natural visual stimuli, explaining the need for a large set of interneuron pathways to generate the dynamic neural code for natural scenes.


Author(s):  
N Seijdel ◽  
N Tsakmakidis ◽  
EHF De Haan ◽  
SM Bohte ◽  
HS Scholte

Abstract: Feedforward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans however suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations (‘routines’) that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds they appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object and background information. For more shallow networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.


2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Marta Suárez-Pinilla ◽  
Kyriacos Nikiforou ◽  
Zafeirios Fountas ◽  
Anil K. Seth ◽  
Warrick Roseboom

The neural basis of time perception remains unknown. A prominent account is the pacemaker-accumulator model, wherein regular ticks of some physiological or neural pacemaker are read out as time. Putative candidates for the pacemaker have been suggested in physiological processes (heartbeat), or dopaminergic mid-brain neurons, whose activity has been associated with spontaneous blinking. However, such proposals have difficulty accounting for observations that time perception varies systematically with perceptual content. We examined physiological influences on human duration estimates for naturalistic videos lasting 1–64 seconds using cardiac and eye recordings. Duration estimates were biased by the amount of change in scene content. Contrary to previous claims, heart rate and blinking were not related to duration estimates. Our results support a recent proposal that tracking change in perceptual classification networks provides a basis for human time perception, and suggest that previous assertions of the importance of physiological factors should be tempered.


2020 ◽  
Author(s):  
Long Tang ◽  
Toshimitsu Takahashi ◽  
Tamami Shimada ◽  
Masayuki Komachi ◽  
Noriko Imanishi ◽  
...  

Abstract: The position of any event in time could be in the present, past, or future. This temporal discrimination is vitally important in our daily conversations, but it remains elusive how the human brain distinguishes among the past, present, and future. To address this issue, we searched for neural correlates of presentness, pastness, and futurity, each of which is automatically evoked when we hear sentences such as “it is raining now,” “it rained yesterday,” or “it will rain tomorrow.” Here, we show that sentences that evoked “presentness” activated the bilateral precuneus more strongly than those that evoked “pastness” or “futurity.” Interestingly, this contrast was shared across native speakers of Japanese, English, and Chinese languages, which vary considerably in their verb tense systems. The results suggest that the precuneus serves as a key region that provides the origin (that is, the Now) of our time perception irrespective of differences in tense systems across languages.

