Prediction and preview strongly affect reading times but not skipping during natural reading

2021 ◽  
Author(s):  
Micha Heilbron ◽  
Jorie van Haren ◽  
Peter Hagoort ◽  
Floris P de Lange

In a typical text, readers look much longer at some words than at others and fixate some words multiple times, while skipping others altogether. Historically, researchers explained this variation via low-level visual or oculomotor factors, but today it is primarily explained via cognitive factors, such as how well words can be predicted from context or discerned from parafoveal preview. While the existence of these effects has been well established in experiments, the relative importance of prediction, preview and low-level factors for eye movement variation in natural reading is unclear. Here, we address this question using a deep neural network and a Bayesian ideal observer to model linguistic prediction and parafoveal preview from moment to moment in natural reading (n=104, 1.5 million words). Strikingly, neither prediction nor preview was important for explaining word skipping: the vast majority of skipping was explained by a simple oculomotor model. For reading times, by contrast, we found clear but independent contributions of both prediction and preview, with effect sizes matching those from controlled experiments. Together, these results challenge dominant models of eye movements in reading by showing that linguistic prediction and parafoveal preview are not important determinants of word skipping.
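The core idea of scoring each word's predictability from its preceding context can be illustrated with a toy stand-in for the study's deep-network predictions: a smoothed bigram language model whose surprisal (negative log probability of a word given the previous word) is low for well-predicted words. The corpus, sentence, and `alpha` smoothing constant below are illustrative assumptions, not materials from the study.

```python
import math
from collections import Counter

def bigram_surprisal(corpus_tokens, sentence, alpha=0.1):
    """Surprisal -log2 P(w | prev) under an add-alpha smoothed bigram model.

    A toy stand-in for deep-network word prediction: each word is scored
    by how predictable it is from its immediately preceding context.
    """
    vocab = set(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    unigrams = Counter(corpus_tokens[:-1])  # counts of bigram left contexts
    V = len(vocab)

    surprisals = []
    for prev, word in zip(sentence, sentence[1:]):
        p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * V)
        surprisals.append((word, -math.log2(p)))
    return surprisals

# Tiny illustrative corpus; "cat" frequently follows "the", so it gets
# a lower surprisal than the less predictable "sat".
corpus = "the cat sat on the mat and the cat slept".split()
scores = bigram_surprisal(corpus, "the cat sat".split())
```

In the study's setting, higher-surprisal words would be the ones predicted to attract longer reading times.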

Author(s):  
Seung-Geon Lee ◽  
Jaedeok Kim ◽  
Hyun-Joo Jung ◽  
Yoonsuck Choe

Estimating the relative importance of each sample in a training set has important practical and theoretical value, for example in importance sampling or curriculum learning. This focus on individual samples invokes the concept of sample-wise learnability: How easy is it to correctly learn each sample (cf. PAC learnability)? In this paper, we approach the sample-wise learnability problem within a deep learning context. We propose a measure of the learnability of a sample with a given deep neural network (DNN) model. The basic idea is to train the given model on the training set and, for each sample, aggregate the hits and misses across all training epochs. Our experiments show that the sample-wise learnability measure collected this way is highly linearly correlated across different DNN models (ResNet-20, VGG-16, and MobileNet), suggesting that such a measure can provide general insights into the data's properties. We expect our method to help develop better training curricula and to help us better understand the data itself.
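The hits-and-misses aggregation described above can be sketched as follows. The per-epoch correctness records are hypothetical, and scoring each sample by the fraction of epochs in which it was classified correctly is one plausible reading of the paper's measure, not its exact definition.

```python
import numpy as np

def samplewise_learnability(correct_per_epoch):
    """Aggregate per-epoch hits into one learnability score per sample.

    correct_per_epoch: (num_epochs, num_samples) array where entry [e, i]
    is 1 (hit) if sample i was predicted correctly at epoch e, else 0.
    Returns the fraction of epochs each sample was learned correctly.
    """
    hits = np.asarray(correct_per_epoch, dtype=float)
    return hits.mean(axis=0)

# Hypothetical correctness records: 4 samples tracked over 5 epochs.
record = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
])
scores = samplewise_learnability(record)  # -> [0.4, 0.8, 1.0, 1.0]
```

Samples with low scores (like the first column) would be the "hard" ones that a curriculum might defer to later stages of training.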


2020 ◽  
Vol 77 (12) ◽  
pp. 4089-4107
Author(s):  
Jannick Fischer ◽  
Johannes M. L. Dahl

In the recent literature, the notion has emerged that supercell tornado potential may mostly depend on the strength of the low-level updraft, with more than sufficient subtornadic vertical vorticity assumed to be present in the outflow. In this study, we use highly idealized simulations with heat sinks and sources to conduct controlled experiments, changing the cold pool or low-level updraft character independently. Multiple, time-dependent heat sinks are employed to produce a realistic near-ground cold pool structure. It is shown that both the cold pool and the updraft strength actively contribute to the tornado potential. Furthermore, there is a sharp transition between tornadic and nontornadic cases, indicating a bifurcation between these two regimes triggered by small changes in the heat source or sink magnitude. Moreover, larger updraft strength, updraft width, and cold pool deficit do not necessarily result in stronger maximum near-ground vertical vorticity. However, a stronger updraft or cold pool can both drastically reduce the time it takes for the first vortex to form.


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-5
Author(s):  
Huafeng Chen ◽  
Maosheng Zhang ◽  
Zhengming Gao ◽  
Yunhong Zhao

Current chaos-based methods for action recognition in videos rely on hand-crafted features, which limits recognition accuracy. In this paper, we extend ChaosNet into a deep neural network and apply it to action recognition. First, we build a deep ChaosNet for extracting action features. Then, we feed the features to a low-level LSTM encoder and a high-level LSTM encoder to obtain low-level and high-level encodings, respectively. The agent is a behavior recognizer that produces recognition results, and the manager is a hidden layer responsible for setting behavioral segmentation targets at the high level. Our experiments are conducted on two standard action datasets, UCF101 and HMDB51, and the results show that the proposed algorithm outperforms the state of the art.
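As a rough sketch of the two-level encoding pipeline, the snippet below uses mean pooling as a stand-in for the low- and high-level LSTM encoders and a nearest-prototype classifier as a stand-in for the recognizer. All names, shapes, and the pooling choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hierarchical_encode(frame_features, segment_len=4):
    """Two-level temporal encoding sketch.

    A low-level stage summarizes short segments of frames, and a
    high-level stage summarizes the segment codes into one clip code.
    Mean pooling stands in for the paper's LSTM encoders.
    """
    T, D = frame_features.shape
    n_seg = T // segment_len
    # Low level: one code per segment of consecutive frames.
    segments = frame_features[:n_seg * segment_len].reshape(n_seg, segment_len, D)
    low_codes = segments.mean(axis=1)   # shape (n_seg, D)
    # High level: one code summarizing the whole clip.
    high_code = low_codes.mean(axis=0)  # shape (D,)
    return low_codes, high_code

def recognize(high_code, class_prototypes):
    """Nearest-prototype classifier standing in for the recognizer."""
    dists = np.linalg.norm(class_prototypes - high_code, axis=1)
    return int(np.argmin(dists))

# Hypothetical clip: 8 frames of 3-dimensional features.
frames = np.full((8, 3), 2.0)
low, high = hierarchical_encode(frames)
label = recognize(high, np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]]))
```

The point of the sketch is only the data flow: frames to segment codes to a clip code to a class decision.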


2020 ◽  
Author(s):  
Andrew Francl ◽  
Josh H. McDermott

Mammals localize sounds using information from their two ears. Localization in real-world conditions is challenging, as echoes provide erroneous information and noises mask parts of target sounds. To better understand real-world localization, we equipped a deep neural network with human ears and trained it to localize sounds in a virtual environment. The resulting model localized accurately in realistic conditions with noise and reverberation, outperforming alternative systems that lacked human ears. In simulated experiments, the network exhibited many features of human spatial hearing: sensitivity to monaural spectral cues and to interaural time and level differences, integration across frequency, and biases for sound onsets. But when trained in unnatural environments lacking reverberation, noise, or natural sounds, its performance characteristics deviated from those of humans. The results show how biological hearing is adapted to the challenges of real-world environments and illustrate how artificial neural networks can extend traditional ideal observer models to real-world domains.
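Two of the classic cues named above, interaural time and level differences, can be estimated from a binaural signal with elementary signal processing. The sketch below is not the authors' model (which learns such sensitivity from data); it uses cross-correlation for the ITD and a power ratio in dB for the ILD, on a hypothetical click stimulus.

```python
import numpy as np

def interaural_cues(left, right, sr):
    """Estimate interaural time and level differences from a binaural signal.

    ITD: lag (in seconds) that best aligns the two ear signals, found via
    cross-correlation; the lag sign is positive when the sound reaches the
    right ear first under numpy's correlation convention used here.
    ILD: ratio of the two ear signals' energies, in dB.
    """
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    itd = lag / sr
    ild = 10 * np.log10(np.sum(left**2) / np.sum(right**2))
    return itd, ild

# Hypothetical stimulus: the right ear hears the same click 5 samples
# later and at half amplitude, as if the source were on the left.
sr = 48000
click = np.zeros(100)
click[20] = 1.0
left = click
right = np.roll(click, 5) * 0.5
itd, ild = interaural_cues(left, right, sr)
# itd is negative here (sound reached the left ear first); ild is positive.
```

In the virtual-environment setting of the paper, such cues arise naturally from simulating sources at different azimuths relative to the two ears.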


Author(s):  
David T. Wang ◽  
Brady Williamson ◽  
Thomas Eluvathingal ◽  
Bruce Mahoney ◽  
Jennifer Scheler
