A computational model for task inference in visual search

2013 ◽  
Vol 13 (3) ◽  
pp. 29-29 ◽  
Author(s):  
A. Haji-Abolhassani ◽  
J. J. Clark


2020 ◽  
Author(s):  
Julian Jara-Ettinger ◽  
Paula Rubio-Fernandez

A foundational assumption of human communication is that speakers ought to say as much as necessary, but no more. How speakers determine what is necessary in a given context, however, is unclear. In studies of referential communication, this expectation is often formalized as the idea that speakers should construct reference by selecting the shortest, sufficiently informative, description. Here we propose that reference production is, instead, a process whereby speakers adopt listeners’ perspectives to facilitate their visual search, without concern for utterance length. We show that a computational model of our proposal predicts graded acceptability judgments with quantitative accuracy, systematically outperforming brevity models. Our model also explains crosslinguistic differences in speakers’ propensity to over-specify in different visual contexts. Our findings suggest that reference production is best understood as driven by a cooperative goal to help the listener understand the intended message, rather than by an egocentric effort to minimize utterance length.
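A toy contrast between the two accounts may make the distinction concrete. In the sketch below (Python, not the authors' actual model), a brevity-based speaker scores candidate descriptions by word count among those that uniquely identify the target, while a search-facilitation speaker scores them by the expected number of objects the listener must fixate, under the illustrative assumption that colour can be checked in peripheral vision but object category cannot. The scene, descriptions, and costs are all hypothetical.

```python
# Toy visual scene: each object has a colour and a category.
scene = [
    {"colour": "red",  "category": "mug"},
    {"colour": "blue", "category": "plate"},
    {"colour": "red",  "category": "bowl"},
    {"colour": "blue", "category": "cup"},
    {"colour": "red",  "category": "shoe"},
]
target = scene[0]

# Candidate descriptions of the target, as feature-value pairs.
candidates = [
    {"category": "mug"},                    # "the mug"
    {"colour": "red", "category": "mug"},   # "the red mug"
]

def matches(obj, desc):
    return all(obj[key] == value for key, value in desc.items())

def is_sufficient(desc):
    """A description is sufficient only if it singles out the target."""
    return [obj for obj in scene if matches(obj, desc)] == [target]

def brevity_cost(desc):
    """Shortest sufficient description: cost = number of content words."""
    return len(desc) if is_sufficient(desc) else float("inf")

def listener_search_cost(desc):
    """Expected number of objects the listener must fixate.
    Illustrative assumption: colour filters the scene pre-attentively,
    whereas category requires fixating each remaining candidate."""
    pool = scene
    if "colour" in desc:
        pool = [obj for obj in pool if obj["colour"] == desc["colour"]]
    # On average, the target is found halfway through the remaining pool.
    return (len(pool) + 1) / 2

for desc in candidates:
    print(desc, "brevity cost:", brevity_cost(desc),
          "listener search cost:", listener_search_cost(desc))
```

In this toy scene the brevity account prefers "the mug" (one word suffices), whereas the search-facilitation account prefers the redundant "the red mug" because colour shrinks the listener's visual search set.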


2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Alejandro Lleras ◽  
Zhiyuan Wang ◽  
Anna Madison ◽  
Simona Buetti

Recently, Wang, Buetti and Lleras (2017) developed an equation to predict search performance in heterogeneous visual search scenes (i.e., multiple types of non-target objects simultaneously present) based on parameters observed when participants perform search in homogeneous scenes (i.e., when all non-target objects are identical to one another). The equation was based on a computational model in which every item in the display is processed with unlimited capacity and independently of all other items, with the goal of determining whether the item is likely to be a target or not. The model was tested in two experiments using real-world objects. Here, we extend those findings by testing the predictive power of the equation with simpler objects. Further, we compare the model’s performance under two stimulus arrangements: spatially intermixed displays (items randomly placed around the scene) and spatially segregated displays (identical items presented near each other). This comparison allowed us to isolate and quantify the facilitatory effect of processing displays that contain identical items (homogeneity facilitation), a factor that improves visual search performance above and beyond target-distractor dissimilarity. The results suggest that homogeneity facilitation effects in search arise from local item-to-item interactions (rather than from rejecting items as “groups”) and that the strength of those interactions may be determined by stimulus complexity, with simpler stimuli producing stronger interactions and thus stronger homogeneity facilitation effects.
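As a rough illustration of this style of prediction (a sketch in the spirit of, but not necessarily identical to, the published equation), the snippet below assumes that each distractor type contributes a logarithmic cost to reaction time, with a per-type slope estimated from homogeneous displays. The baseline, slopes, and item counts are hypothetical.

```python
import math

def predict_heterogeneous_rt(baseline_ms, type_slopes, type_counts):
    """Predict search RT in a heterogeneous display from parameters
    measured in homogeneous displays.

    baseline_ms  -- intercept (RT with no distractors), in ms
    type_slopes  -- dict mapping distractor type -> logarithmic slope (ms),
                    estimated from homogeneous search with that type
    type_counts  -- dict mapping distractor type -> number of items
    """
    rt = baseline_ms
    for dtype, n_items in type_counts.items():
        rt += type_slopes[dtype] * math.log(n_items + 1)
    return rt

# Hypothetical parameters: larger slopes reflect distractors that are
# more similar to the target.
slopes = {"similar_distractor": 30.0, "dissimilar_distractor": 12.0}
counts = {"similar_distractor": 8, "dissimilar_distractor": 16}
print(predict_heterogeneous_rt(450.0, slopes, counts))
```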


2021 ◽  
Vol 11 (2) ◽  
pp. 1-25
Author(s):  
Moritz Spiller ◽  
Ying-Hsang Liu ◽  
Md Zakir Hossain ◽  
Tom Gedeon ◽  
Julia Geissler ◽  
...  

Information visualizations are an efficient means of supporting users in understanding large amounts of complex, interconnected data; user comprehension, however, depends on individual factors such as cognitive abilities. The research literature provides evidence that user-adaptive information visualizations positively impact users’ performance in visualization tasks. This study contributes toward the development of a computational model that predicts users’ success in visual search tasks from eye gaze data and can thereby drive such user-adaptive systems. State-of-the-art deep learning models for time series classification were trained on sequential eye gaze data obtained from 40 study participants’ interaction with a circular and an organizational graph. The results suggest that such models yield higher accuracy than a baseline classifier and than models previously used for this purpose. In particular, a Multivariate Long Short-Term Memory Fully Convolutional Network shows encouraging performance for use in online user-adaptive systems. Given this finding, such a computational model can infer users’ need for support during interaction with a graph and trigger appropriate interventions in user-adaptive information visualization systems. This also simplifies the design of such systems, since further interaction data such as mouse clicks are not required.
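For orientation, the sketch below is a minimal PyTorch approximation of an LSTM-FCN-style classifier for fixed-length multivariate gaze sequences: a recurrent branch and a convolutional branch are pooled and concatenated before a linear classification head. The feature count, channel sizes, kernel sizes, and the binary "success" label are assumptions, not the study's exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class GazeLSTMFCN(nn.Module):
    """Minimal LSTM-FCN-style classifier for multivariate gaze sequences.

    Input: (batch, time, features), e.g. fixation x/y, duration, pupil size.
    Output: logits over 2 classes (task success vs. need for support).
    """
    def __init__(self, n_features=4, n_classes=2, hidden=64):
        super().__init__()
        # Recurrent branch summarizes the order of the sequence.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Convolutional branch extracts local temporal patterns.
        self.fcn = nn.Sequential(
            nn.Conv1d(n_features, 128, kernel_size=7, padding=3),
            nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2),
            nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1),
            nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(hidden + 128, n_classes)

    def forward(self, x):                      # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)             # h_n: (1, batch, hidden)
        lstm_feat = h_n[-1]                    # (batch, hidden)
        conv = self.fcn(x.permute(0, 2, 1))    # (batch, 128, time)
        conv_feat = conv.mean(dim=2)           # global average pooling
        return self.head(torch.cat([lstm_feat, conv_feat], dim=1))

# Example: 8 sequences of 200 time steps with 4 gaze features each.
model = GazeLSTMFCN()
logits = model(torch.randn(8, 200, 4))
print(logits.shape)  # torch.Size([8, 2])
```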


Author(s):  
Michelle A. Vincow ◽  
Christopher D. Wickens

Subjects viewed a series of alphanumeric tables containing information regarding the attributes (cost, amount, etc.) of different objects (utilities such as gas and electricity). They answered questions that required them to locate specific pieces of information in the table, perform simple integration between pieces of information, or perform complex integration (division, multiplication); the information needed for a question was located either within a single table panel (close separation) or across panels (distant separation). The table was organized either by objects within attributes or by attributes within objects. Table organization had no effect on response time or accuracy. Accuracy did suffer with increased separation, but only for the complex integration questions, a finding that implicates interference between visual search and the working memory demands of information integration. A computational model of the mental operations required for task performance accounted for 69% of the variance in response time and provides a useful basis for developing more elaborate models of display layout.
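A minimal sketch of an additive model in this spirit: response time is predicted as a weighted sum of counts of elementary mental operations, with per-operation time costs fitted by least squares. The operation categories, counts, and response times below are made up for illustration and are not the data or parameters of the original model.

```python
import numpy as np

# Rows: question types; columns: hypothetical counts of elementary
# operations per question [visual search steps, cross-panel shifts,
# arithmetic operations].
ops = np.array([
    [1, 0, 0],   # simple look-up, close separation
    [2, 0, 1],   # simple integration, close separation
    [2, 1, 1],   # simple integration, distant separation
    [3, 0, 2],   # complex integration, close separation
    [3, 2, 2],   # complex integration, distant separation
])
# Observed mean response times (s) for each question type (made-up data).
rt = np.array([2.1, 3.4, 3.9, 5.2, 6.8])

# Fit per-operation time costs plus an intercept by least squares.
X = np.hstack([np.ones((ops.shape[0], 1)), ops])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
pred = X @ coef

ss_res = np.sum((rt - pred) ** 2)
ss_tot = np.sum((rt - rt.mean()) ** 2)
print("per-operation costs:", coef[1:], "intercept:", coef[0])
print("variance accounted for (R^2):", 1 - ss_res / ss_tot)
```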


2021 ◽  
Vol 17 (12) ◽  
pp. e1009662
Author(s):  
Michael R. Traner ◽  
Ethan S. Bromberg-Martin ◽  
Ilya E. Monosov

Classic foraging theory predicts that humans and animals aim to gain maximum reward per unit time. However, in standard instrumental conditioning tasks individuals adopt an apparently suboptimal strategy: they respond slowly when the expected value is low. This reward-related bias is often explained as reduced motivation in response to low rewards. Here we present evidence that this behavior is associated with a complementary increase in motivation to search the environment for alternatives. We trained monkeys to search for reward-related visual targets in environments with different values. We found that the reward-related bias scaled with environment value, was consistent with persistent searching after the target was already found, and was associated with increased exploratory gaze to objects in the environment. A novel computational model of foraging suggests that this search strategy could be adaptive in naturalistic settings where both environments and the objects within them provide partial information about hidden, uncertain rewards.
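A toy rate-maximization calculation, in the spirit of classic foraging theory rather than the authors' model, illustrates why extra search can pay off more in richer environments: taking a low-value target immediately is compared against spending additional time scanning for an alternative drawn from the environment's average value. All quantities are hypothetical.

```python
def reward_rate(reward, time_s):
    """Classic foraging currency: expected reward per unit time."""
    return reward / time_s

def rate_if_respond(r_target, t_handle):
    """Take the current target immediately."""
    return reward_rate(r_target, t_handle)

def rate_if_search(r_target, t_handle, r_env, t_search, p_better):
    """Spend extra time scanning the environment; with probability
    p_better an alternative worth the environment's average value is
    found and taken instead of the current target."""
    expected_reward = p_better * r_env + (1 - p_better) * r_target
    return reward_rate(expected_reward, t_handle + t_search)

# Hypothetical values: the same low-value target in poor vs. rich environments.
for env_value in (0.2, 1.0, 2.0):
    respond = rate_if_respond(r_target=0.3, t_handle=1.0)
    search = rate_if_search(r_target=0.3, t_handle=1.0,
                            r_env=env_value, t_search=0.5, p_better=0.6)
    print(f"env value {env_value}: respond {respond:.2f} vs. search {search:.2f}")
```

With these made-up numbers, extra search lowers the reward rate in the poor environment but raises it in the richer ones, which is one way to see how a reward-related slowing that scales with environment value could reflect adaptive exploration rather than reduced motivation.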


2017 ◽  
Vol 117 (1) ◽  
pp. 79-92 ◽  
Author(s):  
Tarkeshwar Singh ◽  
Julius Fridriksson ◽  
Christopher M. Perry ◽  
Sarah C. Tryon ◽  
Angela Ross ◽  
...  

Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance.

NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage.
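Purely as an illustration of how such processes might be combined (not the authors' model), the sketch below scores candidate locations for the next fixation in a Trail Making-like display by weighting three terms: a spatial planning preference for short saccades, a working memory penalty for items already found, and a peripheral processing bonus for items whose identity is resolvable from the current fixation. The weights, eccentricity threshold, and display are hypothetical; lowering a weight mimics a selective deficit.

```python
import math
import random

def next_fixation(current, candidates, found, w_plan, w_memory, w_periphery):
    """Score candidate locations for the next fixation by combining
    three illustrative processes."""
    def score(item):
        dist = math.dist(current, item["pos"])
        # Spatial planning: prefer short saccades / efficient paths.
        planning = -w_plan * dist
        # Working memory: avoid re-visiting items already connected.
        memory = -w_memory if item["id"] in found else 0.0
        # Peripheral processing: the item's identity is only available
        # if it falls within a (hypothetical) useful field of view.
        periphery = w_periphery if dist < 150.0 else 0.0
        return planning + memory + periphery + random.gauss(0, 0.1)
    return max(candidates, key=score)

# Hypothetical display: ten items with ids and pixel positions.
items = [{"id": i, "pos": (random.uniform(0, 800), random.uniform(0, 600))}
         for i in range(10)]
already_found = {0, 1, 2}
choice = next_fixation((400, 300), items, already_found,
                       w_plan=0.01, w_memory=5.0, w_periphery=1.0)
print("next fixation on item", choice["id"])
```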


2003 ◽  
Vol 27 (2) ◽  
pp. 299-312 ◽  
Author(s):  
Marc Pomplun ◽  
Eyal M. Reingold ◽  
Jiye Shen
