The Predation Game: Does dividing attention affect patterns of human foraging?

2020 ◽  
Author(s):  
Ian Michael Thornton ◽  
Jérôme Tagu ◽  
Sunčica Zdravković ◽  
Arni Kristjansson

Attention is known to play an important role in shaping the behaviour of both human and animal foragers. Here, in two experiments, we built on our previous interactive tasks to create an online foraging game for studying divided attention in human participants exposed to the (simulated) risk of predation. Participants used a “sheep” object to collect items from different target categories randomly distributed across the display. Each trial also contained “wolf” objects, whose movement was inspired by classic studies of multiple object tracking. For “hunted” participants, collision with any wolf terminated the trial, making the need to monitor and avoid the predators crucial to success. For “distracted” participants, the wolf objects did not interact with the sheep and could effectively be ignored. In Experiment 1, we used an established Feature/Conjunction manipulation to vary the difficulty of target selection. In Experiment 2, we varied the value and the prevalence of target items to examine potential trade-offs between risk and reward. In both experiments, we found very clear differences between the foraging patterns of hunted versus distracted participants. We were also able to replicate basic foraging patterns associated with target complexity and reward, respectively. Unexpectedly, hunted participants did not show a tendency to restrict their search to a single category, the hallmark of attention-limited foraging. Rather, they were more likely to select from all available categories compared to the distracted participants. Such behaviour is consistent with the idea that risk of predation in our task modulated levels of alertness/arousal, counteracting the costs of having to both select targets and monitor for wolves. While the effects of phasic changes in alertness and arousal are well captured in standard capacity models of attention, and are also central to recent attempts to explain individual differences in human performance, they do not as yet play a major role in the attention models applied to either human or animal foraging.
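
The foraging-pattern measures referred to here (runs of same-category selections and the number of switches between categories) can be computed directly from the sequence of selected targets. The sketch below is a minimal illustration with invented data; the function names and sequences are not taken from the study.

```python
from itertools import groupby

def foraging_runs(selections):
    """Split a sequence of selected target categories into runs
    (maximal stretches of same-category selections)."""
    return [len(list(group)) for _, group in groupby(selections)]

def switch_count(selections):
    """Number of transitions between different target categories."""
    return sum(1 for a, b in zip(selections, selections[1:]) if a != b)

# Hypothetical selection sequences for two target categories, 'A' and 'B'.
exhaustive = ["A"] * 10 + ["B"] * 10   # one switch: the attention-limited pattern
interleaved = ["A", "B"] * 10          # frequent switching across categories

for label, seq in [("exhaustive", exhaustive), ("interleaved", interleaved)]:
    print(label, "runs:", foraging_runs(seq), "switches:", switch_count(seq))
```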

2018 ◽  
Author(s):  
Valter Prpic ◽  
Isabelle Kniestedt ◽  
Elizabeth Camilleri ◽  
Marcello Gómez Maureira ◽  
Arni Kristjansson ◽  
...  

Traditional search tasks have taught us much about vision and attention. Recently, several groups have begun to use multiple-target search to explore more complex and temporally extended “foraging” behaviour. Many of these new foraging tasks, however, maintain the simplified 2D displays and response demands associated with traditional, single-target visual search. In this respect, they may fail to capture important aspects of real-world search or foraging behaviour. In the current paper, we present a serious game for mobile platforms in which human participants play the role of an animal foraging for food in a simulated 3D environment. Game settings can be adjusted so that, for example, custom target and distractor items can be uploaded, and task parameters, such as the number of target categories or the target/distractor ratio, are easy to modify. We demonstrate how the app can be used to address specific research questions by conducting two human foraging experiments. Our results indicate that in this 3D environment, a standard feature/conjunction manipulation does not lead to a reduction in foraging runs, as it is known to do in simple, 2D foraging tasks. Differences in foraging behaviour are discussed in terms of environment structure, task demands and attentional constraints.
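
To illustrate the kind of adjustable settings the abstract mentions, here is a hypothetical parameter block. The names, defaults, and derivation are illustrative only and are not the app's actual configuration format.

```python
from dataclasses import dataclass

@dataclass
class ForagingTaskConfig:
    """Illustrative task parameters for a multi-target foraging session."""
    n_target_categories: int = 2            # e.g. feature vs conjunction targets
    targets_per_category: int = 20
    target_distractor_ratio: float = 0.5    # targets / (targets + distractors)
    environment: str = "3d"                 # simulated 3D environment vs flat 2D display

    @property
    def n_distractors(self) -> int:
        # Derive the distractor count implied by the requested target/distractor ratio.
        n_targets = self.n_target_categories * self.targets_per_category
        return round(n_targets * (1 - self.target_distractor_ratio)
                     / self.target_distractor_ratio)

config = ForagingTaskConfig()
print(config.n_distractors)  # 40 targets at a ratio of 0.5 -> 40 distractors
```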


2019 ◽  
Author(s):  
Ian Michael Thornton ◽  
Claudio de’Sperati ◽  
Arni Kristjansson

In a previous series of papers, we have used an iPad task to explore how human participants “forage” through static displays containing multiple targets from two categories. A main finding was that when demands on attention were increased, foraging patterns tended to shift from random category selection to exhaustive category selection. In the current work, we created displays on a vertically oriented touch-screen containing identical target and distractor categories that could either be in motion or at rest. In separate blocks, participants selected target items using different modalities, specifically: a) a mouse, b) a touchscreen, or c) an infrared hand tracker. Selected targets were always cancelled via a common button press response. Our interest was whether foraging patterns would be the same as those seen with our iPad task. Although the different selection modalities varied considerably in terms of rated familiarity and difficulty of use, they had only a minimal effect on patterns of foraging. There was a very consistent reduction in the number of category switches when attentional load was increased. However, the tendency to use exhaustive runs during high attention conditions was much reduced compared to the iPad task, particularly with dynamic displays. We suggest that this pattern is a consequence of generally slowed response times compared to the iPad task. These findings indicate that in addition to capacity limits, temporal constraints are likely to be an important determinant of foraging patterns in humans. We introduce the term foraging tempo to capture this latter notion and to emphasize the probable role played by the overall pace of the regular, repetitive selections required during multi-target search tasks.
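
One simple way to operationalise foraging tempo is the pace of successive selections, e.g. the mean inter-selection interval. The minimal sketch below uses hypothetical timestamps, not data from the study.

```python
def foraging_tempo(selection_times_ms):
    """Mean inter-selection interval in ms: an index of the overall pace
    of the repeated selections made during multi-target foraging."""
    intervals = [b - a for a, b in zip(selection_times_ms, selection_times_ms[1:])]
    return sum(intervals) / len(intervals)

# Hypothetical selection timestamps for a fast (touch-like) and a slower modality.
fast_device = [0, 450, 930, 1400, 1880]
slow_device = [0, 900, 1850, 2780, 3700]

print(f"fast tempo: {foraging_tempo(fast_device):.0f} ms/selection")
print(f"slow tempo: {foraging_tempo(slow_device):.0f} ms/selection")
```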


Author(s):  
Richard Steinberg ◽  
Raytheon Company ◽  
Alice Diggs ◽  
Raytheon Company ◽  
Jade Driggs

Verification and validation (V&V) for human performance models (HPMs) can be likened to building a house with no bricks, since it is difficult to obtain metrics to validate a model when the system is still in development. HPMs are effective for performing trade-offs between human system design factors, including the number of operators needed, the role of automated versus operator tasks, and the crew member task responsibilities required to operate a system. On a recent government contract, our team used a human performance model to provide analysis beyond traditional trade studies: we verified the contractually mandated staff size for operating the system. This task demanded that the model have sufficient fidelity to support high-confidence staffing decisions, which in turn required a method for verifying and validating the model and its results to ensure that it accurately reflected the real world. The situation posed a dilemma because there was no actual system from which to gather real data to validate the model. Validating human performance models is challenging precisely because they support design decisions prior to system development. For example, crew models typically inform the design, staffing needs, and the requirements for each operator's user interface before development begins. This paper discusses a successful case study of how our team met these V&V challenges with the US Air Force model accreditation authority and accredited our human performance model with sufficient fidelity for requirements testing on an Air Force Command and Control program.
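
The staffing trade-off described above can be illustrated with a toy workload check: compare the manual task demand per hour against the capacity of a candidate crew size. This is only a sketch with invented task names, rates, and thresholds, not the model used in the study.

```python
# Toy crew-sizing check: does a candidate number of operators keep average
# utilisation below a workload threshold? All figures are illustrative.
tasks = {
    "track management": {"rate_per_hour": 30, "minutes_each": 1.5, "automated": False},
    "comms handling":   {"rate_per_hour": 20, "minutes_each": 2.0, "automated": False},
    "status logging":   {"rate_per_hour": 60, "minutes_each": 0.5, "automated": True},
}

def crew_utilisation(tasks, n_operators):
    """Average fraction of each operator's hour consumed by manual (non-automated) tasks."""
    manual_minutes = sum(t["rate_per_hour"] * t["minutes_each"]
                         for t in tasks.values() if not t["automated"])
    return manual_minutes / (60 * n_operators)

for crew in (1, 2, 3):
    u = crew_utilisation(tasks, crew)
    verdict = "OK" if u <= 0.7 else "overloaded"
    print(f"{crew} operator(s): utilisation {u:.0%} -> {verdict}")
```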


2005 ◽  
Vol 10 (3) ◽  
pp. 175-186 ◽  
Author(s):  
Carol Sansone ◽  
Dustin B. Thoman

Abstract. Typically, models of self-regulation include motivation in terms of goals. Motivation is proposed to fluctuate according to how much individuals value goals and expect to attain them. Missing from these models is the motivation that arises from the process of goal-pursuit. We suggest that an important aspect of self-regulation is monitoring and regulating our motivation, not just our progress toward goals. Although we can regulate motivation by enhancing the value or expectancy of attaining the outcome, we suggest that regulating the interest experience can be just as, if not more, powerful. We first present our model, which integrates self-regulation of interest within the goal-striving process. We then briefly review existing evidence, distinguishing between two broad classes of potential interest-enhancing strategies: intrapersonal and interpersonal. For each class of strategies we note what is known about developmental and individual differences in whether and how these kinds of strategies are used. We also discuss implications, including the potential trade-offs between regulating interest and performance, and how recognizing the role of the interest experience may shed new light on earlier research in domains such as close relationships, psychiatric disorders, and females' choice to drop out of math and science.


Author(s):  
Alan F. Stokes ◽  
James A. Pharmer ◽  
Aysenil Belger

Attentional biases in stressed or overworked radar operators (airborne or in Combat Information Centers, etc.) may have important operational implications. This study examined the effects of workload and non-workload-related stress on salience bias in a screen-based target selection and engagement task. Results in the control condition confirmed that an appreciable baseline salience bias existed. Moreover, in the non-task-related stress condition (noise/anxiety), a significant increase in salience bias was observed. Elevated workload, in contrast, was associated with no significant change in salience bias. Overall, the results showed stable individual differences in salience bias and suggested that non-workload-related stress influenced ‘high bias’ individuals proportionately more than ‘low bias’ individuals, an outcome with potential implications for selection. Subjects were also significantly biased toward the left hemispace, a powerful effect that remained even after the experiment was repeated using subjects’ left instead of right hands.


Paleobiology ◽  
1979 ◽  
Vol 5 (2) ◽  
pp. 107-125 ◽  
Author(s):  
Jennifer A. Kitchell

The foraging paradigm of trace fossil theory has historically accorded random behavior to non-food-limited deposit-feeders and non-random behavior to food-limited feeders. A series of randomness measures derived from empirical modeling, simulation modeling, stochastic modeling and probability theory applied to foraging patterns observed in deep-sea bottom photographs from the Arctic and Antarctic yielded a behavioral continuum of increasing non-randomness. A linear regression of trace positions along the continuum against bathymetric data did not substantiate the optimal foraging efficiency-depth dependence model of trace fossil theory, except that all traces exhibited a greater optimization than that of simulated random foraging. It is hypothesized that optimization as evidenced by non-random foraging strategies represents maximization of the cost/benefit ratio of resource exploitation to risk of predation and that individual foraging patterns reflect an exploration response to the morphometry of a patchily distributed food resource. Differential predation and competition may account for the co-occurrence of random and non-random strategies within the same bathymetric zone.
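
One simple way to illustrate the comparison between an observed trace and simulated random foraging is to score how efficiently a path covers new ground, e.g. the proportion of steps landing on previously unvisited grid cells. This toy sketch only illustrates the idea and is not one of the randomness measures used in the paper.

```python
import random

def coverage_efficiency(path):
    """Fraction of visited cells that are new: systematic (non-random)
    foraging revisits fewer cells than a random walk."""
    visited = set()
    new_cells = 0
    for cell in path:
        if cell not in visited:
            new_cells += 1
            visited.add(cell)
    return new_cells / len(path)

def random_walk(n_steps, seed=0):
    """Simulated random foraging: unit steps in a random cardinal direction."""
    rng = random.Random(seed)
    x = y = 0
    path = [(x, y)]
    for _ in range(n_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def meandering(n_steps, width=10):
    """A systematic back-and-forth path, a highly non-random strategy."""
    path = []
    for i in range(n_steps + 1):
        row, col = divmod(i, width)
        path.append((col if row % 2 == 0 else width - 1 - col, row))
    return path

n = 200
print("random walk efficiency:", round(coverage_efficiency(random_walk(n)), 2))
print("meandering efficiency: ", round(coverage_efficiency(meandering(n)), 2))
```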


2012 ◽  
Vol 24 (7) ◽  
pp. 1806-1821
Author(s):  
Bernard M. C. Stienen ◽  
Konrad Schindler ◽  
Beatrice de Gelder

Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.


1987 ◽  
Vol 65 (4) ◽  
pp. 803-811 ◽  
Author(s):  
Lawrence M. Dill

It is virtually impossible to predict the next 25 years of research in aquatic ecology and behaviour with any accuracy. However, by identifying those areas that are the current frontiers of the discipline it is possible to guess at the most likely research developments over the next decade. From my own biased perspective, the research programme most likely to be productive in the near future is that of behavioural ecology, which studies, among other things, animal decision making in an ecological context. I focus on situations in which animals must make decisions under conflicting objectives, e.g., to simultaneously maximize net energy intake while minimizing risk of predation. New data on guppies (Poecilia reticulata) are presented and the recent literature is reviewed to support the notion that animals in such situations behave so as to maximize fitness. Habitat choices, ontogenetic habitat shifts, and the phenomena of vertical migration and downstream drift are beginning to be considered in this general evolutionary framework, with novel results, and this trend will undoubtedly continue. Extension of the logic of trade-offs to the community level leads to a number of new insights about the processes that shape community structure, and affirms the need for aquatic ecologists of the future to have a thorough understanding of animal behaviour, and a working knowledge of such tools of evolutionary ecology as optimality reasoning and game theory.
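
The foraging-under-risk decision described here is often formalised as choosing the option that maximises a fitness proxy combining energy gain and mortality risk; one common rule of thumb in behavioural ecology is to minimise the ratio of mortality rate to growth rate. The toy example below uses made-up habitat values and is only a sketch of that reasoning, not an analysis from the paper.

```python
# Toy habitat-choice example: each habitat offers a net energy intake rate
# (growth) and a predation mortality rate. A simple fitness proxy is to
# minimise mortality per unit growth (the "mu/g" rule). Values are invented.
habitats = {
    "open water":  {"growth": 2.0, "mortality": 0.10},
    "weed bed":    {"growth": 1.2, "mortality": 0.03},
    "stream edge": {"growth": 0.8, "mortality": 0.01},
}

def mu_over_g(habitat):
    return habitat["mortality"] / habitat["growth"]

best = min(habitats, key=lambda name: mu_over_g(habitats[name]))
for name, habitat in habitats.items():
    print(f"{name:12s} mu/g = {mu_over_g(habitat):.4f}")
print("chosen habitat:", best)
```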


2015 ◽  
Vol 282 (1814) ◽  
pp. 20151050 ◽  
Author(s):  
Nathan R. Senner ◽  
Jesse R. Conklin ◽  
Theunis Piersma

Phenotypic differences among individuals can arise during any stage of life. Although several distinct processes underlying individual differences have been defined and studied (e.g. parental effects, senescence), we lack an explicit, unified perspective for understanding how these processes contribute separately and synergistically to observed variation in functional traits. We propose a conceptual framework based on a developmental view of life-history variation, linking each ontogenetic stage with the types of individual differences originating during that period. In our view, the salient differences among these types are encapsulated by three key criteria: timing of onset, when fitness consequences are realized, and potential for reversibility. To fill a critical gap in this framework, we formulate a new term to refer to individual differences generated during adulthood—reversible state effects. We define these as ‘reversible changes in a functional trait resulting from life-history trade-offs during adulthood that affect fitness’, highlighting how the adult phenotype can be repeatedly altered in response to environmental variation. Defining individual differences in terms of trade-offs allows explicit predictions regarding when and where fitness consequences should be expected. Moreover, viewing individual differences in a developmental context highlights how different processes can work in concert to shape phenotype and fitness, and lays a foundation for research linking individual differences to ecological and evolutionary theory.


2010 ◽  
Vol 65 (9) ◽  
pp. 930-932 ◽  
Author(s):  
Carin Perilloux ◽  
David M. G. Lewis ◽  
Cari D. Goetz ◽  
Diana S. Fleischman ◽  
Judith A. Easton ◽  
...  
