Cognitive processes behind the shooter bias: Dissecting response bias, motor preparation and information accumulation

2021 ◽  
Author(s):  
Marius Frenken ◽  
Wanja Hemmerich ◽  
David Izydorczyk ◽  
Sophie Elisabeth Scharf ◽  
Roland Imhoff

A rich body of research points to racial biases in so-called police officer dilemma tasks: participants are generally faster and less error-prone when deciding to “shoot” (vs. not “shoot”) Black (vs. White) targets. In three experimental (and two supplemental) studies (total N = 914), we aimed to examine the cognitive processes underlying these findings under fully standardized conditions. To dissect a-priori decision bias, biased information processing, and motor preparation, we rendered video sequences of virtual avatars that differed in nothing but skin tone. Modeling the data via drift diffusion models revealed that the threat associated with a social group can be explicitly learned and mapped onto an a-priori response bias within the model (Study 1). Studies 2 and 3 replicated the racial shooter bias, apparent in faster reaction times on stereotype-consistent trials. This, however, appears to result from stereotype-consistent motor preparation and execution readiness, not from pre-judicial threat biases. The results have implications especially for automatic stereotypes held by the public.
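In the drift diffusion framework used here, the key distinction is where a bias enters the decision process: a shifted starting point (a-priori response bias) versus a shifted drift rate (biased information accumulation), with motor preparation captured outside the accumulation process. Below is a minimal simulation sketch of the first two components, assuming illustrative parameter values and a plain Euler scheme; it is not the authors' fitting procedure.

```python
import numpy as np

def simulate_ddm(drift, start_bias, threshold=1.0, noise=1.0,
                 dt=0.001, max_t=3.0, rng=None):
    """Simulate one diffusion trial; returns (choice, reaction time in s).

    start_bias is the starting point as a fraction of the threshold
    (0.5 = unbiased); choice 1 = upper boundary ("shoot"),
    choice 0 = lower boundary ("don't shoot") or a timed-out trial.
    """
    rng = rng or np.random.default_rng()
    x = start_bias * threshold
    t = 0.0
    while 0.0 < x < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else 0), t

rng = np.random.default_rng(0)
# A-priori response bias: starting point shifted toward "shoot".
biased_start = [simulate_ddm(drift=1.0, start_bias=0.65, rng=rng) for _ in range(2000)]
# Biased information accumulation: faster drift toward "shoot".
biased_drift = [simulate_ddm(drift=1.5, start_bias=0.50, rng=rng) for _ in range(2000)]

for label, trials in [("start-point bias", biased_start), ("drift bias", biased_drift)]:
    shoot_rts = [rt for choice, rt in trials if choice == 1]
    p_shoot = np.mean([choice for choice, _ in trials])
    print(f"{label}: P(shoot)={p_shoot:.2f}, mean RT(shoot)={np.mean(shoot_rts):.3f}s")
```

In fitted models, these distinct reaction-time and choice signatures are what allow the two sources of bias to be told apart.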


Author(s):  
Thomas A. Norton ◽  
Melissa Ruhl ◽  
Tim Armitage ◽  
Brian Matthews ◽  
John Miles

The development of autonomous vehicles (AVs) is advancing quickly in some enclaves around the world. Consequently, AVs exist in the public consciousness, featuring regularly in mainstream media. As the form and function of AVs emerge, the attitudes of potential users become more important. The extent to which the public trusts AV technology and anticipates benefits will drive consumer willingness to use AVs. Broadly, public attitudes will determine whether AVs can attract public investment in infrastructure and become a feature of the future transport mix, or fail to realize the potential their developers assert. As part of UK Autodrive, a program trialing the introduction of AVs in the United Kingdom, researchers conducted focus groups in five UK cities and a comparison focus group in San Francisco (December 2017 to September 2018) using representative samples (total n = 137). Focus group facilitators guided discussions in three areas considered central to usage decisions: trust in the technology, ownership models, and community benefit. This paper describes findings from a quasi-quantitative study supported by qualitative insights. The research provides three key takeaways centering on trust in the technology and in its capacity to deliver benefits. First, some participants gain trust through experience and others through evidence. Second, participants had difficulty discriminating between AV developers, indicating a need for industry cooperation. Third, partnerships were found to demonstrate trust, highlighting the need for more, and deeper, partnerships moving forward. Generally, participants had positive attitudes toward AVs and expected them to provide benefits. However, these attitudes and expectations could change as AV development progresses.


1988 ◽  
Vol 18 (1) ◽  
pp. 43-66 ◽  
Author(s):  
Albert Casullo

Empiricist theories of knowledge are attractive, for they offer the prospect of a unitary theory of knowledge based on relatively well-understood physiological and cognitive processes. Mathematical knowledge, however, has been a traditional stumbling block for such theories. Three primary features of mathematical knowledge have led epistemologists to conclude that it cannot be accommodated within an empiricist framework: 1) mathematical propositions appear to be immune from empirical disconfirmation; 2) mathematical propositions appear to be known with certainty; and 3) mathematical propositions are necessary. Epistemologists who believe that some nonmathematical propositions, such as logical or ethical propositions, cannot be known a posteriori also typically appeal to the three factors cited above in defending their position. The primary purpose of this paper is to examine whether any of these alleged features of mathematical propositions establishes that knowledge of such propositions cannot be a posteriori.


Author(s):  
D. Egorov

Adam Smith defined economics as “the science of the nature and causes of the wealth of nations” (implicitly appealing, through “wealth”, to the notion of “value”). Neo-classical theory views it as the science “which studies human behaviour as a relationship between ends and scarce means which have alternative uses”. The main reason the neo-classical theory (which serves as the now-prevailing economic mainstream) turns into a tool for manipulating public consciousness is the lack of a measure (the elimination of “value”). Even though the neo-classical definition of the subject of economics does not explicitly reject objective measures, the reference to “human behavior” inevitably implies methodological subjectivism. This makes it necessary to adopt a principle of equilibrium: if different states of the system cannot be compared objectively (against a solid measure), one can only postulate the existence of an equilibrium point toward which the system tends. The neo-classical postulate of equilibrium, however, cannot explain non-equilibrium situations. As a result, the neo-classical theory fails to reconcile microeconomics with macroeconomics. Moreover, the denial of the category of “value” serves as a theoretical basis and an ideological prerequisite of the now-flourishing manipulative financial technologies. The author proposes two principal definitions: (1) economics is the science that studies the economic system, i.e. a system that creates and recombines value; (2) value is a measure of the cost of an object. In the author's view, value is an informational measure of cost. It should be added that disclosing the nature of this category is not an obligatory prerequisite for its introduction: methodologically, it is quite correct to postulate it a priori. The author concludes that the proposed definitions open the way not only to solving the problem of measurement in economics, but also to harmonizing macro- and microeconomics.


1993 ◽  
Vol 70 (1) ◽  
pp. 431-443 ◽  
Author(s):  
E. M. Bowman ◽  
V. J. Brown ◽  
C. Kertzman ◽  
U. Schwarz ◽  
D. L. Robinson

1. A task devised by Posner (1980) was used to measure shifts of attention that occur covertly, in the absence of an eye movement or other orienting response. This paradigm was used here to assess the nature of covert attentional orienting in monkeys and to develop an animal model for neurophysiological studies. Shifts of attention were measurable in monkeys and were consistent across a variety of experimental conditions. 2. The paradigm required that monkeys fixate and release a bar at the appearance of a target, which was preceded by a cue. Reaction times to targets that followed peripheral cues at the same location (validly cued) were significantly faster than those that followed cues in the opposite visual field (invalidly cued). This difference was defined as the validity effect, which, as in humans, is used as the measure of a covert attentional shift. 3. When the proportion of validly to invalidly cued targets was decreased, no change was seen in the monkeys' validity effect. This is in contrast to humans, for whom the ratio of validly to invalidly cued targets affected the magnitude of the validity effect: when 80% of the targets were preceded by cues at the same location, the validity effect was greatest, and the effect was reversed when the proportions were reversed. From this result, it is concluded that cognitive processes can affect covert orienting to peripheral cues in humans, whereas in trained monkeys performance was automatic. 4. To test whether cognitive influences on attention could be demonstrated in the monkey, an animal was taught to use symbolic, foveal signals to covertly direct attention. The magnitude of this validity effect was greater than that obtained with peripheral cues. 5. The effects of motivational and perceptual processes were tested. Although overall reaction times could be modified, the facilitating effects of the cues persisted. This constancy across motivational and perceptual levels supports the notion that the monkeys were performing the task in an automatic way, under the exogenous control of peripheral cues. 6. Most visual cuing has been tested with visual landmarks at the locations of cues and targets. These monkeys were trained with such landmarks, and when tested without them, the attentional effect of the cues was nearly abolished. These data suggest that local visual features can be important for covert orienting. 7. To determine the spatial extent of the effect of the cue, monkeys and humans were tested with four cue-target distances (0-60 degrees). (ABSTRACT TRUNCATED AT 400 WORDS)
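For illustration, the validity effect defined in point 2 is simply the reaction-time cost of an invalid cue. A minimal sketch with hypothetical reaction times (the numbers below are made up, not data from the study):

```python
import numpy as np

# Hypothetical reaction times (ms) from one session.
valid_rts = np.array([265, 270, 258, 272, 261])    # cue and target at the same location
invalid_rts = np.array([298, 305, 291, 310, 299])  # cue in the opposite visual field

# Validity effect: mean RT(invalid) - mean RT(valid); a positive value
# is taken as evidence of a covert attentional shift toward the cued location.
validity_effect = invalid_rts.mean() - valid_rts.mean()
print(f"Validity effect: {validity_effect:.1f} ms")
```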


2021 ◽  
Author(s):  
Milou J.L. van Helvert ◽  
Leonie Oostwoud Wijdenes ◽  
Linda Geerligs ◽  
W. Pieter Medendorp

While beta-band activity during motor planning is known to be modulated by uncertainty about where to act, less is known about its modulation by uncertainty about how to act. To investigate this issue, we recorded oscillatory brain activity with EEG while human participants (n = 17) performed a hand-choice reaching task. The reaching hand was either predetermined or left to the participants' choice, and the target was close to one of the two hands or at about equal distance from both. To measure neural activity in a motion-artifact-free time window, the location of the upcoming target was cued 1000-1500 ms before the presentation of the target, whereby the cue was valid in 50% of trials. As evidence for motor planning during the cueing phase, behavioral observations showed that the cue affected later hand choice. Furthermore, reaction times were longer in the choice than in the predetermined trials, supporting the notion of a competitive process for hand selection. Modulations of beta-band power over central cortical regions, but not of alpha-band or theta-band power, were in line with these observations. During the cueing period, reaches in predetermined trials were preceded by larger decreases in beta-band power than reaches in choice trials. Cue direction did not affect reaction times or beta-band power, which may be due to the cue being invalid in 50% of trials, retaining effector uncertainty during motor planning. Our findings suggest that effector uncertainty, similar to target uncertainty, selectively modulates beta-band power during motor planning.

New & Noteworthy: While reach-related beta-band power in central cortical areas is known to modulate with the number of potential targets, here we show, using a cueing paradigm, that power in this frequency band, but not in the alpha or theta band, is also modulated by uncertainty about which hand to use. This finding supports the notion that multiple possible effector-specific actions can be specified in parallel up to the level of motor preparation.
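As a rough illustration of the dependent measure, band-limited power in the cue-target interval can be estimated by band-pass filtering the EEG and averaging the squared signal. A minimal sketch on synthetic single-channel data, assuming a 1000 Hz sampling rate and scipy's IIR filtering; this is not the authors' analysis pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                     # sampling rate in Hz (assumed for this sketch)
t = np.arange(0, 1.5, 1 / fs)   # 1.5 s cue-target interval
rng = np.random.default_rng(1)
# Synthetic single-channel EEG: a 20 Hz (beta) oscillation plus white noise.
eeg = 2.0 * np.sin(2 * np.pi * 20 * t) + rng.standard_normal(t.size)

def band_power(signal, fs, low, high, order=4):
    """Mean power of the signal in the [low, high] Hz band (zero-phase IIR filter)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.mean(filtered ** 2)

beta = band_power(eeg, fs, 13, 30)
alpha = band_power(eeg, fs, 8, 12)
theta = band_power(eeg, fs, 4, 8)
print(f"beta: {beta:.2f}, alpha: {alpha:.2f}, theta: {theta:.2f}")
```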


2021 ◽  
Vol 2 ◽  
Author(s):  
Susan Joslyn ◽  
Sonia Savelli

There is a growing body of evidence that numerical uncertainty expressions can be used by non-experts to improve decision quality. Moreover, there is some evidence that similar advantages extend to graphic expressions of uncertainty. However, visualizing uncertainty introduces challenges as well. Here, we discuss key misunderstandings that may arise from uncertainty visualizations, in particular the evidence that users sometimes fail to realize that the graphic depicts uncertainty. Instead, they tend to interpret the image as representing some deterministic quantity. We refer to this as the deterministic construal error. Although there is now growing evidence for the deterministic construal error, few studies are designed to detect it directly because they inform participants upfront that the visualization expresses uncertainty. In a natural setting such cues would be absent, perhaps making the deterministic assumption more likely. We discuss the psychological roots of this key but underappreciated misunderstanding as well as possible solutions. This is a critical question because it is now clear that members of the public understand that predictions involve uncertainty and have greater trust when uncertainty is included. Moreover, they can understand and use uncertainty predictions to tailor decisions to their own risk tolerance, as long as the predictions are carefully expressed, taking into account the cognitive processes involved.
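As an illustration of the design problem, a forecast can be drawn either as a single line (inviting a deterministic construal) or with its predictive spread made explicit. A minimal matplotlib sketch on made-up ensemble forecast data, not tied to any stimulus from the studies discussed:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
hours = np.arange(0, 48)
# Made-up ensemble of 30 temperature forecasts whose spread grows with lead time.
ensemble = 15 + 0.1 * hours + rng.standard_normal((30, hours.size)).cumsum(axis=1) * 0.3

mean = ensemble.mean(axis=0)
lo, hi = np.percentile(ensemble, [10, 90], axis=0)

fig, ax = plt.subplots()
ax.fill_between(hours, lo, hi, alpha=0.3, label="10th-90th percentile")  # explicit uncertainty
ax.plot(hours, mean, label="ensemble mean")                              # central estimate
ax.set_xlabel("Forecast lead time (h)")
ax.set_ylabel("Temperature (°C)")
ax.set_title("Forecast with an explicit uncertainty band")
ax.legend()
plt.show()
```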


2019 ◽  
Vol 67 (3) ◽  
pp. 231-263 ◽  
Author(s):  
Anatol Stefanowitsch

There is widespread agreement that the so-called ‘Brexit’ – the withdrawal of the United Kingdom from the European Union – is a fundamentally populist project. However, the language of the public face of this project, Prime Minister Theresa May, has not, so far, been studied with respect to populist speech patterns. This paper presents a series of quantitative case studies aimed at closing this research gap. The first study attempts to identify evidence of populist speech patterns by means of a keyword analysis, the second study looks at the phrase the (British) people, the third study at the phrase the will of the people, and the fourth at references to the past and the future. While these are based on a priori hypotheses about populist speech patterns, a fifth case study looks at the verb deliver and the noun deal, which are inductively identified as typical of May’s statements concerning Brexit, and shows how they allow May to construct a populist discourse without taking the role of the populist. All case studies are based on a dedicated corpus of almost 270,000 tokens consisting of speeches and other spoken and written statements by Theresa May.
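For readers unfamiliar with keyword analysis, keyness is typically computed by comparing a word's frequency in the target corpus against a reference corpus, for example with the log-likelihood (G²) statistic. A minimal sketch with invented counts, not the author's implementation:

```python
import math

def log_likelihood(freq_target, size_target, freq_ref, size_ref):
    """Keyness (G2) of a word: target-corpus frequency vs. reference-corpus frequency."""
    # Expected frequencies under the null hypothesis of equal relative frequency.
    total = size_target + size_ref
    expected_target = size_target * (freq_target + freq_ref) / total
    expected_ref = size_ref * (freq_target + freq_ref) / total
    g2 = 0.0
    for observed, expected in [(freq_target, expected_target), (freq_ref, expected_ref)]:
        if observed > 0:
            g2 += 2 * observed * math.log(observed / expected)
    return g2

# Invented counts: occurrences of "deliver" in a ~270,000-token target corpus
# versus a 1,000,000-token reference corpus.
print(round(log_likelihood(420, 270_000, 350, 1_000_000), 1))
```

A large G² for a word such as deliver would mark it as disproportionately frequent in the target corpus relative to the reference corpus.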


Author(s):  
Genevieve Sansone ◽  
Geoffrey T. Fong ◽  
Gang Meng ◽  
Lorraine V. Craig ◽  
Steve S. Xu ◽  
...  

Comprehensive smoke-free policies such as those called for by the WHO FCTC are the only way to protect the public effectively from the harms of secondhand smoke (SHS), yet Japan has been slow to implement this important health measure. This study examines baseline levels of smoking and SHS exposure in public places and support for smoking bans in Japan prior to the implementation of the 2018 national smoke-free law. Data are from the International Tobacco Control (ITC) Japan Wave 1 Survey (Feb–Mar 2018), a web survey of adult cigarette smokers, heated tobacco product users, dual users, and non-users (total N = 4684). Measures included prevalence of smoking (whether respondents noticed people smoking inside restaurants and bars at their last visit, and workplaces in the last month), and support for complete smoking bans in these venues. Smoking prevalence in each venue was high overall in 2018 (49% of workplaces, 55% of restaurants, and 83% of bars), even higher than in China, the country with the greatest toll of SHS. Support for complete smoking bans was very high overall (81% for workplaces, 78% for restaurants, and 65% for bars). Non-users were less likely to be exposed to SHS and had higher support for smoking bans than tobacco users. These findings point to the ineffectiveness of partial smoke-free laws in Japan and reinforce the call for comprehensive smoke-free laws, which even smokers would support at higher levels than in many other ITC countries.


2020 ◽  
Vol 11 (1) ◽  
pp. 59-74
Author(s):  
Lambèr Royakkers ◽  
Rinie van Est

The new wave of digitization and the ensuing cybernetic loop mean that biological, social, and cognitive processes can be understood as information processes and systems, and can thus be digitally programmed and controlled. Digital control offers society and the individuals in that society a multitude of opportunities, but also brings new social and ethical challenges. Important public values are at stake, closely linked to fundamental and human rights. This paper focuses on the public value of autonomy, and shows that digitization (through the analysis and application of data) can have a profound effect on this value across many aspects of our lives: material, biological, and socio-cultural. Since the supervision of autonomy is hardly organized, we need to clarify, through reflection and joint debate, what kind of control and which human qualities we do not want to lose in the digital future.

