correct rejection
Recently Published Documents

TOTAL DOCUMENTS: 34 (five years: 12)
H-INDEX: 8 (five years: 2)

2021 ◽ Vol 11 (9) ◽ pp. 1146
Author(s): E. Leslie Cameron ◽ E. P. Köster ◽ Per Møller

Memory for odors is believed to be longer-lasting than memory for visual stimuli, as is evidenced by flat forgetting curves. However, performance on memory tasks is typically weaker in olfaction than in vision. Studies of odor memory that use forced-choice methods confound responses that result from a trace memory with responses that can be obtained through a process of elimination. Moreover, odor memory is typically measured with common stimuli, which are more familiar and for which responses may be confounded by verbal memory, and under intentional learning conditions, which are ecologically questionable. Here we demonstrate the value of using tests of memory in which hit rate and correct rejection rate are evaluated separately (i.e., not using forced-choice methods) and uncommon stimuli are used. This study compared memory for common and uncommon odors and pictures that were learned either intentionally (Exp. 1) or incidentally (Exp. 2) and tested with either a forced-choice or a one-stimulus-at-a-time (“monadic”) recognition task after delays of 15 min, 48 h or 1 week. As expected, memory declined with delay in most conditions, but the decline depended upon the particular measure of memory, and memory was better for pictures than odors and for common than uncommon stimuli. For common odors, hit rates decreased with delay but correct rejection rates remained constant. For common pictures, we found the opposite result: constant hit rates and decreasing correct rejection rates. Our results support the ‘misfit theory of conscious olfactory perception’, which highlights the importance of the detection of novelty in olfactory memory, and suggest that olfactory memory should be studied using more ecologically valid methods.
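Because the argument turns on scoring hit rate and correct rejection rate separately rather than collapsing them into a single forced-choice score, here is a minimal Python sketch of how a monadic (one-stimulus-at-a-time) recognition test can be scored. The counts and the helper name `recognition_rates` are hypothetical illustrations, not data or code from the study.

```python
def recognition_rates(hits, misses, correct_rejections, false_alarms):
    """Return (hit_rate, correct_rejection_rate) from raw yes/no response counts."""
    hit_rate = hits / (hits + misses)                                    # old items called "old"
    cr_rate = correct_rejections / (correct_rejections + false_alarms)  # new items called "new"
    return hit_rate, cr_rate

# Illustrative counts only -- not data from the study.
hr, cr = recognition_rates(hits=18, misses=6, correct_rejections=20, false_alarms=4)
print(f"hit rate = {hr:.2f}, correct rejection rate = {cr:.2f}")
```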


Author(s): Shayne Loft ◽ Adella Bhaskara ◽ Brittany A. Lock ◽ Michael Skinner ◽ James Brooks ◽ ...

Objective: Examine the effects of decision risk and automation transparency on the accuracy and timeliness of operator decisions, automation verification rates, and subjective workload. Background: Decision aids typically benefit performance but can provide incorrect advice due to contextual factors, creating the potential for automation disuse or misuse. Decision aids can reduce an operator’s manual problem evaluation, and it can also be strategic for operators to minimize verification of automated advice in order to manage workload. Method: Participants assigned the optimal unmanned vehicle to complete missions. A decision aid provided advice but was not always reliable. Two levels of decision aid transparency were manipulated between participants. The risk associated with each decision was manipulated using a financial incentive scheme. Participants could use a calculator to verify automated advice; however, this incurred a financial penalty. Results: For high-risk compared with low-risk decisions, participants were more likely to reject incorrect automated advice, more likely to verify automation, and reported higher workload. Increased transparency did not lead to more accurate decisions and did not affect workload, but it decreased automation verification and eliminated the increased decision time associated with high decision risk. Conclusion: Increased automation transparency was beneficial in that it decreased automation verification and decreased decision time. The increased workload and automation verification for high-risk missions are not necessarily problematic given the improved automation correct rejection rate. Application: The findings have potential application to the design of interfaces that improve human–automation teaming and to anticipating the impact of decision risk on operator behavior.
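For readers unfamiliar with how an automation "correct rejection" rate and a verification rate fall out of trial logs, the following sketch shows one way such trials could be scored. The trial records, field names, and counts are illustrative assumptions, not the study’s data or logging format.

```python
# Each trial: did the operator verify the advice, was the advice correct,
# and did the operator accept it? (Hypothetical records for illustration.)
trials = [
    {"verified": True,  "advice_correct": False, "accepted": False},  # correct rejection
    {"verified": False, "advice_correct": True,  "accepted": True},
    {"verified": True,  "advice_correct": True,  "accepted": True},
    {"verified": False, "advice_correct": False, "accepted": True},   # misuse: bad advice accepted
]

verification_rate = sum(t["verified"] for t in trials) / len(trials)

incorrect_advice = [t for t in trials if not t["advice_correct"]]
correct_rejection_rate = (
    sum(not t["accepted"] for t in incorrect_advice) / len(incorrect_advice)
)

print(f"verification rate = {verification_rate:.2f}, "
      f"automation correct rejection rate = {correct_rejection_rate:.2f}")
```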


Author(s): Bao-Lien Hung ◽ Li-Jung Chen ◽ Yi-Ying Chen ◽ Jhih-Bang Ou ◽ Shih-Hua Fang

Background: Nicotine is beneficial to mood, arousal and cognition in humans. Because cognitive functioning is important for archery athletes, we investigated the effects of nicotine supplementation on the cognitive abilities, heart rate variability (HRV), and sport performance of professional archers. Methods: Eleven college archers were recruited and given 2 mg of nicotine (NIC group) and a placebo (PLA group) in a crossover design. Results: At 30 min after the intake of nicotine gum, the “correct rejection” time in the NIC group was significantly lower than that of the PLA group (7.29 ± 0.87 vs. 8.23 ± 0.98 msec, p < 0.05). In addition, the NIC group completed the grooved pegboard test in a shorter time than the PLA group (48.76 ± 3.18 vs. 53.41 ± 4.05 s, p < 0.05), whereas motor reaction times did not differ between the two groups. Saliva α-amylase activity was significantly lower after nicotine supplementation (p < 0.01) but increased immediately after the archery test in the NIC group (p < 0.05). In addition, nicotine supplementation significantly decreased HRV and increased the archery score (290.58 ± 10.09 vs. 298.05 ± 8.56, p < 0.01). Conclusions: Nicotine enhances the performance of archery athletes by increasing cognitive function and stimulating the sympathetic adrenergic system.
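Because the crossover design compares each archer with themselves across the NIC and PLA conditions, a paired comparison is the natural analysis. The sketch below is an assumed illustration with hypothetical scores, not the statistics actually reported in the abstract.

```python
import numpy as np
from scipy.stats import ttest_rel

# Illustrative archery scores for the same 11 archers under each condition
# (hypothetical values, not the study's data).
placebo  = np.array([288, 292, 285, 290, 295, 287, 291, 289, 293, 286, 290])
nicotine = np.array([296, 299, 293, 297, 301, 295, 300, 296, 299, 294, 298])

t_stat, p_value = ttest_rel(nicotine, placebo)  # paired (within-subject) t-test
print(f"mean difference = {np.mean(nicotine - placebo):.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```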


2021
Author(s): Moritz Heene ◽ Michael Maraun ◽ Nadine J. Glushko ◽ Sunthud Pornprasertmanit

To provide researchers with a means of assessing the fit of the structural component of structural equation models, structural fit indices (modifications of the composite fit indices RMSEA, SRMR, and CFI) have recently been developed. We investigated the performance of four of these structural fit indices (RMSEA-P, RMSEA-S, SRMR-S, and CFI-S), when paired with widely accepted cutoff values, in the service of detecting structural misspecification. In particular, by way of a simulation study, for each of seven fit indices (three composite and four structural) and the traditional chi-square test of perfect composite fit, we estimated the following rates: (a) the Type I error rate (i.e., the probability of (incorrect) rejection of a correctly specified structural component), under each of four degrees of misspecification in the measurement component; and (b) power (i.e., the probability of (correct) rejection of an incorrectly specified structural model), under each condition formed by pairing one of three degrees of structural misspecification with one of four degrees of measurement-component misspecification. In addition to sample size, the impacts of two model features incidental to model misspecification (the number of manifest variables per latent variable and the magnitude of the factor loadings) were investigated. The results suggested that, although the structural fit indices performed relatively better than the composite fit indices, none of the fit index/cutoff-value (GFI-CV) pairings was capable of delivering an entirely satisfactory Type I error rate/power balance, with [RMSEA-S, .05] failing entirely in this regard. Of the remaining pairings: (a) RMSEA-P and CFI-S suffered from a severely inflated Type I error rate; (b) despite the fact that they were designed to pick up on structural features of candidate models, all pairings, and especially RMSEA-P and CFI-S, manifested sensitivities to model features incidental to structural misspecification; and (c) although in the main behaving in a sensible fashion, SRMR-S was only sensitive to structural misspecification when it occurred at a relatively high degree.
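The Type I error rate and power estimates in such a simulation study reduce to rejection proportions across replications. The sketch below illustrates only that bookkeeping, using assumed placeholder sampling distributions of an RMSEA-like index; it does not implement the structural fit indices or the SEM estimation studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_rate(fit_values, cutoff):
    """Proportion of replications in which the model is rejected (index > cutoff)."""
    return np.mean(np.asarray(fit_values) > cutoff)

# Placeholder sampling distributions of an RMSEA-like index over 1,000 replications
# (assumed values for illustration, not results from the simulation study).
correctly_specified = rng.normal(loc=0.03, scale=0.02, size=1000).clip(min=0)
misspecified        = rng.normal(loc=0.09, scale=0.02, size=1000).clip(min=0)

cutoff = 0.05
type_I_error = rejection_rate(correctly_specified, cutoff)  # rejections of a correct structure
power        = rejection_rate(misspecified, cutoff)         # rejections of a misspecified structure
print(f"Type I error = {type_I_error:.2f}, power = {power:.2f}")
```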


2020 ◽ Vol 24 (4) ◽ pp. 657-668
Author(s): Petar Bojanic

The article's intention is to construct a possible minimal response to violence, that is, to describe what would be a justified (necessary or legitimate) противонасилие (counter-violence). The argument is built on reviving several important philosophical texts in Russian from the first half of the twentieth century, as well as on going beyond that historical moment. Starting with a reconstruction of Tolstoy's criticism of any use of violence, it is then shown that, paradoxically, resistance to Tolstoy's or pseudo-Tolstoy's teachings ends up incorporating Tolstoy's thematization of counter-violence into various theories that sought to legitimate the use of force. In particular, Tolstoy's discovery of a force which, on the one hand, is not grounded in violence and, on the other, is capable of countering violence becomes fundamental in reasoning about the just use of force. A connection is made between Tolstoy and Petar II Petrović Njegoš, who also thematizes the use of force in a Christian perspective. In his view, justice, blessed by the Creator's hand, has the capacity to protect from violent force. Any living thing defends itself from what endangers it by the means the Creator bestowed upon it. Living force and the protective use of force are conceptually linked in Njegoš's reasoning. Thus, only protective force can defeat aggressive force. This is shown to be Njegoš's contribution to the Orthodox Christian discourse on violence. If a force can be counter-violent, the next step in the argument is to search for a protocol that should have universal validity, that is, one valid for all conflicting sides. The protocol of counter-violence requires that, firstly, it is a response to violence; secondly, it interrupts violence and forestalls any possible future violence (it is the last violence); and thirdly, it is subject to verification: it addresses those who are a priori against any response to violence (which usually refers to various forms of Tolstoyism). Finally, it is shown that state power does not create law; rather, it is being right that makes law, or gives life to social order, and can thereby authorize the use of force. This is an innovation in the histories of the justification of force, absent in the West. Aggressive violence can, of necessity, be opposed only in a way that implies the possibility of constituting law and order.


Author(s): Sanne Kellij ◽ Johannes Fahrenfort ◽ Hakwan Lau ◽ Megan A. K. Peters ◽ Brian Odegaard

Detection failures in perceptual tasks can result from different causes: sometimes we may fail to see something because perceptual information is noisy or degraded, and sometimes we may fail to see something due to the limited capacity of attention. Previous work indicates that metacognitive capacities for detection failures may differ depending on the specific stimulus visibility manipulation employed. In this investigation, we measured metacognition while matching performance across two visibility manipulations: phase-scrambling and the attentional blink. As in previous work, metacognitive asymmetries emerged: despite matched type 1 performance, metacognitive ability (measured by the area under the ROC curve) for reporting stimulus absence was higher in the attentional blink condition, an effect mainly driven by metacognitive ability on correct rejection trials. We performed Signal Detection Theoretic (SDT) modeling of the results, showing that differences in metacognition under equal type 1 performance can be explained when the variances of the signal and noise distributions are unequal. Specifically, the present study suggests that phase-scrambled signal trials have a wider distribution (more variability) than attentional blink signal trials, leading to a larger area under the ROC curve for attentional blink trials on which subjects reported stimulus absence. These results provide a theoretical basis for the origin of metacognitive differences on trials where subjects report stimulus absence, and may also explain previous findings in which the absence of evidence during detection tasks results in lower metacognitive performance compared to categorization.
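The unequal-variance explanation can be made concrete with the standard Gaussian SDT result that the area under the ROC depends on the signal distribution's spread as well as its mean separation from noise. The parameter values below are assumptions chosen only to illustrate the direction of the effect, not fits to the study's data.

```python
import numpy as np
from scipy.stats import norm

def roc_auc(d_prime, signal_sd):
    """Area under the ROC for noise ~ N(0, 1) vs. signal ~ N(d_prime, signal_sd**2)."""
    # Equals P(signal sample > noise sample) for two independent Gaussians.
    return norm.cdf(d_prime / np.sqrt(1.0 + signal_sd ** 2))

# Same mean separation, different signal spread (assumed values for illustration).
print(f"equal-variance signal: AUC = {roc_auc(d_prime=1.5, signal_sd=1.0):.3f}")
print(f"wider-variance signal: AUC = {roc_auc(d_prime=1.5, signal_sd=1.5):.3f}")
```

Holding the mean separation fixed, widening the signal distribution lowers the area under the ROC, which is the kind of asymmetry invoked to explain the difference between the two visibility manipulations.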


2020 ◽ Vol 11 (2) ◽ pp. 27-30
Author(s): Hirofumi Hashimoto ◽ Kaede Maeda ◽ Sayaka Tomida ◽ Shigehito Tanida

The current study sought to examine the association between the level of general trust and the accuracy of judgments of others’ cooperativeness. Based on data collected from 107 female first-year undergraduate students, we demonstrated that a high level of general trust was associated with high accuracy in judging group members’ cooperation in a social dilemma game. Additional analysis suggested that the association was present even when judgment accuracy was divided into hit rate (i.e., the rate of correctly judging cooperators as cooperative) and correct rejection rate (i.e., the rate of correctly judging non-cooperators as non-cooperative), after controlling for participants’ judgment bias, Big Five personality traits, and the proportion of cooperators in the group. These results are in accordance with previous studies insofar as they suggest that high trusters are more skilled at discerning others’ trustworthiness. The current study adds to the evidence that high trusters have greater cognitive skills and supports Yamagishi’s emancipation theory of trust.
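One standard way to separate judgment accuracy from judgment bias, given a hit rate and a correct rejection rate, is the signal-detection decomposition into sensitivity (d') and criterion (c). The abstract does not specify its exact bias control, so the sketch below is illustrative only, with hypothetical rates.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, correct_rejection_rate):
    """Return (d_prime, criterion) from hit and correct-rejection rates."""
    false_alarm_rate = 1.0 - correct_rejection_rate
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa            # sensitivity: accuracy independent of bias
    criterion = -0.5 * (z_hit + z_fa)  # response bias toward "cooperative" vs. "non-cooperative"
    return d_prime, criterion

# Illustrative: a judge who labels cooperators correctly 80% of the time
# and non-cooperators correctly 70% of the time.
d, c = sdt_measures(hit_rate=0.80, correct_rejection_rate=0.70)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```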


2020 ◽ Vol 32 (2) ◽ pp. 353-366
Author(s): Alexis D. J. Makin ◽ Giulia Rampone ◽ Amie Morris ◽ Marco Bertamini

The brain can organize elements into perceptually meaningful gestalts. Visual symmetry is a useful tool to study gestalt formation, and we know that there are symmetry-sensitive regions in the extrastriate cortex. However, it is unclear whether symmetrical gestalt formation happens automatically, whatever the participant's current task is. Does the visual brain always organize and interpret the retinal image when possible, or only when necessary? To test this, we recorded an ERP called the sustained posterior negativity (SPN). SPN amplitude increases with the proportion of symmetry in symmetry + noise displays. We compared the SPN across five tasks with different cognitive and perceptual demands. Contrary to our predictions, the SPN was the same across four of the five tasks but selectively enhanced during active regularity discrimination. Furthermore, during regularity discrimination, the SPN was present on hit trials and false alarm trials but absent on miss and correct rejection trials. We conclude that gestalt formation is automatic and task-independent, although it occasionally fails on miss trials. However, it can be enhanced by attention to visual regularity.


2019
Author(s): Maryam Ziaei ◽ Mohammad Reza Bonyadi ◽ David C. Reutens

Reasoning requires initial encoding of the semantic association between premises or assumptions, retrieval of these semantic associations from memory, and recombination of information to draw a logical conclusion. Currently held beliefs can interfere with the content of the assumptions if the two are not congruent and the beliefs are not inhibited. This study aimed to investigate the role of the hippocampus and hippocampal networks during logical reasoning tasks in which the congruence between currently held beliefs and assumptions varies. Younger and older participants completed a series of syllogistic reasoning tasks in which two premises and one conclusion were presented, and they were required to decide whether the conclusion logically followed from the premises. The belief load of the premises was manipulated to be either congruent or incongruent with currently held beliefs. Our whole-brain results showed that older adults recruited the hippocampus during the premise integration stage more than their younger counterparts. Functional connectivity analysis using a hippocampal seed revealed that older, but not younger, adults recruited a hippocampal network that included anterior cingulate and inferior frontal regions when premises were believable. Importantly, this network contributed to better performance on believable inferences only in the older adult group. Further analyses suggested that, in the older adult group, the integrity of the left cingulum bundle was associated with higher correct rejection of believable premises more than of unbelievable ones. Using multimodal imaging, this study highlights the importance of the hippocampus during premise integration and supports a compensatory role of the hippocampal network during logical reasoning in older adults.


Neuroreport ◽ 2019 ◽ Vol 30 (12) ◽ pp. 847-851
Author(s): Ying Chen ◽ Hailu Wang ◽ Qin Zhang ◽ Lixia Cui
