The effects of target discriminability and criterion placement on accuracy rates in sequential and simultaneous target-present lineups

2011 ◽  
Vol 17 (7) ◽  
pp. 587-610
Author(s):  
Heather Flowe ◽  
Anneka Bessemer
2021 ◽  
Vol 9 (1) ◽  
pp. 12
Author(s):  
Ming D. Lim ◽  
Damian P. Birney

Emotional intelligence (EI) refers to a set of competencies to process, understand, and reason with affective information. Recent studies suggest that ability measures of experiential and strategic EI differentially predict performance on non-emotional and emotionally laden tasks. To explore the cognitive processes underlying these abilities further, we varied the affective context of a traditional letter-based n-back working-memory task. In Study 1, participants completed 0-, 2-, and 3-back tasks with flanking distractors that were either emotional (fearful or happy faces) or non-emotional (shape or letter stimuli). Strategic EI, but not experiential EI, significantly influenced participants’ accuracy across all n-back levels, irrespective of flanker type. In Study 2, participants completed 1-, 2-, and 3-back levels. Experiential EI was positively associated with response times for emotional flankers at the 1-back level but not at other levels or flanker types, suggesting that those higher in experiential EI reacted more slowly on low-load trials with affective context. In Study 3, flankers were presented asynchronously, either 300 ms or 1000 ms before the probes. Results mirrored Study 1 for accuracy rates and Study 2 for response times. Our findings (a) provide experimental evidence for the distinctness of experiential and strategic EI and (b) suggest that each is related to different aspects of the cognitive processes underlying working memory.
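For readers unfamiliar with the paradigm, the following is a minimal sketch of how n-back trials with flanker conditions might be generated and scored. The flanker labels, target rate, and function names are illustrative assumptions, not the authors' materials, and the sketch covers only n ≥ 1 (a 0-back block is usually a match-to-fixed-letter task).

```python
import random

LETTERS = list("BCDFGHJKLMNP")
# Hypothetical flanker condition labels; the actual stimuli are the authors' own.
FLANKER_TYPES = ["fearful_face", "happy_face", "shape", "letter"]

def make_nback_trials(n_back, n_trials, target_rate=0.3, seed=0):
    """Generate (letter, flanker_type, is_target) trials for one n-back block (n_back >= 1).
    A trial is a target when its letter matches the letter n_back positions earlier."""
    assert n_back >= 1
    rng = random.Random(seed)
    letters, trials = [], []
    for t in range(n_trials):
        if t >= n_back and rng.random() < target_rate:
            letter = letters[t - n_back]                       # force a target trial
        else:
            candidates = [l for l in LETTERS
                          if t < n_back or l != letters[t - n_back]]
            letter = rng.choice(candidates)                    # force a non-target trial
        letters.append(letter)
        is_target = t >= n_back and letter == letters[t - n_back]
        trials.append((letter, rng.choice(FLANKER_TYPES), is_target))
    return trials

def accuracy(trials, responses):
    """Proportion of trials where the target/non-target response matches the trial type."""
    return sum(r == trial[2] for trial, r in zip(trials, responses)) / len(trials)

block = make_nback_trials(n_back=2, n_trials=20)
```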


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1388
Author(s):  
Sk Mahmudul Hassan ◽  
Arnab Kumar Maji ◽  
Michał Jasiński ◽  
Zbigniew Leonowicz ◽  
Elżbieta Jasińska

The timely identification and early prevention of crop diseases are essential for improving production. In this paper, deep convolutional-neural-network (CNN) models are implemented to identify and diagnose diseases in plants from their leaves, since CNNs have achieved impressive results in the field of machine vision. Standard CNN models require a large number of parameters and come at a high computation cost. In this paper, we replaced standard convolution with depthwise separable convolution, which reduces the parameter count and computation cost. The implemented models were trained on an open dataset consisting of 14 different plant species and 38 different categorical disease classes as well as healthy plant leaves. To evaluate the performance of the models, different parameters such as batch size, dropout, and number of epochs were incorporated. The implemented models achieved disease-classification accuracy rates of 98.42%, 99.11%, 97.02%, and 99.56% using InceptionV3, InceptionResNetV2, MobileNetV2, and EfficientNetB0, respectively, which were greater than those of traditional handcrafted-feature-based approaches. In comparison with other deep-learning models, the implemented models achieved better accuracy and required less training time. Moreover, the MobileNetV2 architecture is compatible with mobile devices using the optimized parameters. The accuracy achieved in identifying diseases shows that the deep CNN model is promising, can greatly aid the efficient identification of plant diseases, and may have potential for disease detection in real-time agricultural systems.
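As a rough illustration of the parameter-saving substitution described above, here is a minimal Keras sketch in which standard 3x3 convolutions are replaced by depthwise separable ones (SeparableConv2D). The layer sizes, dropout rate, and 38-class output are assumptions for illustration, not the authors' exact architectures.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def separable_block(x, filters):
    # Depthwise separable convolution: a per-channel spatial (depthwise) convolution
    # followed by a 1x1 pointwise convolution, which cuts parameters and FLOPs
    # relative to a standard 3x3 convolution with the same number of filters.
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    return layers.MaxPooling2D(2)(x)

def build_leaf_classifier(num_classes=38, input_shape=(224, 224, 3)):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128, 256):
        x = separable_block(x, filters)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)                       # dropout rate as a tuned hyperparameter
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_leaf_classifier()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```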


Animals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1485
Author(s):  
Kaidong Lei ◽  
Chao Zong ◽  
Xiaodong Du ◽  
Guanghui Teng ◽  
Feiqi Feng

This study proposes a method and device for the intelligent mobile monitoring of oestrus on a sow farm, applied in the field of sow production. A bionic boar model that imitates the sounds, smells, and touch of real boars was built to detect oestrus in sows after weaning. Machine vision technology was used to identify the interactive behaviour between empty sows and bionic boars and to establish deep belief network (DBN), sparse autoencoder (SAE), and support vector machine (SVM) models, with resulting recognition accuracy rates of 96.12%, 98.25%, and 90.00%, respectively. The interaction times and frequencies between the sow and the bionic boar and the static behaviours of both ears during heat were further analysed. The results show a strong correlation between the duration of contact between the oestrus sow and the bionic boar and the static behaviours of both ears. The average contact duration between sows in oestrus and the bionic boars was 29.7 s/3 min, and the average duration in which the ears of the oestrus sows remained static was 41.3 s/3 min. The interactions between the sow and the bionic boar were used as the basis for judging the sow’s oestrus state. In contrast to the methods of other studies, the proposed innovative design of recyclable bionic boars can be used to check oestrus, and machine vision technology can be used to quickly identify oestrus behaviours. This approach can more accurately determine the oestrus duration of a sow and provide a scientific reference for a sow’s conception time.
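To make the classification step concrete, the snippet below sketches how an SVM could be trained on behavioural features such as contact duration and ear-static duration per 3-minute clip. The synthetic numbers and the two-feature setup are assumptions for illustration only, not the authors' data or pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in features per 3-minute clip: [contact duration (s), ear-static duration (s)].
# Oestrus clips cluster around longer durations, non-oestrus clips around shorter ones.
oestrus    = rng.normal(loc=[30.0, 41.0], scale=5.0, size=(40, 2))
no_oestrus = rng.normal(loc=[8.0, 15.0], scale=5.0, size=(40, 2))
X = np.vstack([oestrus, no_oestrus])
y = np.array([1] * 40 + [0] * 40)          # 1 = oestrus, 0 = not in oestrus

# RBF-kernel SVM on standardized behavioural features, evaluated with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```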


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Hakan Gunduz

Abstract In this study, the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear, deep-learning (LSTM), and ensemble-learning (LightGBM) models. These models were trained with four different feature sets, and their performance was evaluated in terms of accuracy and F-measure metrics. While the first experiments directly used each stock’s own features as model inputs, the second experiments used stock features reduced through Variational AutoEncoders (VAE). In the last experiments, to capture the effects of the other banking stocks on individual stock performance, the features belonging to the other stocks were also given as inputs to our models. The other stocks’ features were combined with both the own (named allstock_own) and the VAE-reduced (named allstock_VAE) stock features, and the expanded feature sets were then reduced by Recursive Feature Elimination. The highest accuracy rate, 0.685, was obtained with allstock_own and the LSTM-with-attention model, while the combination of allstock_VAE and the LSTM-with-attention model reached an accuracy rate of 0.675. Although the classification results achieved with the two feature types were close, allstock_VAE achieved these results using nearly 16.67% fewer features than allstock_own. When all experimental results were examined, it was found that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features. It was also concluded that the results obtained with the VAE-reduced stock features were similar to those obtained with the own stock features.
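As a sketch of what an "LSTM with attention" direction classifier might look like, the Keras model below pools LSTM hidden states with a simple additive attention layer. The layer sizes, 24-hour window, and 20 input features are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lstm_attention(timesteps, n_features):
    """Binary hourly-direction classifier: LSTM encoder followed by attention
    pooling over time steps (one plausible reading of 'LSTM with attention')."""
    inputs = layers.Input(shape=(timesteps, n_features))
    h = layers.LSTM(64, return_sequences=True)(inputs)         # one hidden state per hour
    scores = layers.Dense(1, activation="tanh")(h)             # unnormalized attention scores
    weights = layers.Softmax(axis=1)(scores)                   # attention weights over time steps
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])
    outputs = layers.Dense(1, activation="sigmoid")(context)   # up/down direction probability
    return models.Model(inputs, outputs)

model = build_lstm_attention(timesteps=24, n_features=20)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```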


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 428
Author(s):  
Hyun Kwon ◽  
Jun Lee

This paper presents research focusing on visualization and pattern recognition based on computer science. Although deep neural networks demonstrate satisfactory performance in image and voice recognition, as well as pattern analysis and intrusion detection, they perform poorly against adversarial examples. Introducing some degree of noise into the original data can produce adversarial examples that are misclassified by deep neural networks even though humans still perceive them as normal. In this paper, a robust diversity adversarial training method against adversarial attacks is demonstrated. In this approach, the target model becomes more robust to unknown adversarial examples because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was employed as the deep learning framework, while MNIST and Fashion-MNIST were used as the experimental datasets. Results revealed that the diversity training method lowered the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy rates of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
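The paper's diversity adversarial training method is not detailed in this abstract; as a general illustration of adversarial training, the sketch below mixes clean and adversarial batches, using FGSM at several perturbation strengths as a stand-in adversarial-example generator. The epsilon values, equal loss weighting, and softmax-output assumption are mine, not the paper's.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()   # assumes softmax model outputs

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: perturb inputs along the sign of the loss gradient."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

def adversarial_train_step(model, optimizer, x, y, eps_values=(0.05, 0.1, 0.2)):
    """Train on the clean batch plus adversarial variants at several perturbation
    strengths, a simple stand-in for training on a diversity of adversarial samples."""
    batches = [x] + [fgsm(model, x, y, eps) for eps in eps_values]
    with tf.GradientTape() as tape:
        losses = [loss_fn(y, model(b, training=True)) for b in batches]
        loss = tf.add_n(losses) / len(batches)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```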


2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Jian-ye Yuan ◽  
Xin-yuan Nan ◽  
Cheng-rong Li ◽  
Le-le Sun

Considering that garbage classification is urgent, a 23-layer convolutional neural network (CNN) model is designed in this paper, with an emphasis on real-time garbage classification, to address the low accuracy of garbage classification and recycling and the difficulty of manual recycling. Firstly, depthwise separable convolution was used to reduce the Params of the model. Then, an attention mechanism was used to improve the accuracy of the garbage classification model. Finally, model fine-tuning was used to further improve the performance of the garbage classification model. Besides, we compared the model with classic image classification models, including AlexNet, VGG16, and ResNet18, and lightweight classification models, including MobileNetV2 and ShuffleNetV2, and found that the model GAF_dense has a higher accuracy rate and fewer Params and FLOPs. To further check the performance of the model, we tested it on the CIFAR-10 data set and found that the accuracy rates of the model (GAF_dense) are 0.018 and 0.03 higher than those of ResNet18 and ShuffleNetV2, respectively. On the ImageNet data set, the accuracy rates of the model (GAF_dense) are 0.225 and 0.146 higher than those of ResNet18 and ShuffleNetV2, respectively. Therefore, the garbage classification model proposed in this paper is suitable for garbage classification and other classification tasks to protect the ecological environment, and can be applied to classification tasks in areas such as environmental science, children’s education, and environmental protection.
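The abstract does not say which attention mechanism GAF_dense uses; as one plausible lightweight choice, the sketch below shows a squeeze-and-excitation style channel-attention block that could be dropped into a CNN of this kind. It is an illustration under that assumption, not the paper's implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def se_attention(x, reduction=8):
    """Squeeze-and-excitation style channel attention: reweight each feature-map
    channel by a learned scalar in (0, 1)."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                       # squeeze: per-channel statistics
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)          # excitation: channel weights
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                             # channel-wise reweighting
```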


2013 ◽  
Vol 28 (3) ◽  
pp. 802-814 ◽  
Author(s):  
Timothy W. Armistead

Abstract The paper briefly reviews measures that have been proposed since the 1880s to assess accuracy and skill in categorical weather forecasting. The majority of the measures consist of a single expression, for example, a proportion, the difference between two proportions, a ratio, or a coefficient. Two exemplar single-expression measures for 2 × 2 categorical arrays that chronologically bracket the 130-yr history of this effort—Doolittle's inference ratio i and Stephenson's odds ratio skill score (ORSS)—are reviewed in detail. Doolittle's i is appropriately calculated using conditional probabilities, and the ORSS is a valid measure of association, but both measures are limited in ways that variously mirror all single-expression measures for categorical forecasting. The limitations that variously affect such measures include their inability to assess the separate accuracy rates of different forecast–event categories in a matrix, their sensitivity to the interdependence of forecasts in a 2 × 2 matrix, and the inapplicability of many of them to the general k × k (k ≥ 2) problem. The paper demonstrates that Wagner's unbiased hit rate, developed for use in categorical judgment studies with any k × k (k ≥ 2) array, avoids these limitations while extending the dual-measure Bayesian approach proposed by Murphy and Winkler in 1987.
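For concreteness, the snippet below computes two of the measures discussed from a contingency table: the odds ratio skill score for a 2x2 array and Wagner's unbiased hit rate per category for a general k x k array (rows taken as forecasts, columns as observed events). The counts are illustrative only.

```python
import numpy as np

def odds_ratio_skill_score(table):
    """ORSS for a 2x2 forecast-event table [[a, b], [c, d]], where a = hits,
    b = false alarms, c = misses, d = correct rejections: (ad - bc) / (ad + bc)."""
    (a, b), (c, d) = np.asarray(table, dtype=float)
    return (a * d - b * c) / (a * d + b * c)

def unbiased_hit_rates(table):
    """Wagner's unbiased hit rate per category of a k x k matrix:
    Hu_i = n_ii**2 / (row_i_total * column_i_total)."""
    m = np.asarray(table, dtype=float)
    return np.diag(m) ** 2 / (m.sum(axis=1) * m.sum(axis=0))

table = [[30, 10], [5, 55]]            # illustrative counts only
print(odds_ratio_skill_score(table))
print(unbiased_hit_rates(table))
```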


2018 ◽  
Vol 15 (9) ◽  
pp. 820-827 ◽  
Author(s):  
Ryan Van Patten ◽  
Anne M. Fagan ◽  
David A.S. Kaufman

Background: There exists a need for more sensitive measures capable of detecting subtle cognitive decline due to Alzheimer's disease. Objective: To advance the literature in Alzheimer’s disease by demonstrating that performance on a cued-Stroop task is impacted by preclinical Alzheimer's disease neuropathology. Method: Twenty-nine cognitively asymptomatic older adults completed a computerized, cued-Stroop task in which accuracy rates and intraindividual variability in reaction times were the outcomes of interest. Cerebrospinal fluid biomarkers of Aβ42 and tau were measured and participants were then grouped according to a published p-tau/Aβ42 cutoff reflecting risk for Alzheimer’s disease (preclinical Alzheimer's disease = 14; control = 15). Results: ANOVAs indicated that accuracy rates did not differ between the groups but 4-second delay incongruent color-naming Stroop coefficient of variation reaction times were higher in the preclinical Alzheimer’s disease group compared to the control group, reflecting increased within-person variability. Moreover, partial correlations showed no relationships between cerebrospinal fluid biomarkers and accuracy rates. However, increases in coefficient of variation reaction times correlated with decreased Aβ42 and increases in p-tau and the p-tau/Aβ42 ratio. Conclusion: Results supported the ability of the computerized, cued-Stroop task to detect subtle Alzheimer’s disease neuropathology using a small cohort of cognitively asymptomatic older adults. The ongoing measurement of cued-Stroop coefficient of variation reaction times has both scientific and clinical utility in preclinical Alzheimer’s disease.
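The intraindividual-variability outcome used here can be illustrated with a short computation of the coefficient of variation (standard deviation divided by mean) of a participant's reaction times within a condition. The reaction-time values below are invented for illustration only.

```python
import numpy as np

def coefficient_of_variation(reaction_times_ms):
    """Intraindividual variability: SD / mean of a participant's reaction times
    (correct trials) within one task condition."""
    rt = np.asarray(reaction_times_ms, dtype=float)
    return rt.std(ddof=1) / rt.mean()

# Hypothetical RTs (ms) on 4-s delay incongruent colour-naming trials.
print(coefficient_of_variation([612, 655, 640, 701, 623]))    # steadier responder
print(coefficient_of_variation([590, 880, 640, 1010, 605]))   # more variable responder
```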


2014 ◽  
Vol 20 (2) ◽  
pp. 196-203 ◽  
Author(s):  
Alexander Mason ◽  
Renee Paulsen ◽  
Jason M. Babuska ◽  
Sharad Rajpal ◽  
Sigita Burneikiene ◽  
...  

Object Several retrospective studies have demonstrated higher accuracy rates and increased safety for navigated pedicle screw placement than for free-hand techniques; however, the accuracy differences between navigation systems have not been extensively studied. In some instances, 3D fluoroscopic navigation methods have been reported to be no more accurate than 2D navigation methods for pedicle screw placement. The authors of this study endeavored to determine whether 3D fluoroscopic navigation methods result in higher placement accuracy of pedicle screws. Methods A systematic analysis was conducted to examine pedicle screw insertion accuracy based on the use of 2D, 3D, and conventional fluoroscopic image guidance systems. A PubMed and MEDLINE database search was conducted to review the published literature that focused on the accuracy of pedicle screw placement using intraoperative, real-time fluoroscopic image guidance in spine fusion surgeries. The pedicle screw accuracy rates were segregated according to spinal level because each spinal region has individual anatomical and morphological variations. Descriptive statistics were used to compare the pedicle screw insertion accuracy rates among the navigation methods. Results A total of 30 studies were included in the analysis. The data were abstracted and analyzed for the following groups: 12 data sets that used conventional fluoroscopy, 8 data sets that used 2D fluoroscopic navigation, and 20 data sets that used 3D fluoroscopic navigation. These studies included 1973 patients in whom 9310 pedicle screws were inserted. With conventional fluoroscopy, 2532 of 3719 screws were inserted accurately (68.1% accuracy); with 2D fluoroscopic navigation, 1031 of 1223 screws were inserted accurately (84.3% accuracy); and with 3D fluoroscopic navigation, 4170 of 4368 screws were inserted accurately (95.5% accuracy). The accuracy rates for 3D fluoroscopic navigation were also consistently higher than those for 2D fluoroscopic navigation across all individual spinal levels. Conclusions Three-dimensional fluoroscopic image guidance systems demonstrated significantly higher pedicle screw placement accuracy than conventional fluoroscopy or 2D fluoroscopic image guidance methods.
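The pooled accuracy rates quoted above follow directly from the reported screw counts; the short computation below reproduces them.

```python
# Pooled screw-placement accuracy per guidance method, using the counts reported
# in the abstract (accurately placed screws / total screws).
counts = {
    "conventional fluoroscopy":  (2532, 3719),
    "2D fluoroscopic navigation": (1031, 1223),
    "3D fluoroscopic navigation": (4170, 4368),
}
for method, (accurate, total) in counts.items():
    print(f"{method}: {accurate / total:.1%}")   # 68.1%, 84.3%, 95.5%
```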

