The benefits of impossible tests: Assessing the role of error-correction in the pretesting effect

Author(s):  
Tina Seabrooke ◽  
Chris J. Mitchell ◽  
Andy J. Wills ◽  
Angus B. Inkster ◽  
Timothy J. Hollins

Abstract: Relative to studying alone, guessing the meanings of unknown words can improve later recognition of their meanings, even if those guesses were incorrect – the pretesting effect (PTE). The error-correction hypothesis suggests that incorrect guesses produce error signals that promote memory for the meanings when they are revealed. The current research sought to test the error-correction explanation of the PTE. In three experiments, participants studied unfamiliar Finnish-English word pairs by either studying each complete pair or by guessing the English translation before its presentation. In the latter case, the participants also guessed which of two categories the word belonged to. Hence, guesses from the correct category were semantically closer to the true translation than guesses from the incorrect category. In Experiment 1, guessing increased subsequent recognition of the English translations, especially for translations that were presented on trials in which the participants’ guesses were from the correct category. Experiment 2 replicated these target recognition effects while also demonstrating that they do not extend to associative recognition performance. Experiment 3 again replicated the target recognition pattern, while also examining participants’ metacognitive recognition judgments. Participants correctly judged that their memory would be better after small than after large errors, but incorrectly believed that making any errors would be detrimental, relative to study-only. Overall, the data are inconsistent with the error-correction hypothesis; small, within-category errors produced better recognition than large, cross-category errors. Alternative theories, based on elaborative encoding and motivated learning, are considered.

2000 ◽  
Vol 2 (1) ◽  
pp. 107-123 ◽  
Author(s):  
Muzaffar Iqbal

This article attempts to present a comparative study of the role of two twentieth-century English translations of the Qur'an: ʿAbdullah Yūsuf ʿAlī's The Meaning of the Glorious Qur'ān and Muḥammad Asad's The Message of the Qur'ān. No two men could have been more different in their background, social and political milieu and life experiences than Yūsuf ʿAlī and Asad. Yūsuf ʿAlī was born and raised in British India and had a brilliant but traditional middle-class academic career. Asad traversed a vast cultural and geographical terrain: from a highly-disciplined childhood in Europe to the deserts of Arabia. Both men lived ‘intensely’ and with deep spiritual yearning. At some time in each of their lives they decided to embark upon the translation of the Qur'an. Their efforts have provided us with two incredibly rich monumental works, which both reflect their own unique approaches and the effects of the times and circumstances in which they lived. A comparative study of these two translations can provide rich insights into the exegesis and the phenomenon of human understanding of the divine text.


2005 ◽  
Vol 36 (3) ◽  
pp. 219-229 ◽  
Author(s):  
Peggy Nelson ◽  
Kathryn Kohnert ◽  
Sabina Sabur ◽  
Daniel Shaw

Purpose: Two studies were conducted to investigate the effects of classroom noise on attention and speech perception in native Spanish-speaking second graders learning English as their second language (L2) as compared to English-only-speaking (EO) peers. Method: Study 1 measured children’s on-task behavior during instructional activities with and without soundfield amplification. Study 2 measured the effects of noise (+10 dB signal-to-noise ratio) using an experimental English word recognition task. Results: Findings from Study 1 revealed no significant condition (pre/postamplification) or group differences in observed on-task performance. The main finding from Study 2 was that word recognition performance declined significantly for both L2 and EO groups in the noise condition; however, the impact was disproportionately greater for the L2 group. Clinical Implications: Children learning in their L2 appear to be at a distinct disadvantage when listening in rooms with typical noise and reverberation. Speech-language pathologists and audiologists should collaborate to inform teachers, help reduce classroom noise, increase signal levels, and improve access to spoken language for L2 learners.
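The +10 dB signal-to-noise ratio used in Study 2 can be made concrete with a short sketch of how noise is scaled relative to speech to hit a target SNR. This is illustrative code only; the function names and toy signals below are hypothetical, not the authors' materials:

```python
import math

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise ratio equals snr_db,
    then return the mixed signal."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]

# Toy signals: a sinusoid as "speech", a deterministic stand-in for "noise".
speech = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
noise = [math.sin(12.9898 * t) * 0.5 for t in range(8000)]

mixed = mix_at_snr(speech, noise, 10.0)

# Verify the achieved SNR by recovering the scaled noise from the mixture.
scaled_noise = [m - s for m, s in zip(mixed, speech)]
achieved = 20 * math.log10(rms(speech) / rms(scaled_noise))
print(round(achieved, 3))  # ≈ 10.0
```

The 20 · log10 form is used because SNR is a ratio of amplitudes (RMS levels), not of powers.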


2006 ◽  
Vol 21 (3) ◽  
pp. 205-226 ◽  
Author(s):  
Sandy K. Magee ◽  
Janet Ellis

2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing photos of celebrities with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem is further compounded when makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and copies with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup conditions. We also evaluate the role of partial and full makeup in face images in improving recognition performance. The experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
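As a loose illustration of the augmentation idea (not the paper's actual synthesis method; the regions, tints, and function names below are hypothetical), synthetic makeup variations can be generated by blending tint colors into selected face regions before training:

```python
import random

def synth_makeup(image, region, tint, alpha):
    """Blend a tint color into a rectangular region of an RGB image
    (nested lists of (r, g, b) tuples), imitating makeup shading."""
    top, left, bottom, right = region
    out = [row[:] for row in image]
    for y in range(top, bottom):
        for x in range(left, right):
            out[y][x] = tuple(
                int((1 - alpha) * c + alpha * t)
                for c, t in zip(image[y][x], tint)
            )
    return out

def augment(image, n_variants, seed=0):
    """Return the original image plus n_variants makeup-perturbed copies,
    here perturbing a fixed lower-face region with random reddish tints."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    variants = [image]
    for _ in range(n_variants):
        region = (h // 2, w // 4, h, 3 * w // 4)  # stand-in "lip" area
        tint = (rng.randrange(256), rng.randrange(80), rng.randrange(80))
        variants.append(synth_makeup(image, region, tint, rng.uniform(0.2, 0.6)))
    return variants

# An 8x8 uniform gray "face" expands into a 5-image training set.
face = [[(128, 128, 128)] * 8 for _ in range(8)]
dataset = augment(face, n_variants=4)
print(len(dataset))  # 5: original + 4 synthetic-makeup copies
```

A real pipeline would apply such variants per identity so the network learns features that are stable across makeup conditions.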


2021 ◽  
Vol 13 (10) ◽  
pp. 265
Author(s):  
Jie Chen ◽  
Bing Han ◽  
Xufeng Ma ◽  
Jian Zhang

Underwater target recognition is an important supporting technology for the development of marine resources, and it is mainly limited by the purity of feature extraction and the universality of recognition schemes. The low-frequency analysis and recording (LOFAR) spectrum is one of the key features of an underwater target and can be used for feature extraction. However, complex underwater environmental noise and the extremely low signal-to-noise ratio of the target signal lead to breakpoints in the LOFAR spectrum, which seriously hinder underwater target recognition. To overcome this issue and further improve recognition performance, we adopt a deep-learning approach and propose a novel LOFAR spectrum enhancement (LSE)-based underwater target-recognition scheme, which consists of preprocessing, offline training, and online testing. In preprocessing, we design a multi-step decision algorithm for LOFAR spectrum enhancement that recovers the breakpoints in the LOFAR spectrum. In offline training, the enhanced LOFAR spectrum is adopted as the input of a convolutional neural network (CNN), and a LOFAR-based CNN (LOFAR-CNN) is developed for online recognition. Taking advantage of the powerful feature-extraction capability of CNNs, the proposed LOFAR-CNN further improves recognition accuracy. Finally, extensive simulation results demonstrate that the LOFAR-CNN network can achieve a recognition accuracy of 95.22%, outperforming state-of-the-art methods.
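The breakpoint-recovery step can be sketched in simplified form. The paper's multi-step decision algorithm is more elaborate; the version below (hypothetical names, plain linear interpolation) only illustrates how low-energy frames in a LOFAR spectrogram might be filled in from their nearest intact neighbors:

```python
def recover_breakpoints(frames, energy_floor=1e-3):
    """Linearly interpolate frames whose total energy falls below
    energy_floor, treating them as breakpoints in the LOFAR spectrum.
    frames: list of equal-length lists (one spectrum per time step)."""
    broken = {i for i, f in enumerate(frames) if sum(f) < energy_floor}
    out = [f[:] for f in frames]
    for i in broken:
        lo = next((j for j in range(i - 1, -1, -1) if j not in broken), None)
        hi = next((j for j in range(i + 1, len(frames)) if j not in broken), None)
        if lo is None or hi is None:
            continue  # breakpoint at the edge: nothing to interpolate from
        w = (i - lo) / (hi - lo)
        out[i] = [(1 - w) * a + w * b for a, b in zip(frames[lo], frames[hi])]
    return out

# A 5-frame toy spectrum with a dropout (all-zero frame) at index 2.
spec = [[1.0, 2.0], [1.0, 2.0], [0.0, 0.0], [3.0, 4.0], [3.0, 4.0]]
fixed = recover_breakpoints(spec)
print(fixed[2])  # [2.0, 3.0]: midpoint of frames 1 and 3
```

The enhanced spectrogram would then be fed to the CNN in place of the raw, broken one.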


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Junhua Wang ◽  
Yuan Jiang

For the problem of synthetic aperture radar (SAR) image target recognition, a method based on the combination of multilevel deep features is proposed. A residual network (ResNet) is used to learn multilevel deep features of SAR images. Based on a similarity measure, the multilevel deep features are clustered into several feature sets. Each feature set is then characterized and classified by joint sparse representation (JSR), and a corresponding output is obtained. Finally, the results from the different feature sets are combined by weighted fusion to obtain the target recognition result. The proposed method effectively combines the advantages of ResNet and JSR in feature extraction and classification, improving overall recognition performance. Experiments and analysis are carried out on the sample-rich MSTAR dataset. The results show that the proposed method achieves superior performance on 10 types of target samples under the standard operating condition (SOC), noise interference, and occlusion conditions, which verifies its effectiveness.
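The final weighted-fusion step can be illustrated with a minimal sketch. The scores and weights below are hypothetical; in the actual method each score vector would come from the JSR classifier applied to one clustered feature set:

```python
def weighted_fusion(score_sets, weights):
    """Combine per-class score vectors from several feature sets by a
    weighted sum, then return the winning class index and fused scores."""
    n_classes = len(score_sets[0])
    fused = [
        sum(w * scores[c] for w, scores in zip(weights, score_sets))
        for c in range(n_classes)
    ]
    return fused.index(max(fused)), fused

# Three feature sets voting over 4 target classes (each row sums to 1).
scores = [
    [0.1, 0.6, 0.2, 0.1],   # shallow features favor class 1
    [0.2, 0.3, 0.4, 0.1],   # mid-level features favor class 2
    [0.1, 0.5, 0.3, 0.1],   # deep features favor class 1
]
label, fused = weighted_fusion(scores, weights=[0.2, 0.3, 0.5])
print(label)  # 1
```

With normalized weights and normalized per-set scores, the fused scores also sum to one, so they remain interpretable as a combined posterior.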


Author(s):  
Corwin A. Bennett ◽  
Samuel H. Winterstein ◽  
Robert E. Kent

The terminology and literature in the area of image quality and target recognition are reviewed. An experiment is described in which subjects recognized strategic and tactical targets in aerial photographs with controlled image degradations. Some findings are: recognition performance is only moderate under representative conditions; target types differ widely in their recognizability; knowledge of a target's presence (briefing) greatly aids recognition; better resolution means better performance; enlarging the image such that a line of resolution subtends more than three minutes of arc hinders recognition; and grain size should be kept below 20 seconds of arc. It is suggested that the eventual application of the modulation transfer function approach to the measurement of image quality and target characteristics will enable a quantitative subsuming of the various quality-size relationships. More attention needs to be paid in recognition research to suitable task definition, target description, and subject selection.
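The three-minutes-of-arc and 20-seconds-of-arc guidelines are statements about visual angle, which can be checked with the standard subtended-angle formula. The sizes and viewing distance below are illustrative values, not figures from the experiment:

```python
import math

def visual_angle_arcmin(size, distance):
    """Visual angle (minutes of arc) subtended by an object of the given
    physical size viewed from the given distance (same units)."""
    return math.degrees(2 * math.atan(size / (2 * distance))) * 60

# A 0.5 mm resolution line viewed from 40 cm (all lengths in cm):
line_arcmin = visual_angle_arcmin(0.05, 40.0)
# Film grain of 0.04 mm at the same distance, in seconds of arc:
grain_arcsec = visual_angle_arcmin(0.004, 40.0) * 60

# Roughly 4.3 arcmin (above the 3-arcmin enlargement limit) and
# roughly 20.6 arcsec (just above the 20-arcsec grain guideline).
print(round(line_arcmin, 2), round(grain_arcsec, 1))
```

At these small angles the result is nearly linear in size/distance, so halving the enlargement roughly halves the subtended angle.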


Author(s):  
Sehchang Hah ◽  
Deborah A. Reisweber ◽  
Jose A. Picart ◽  
Harry Zwick
