Aptitude testing over the years

Interpreting
2011
Vol 13 (1)
pp. 5-30
Author(s):
Mariachiara Russo

In the present paper I review the existing literature on aptitude testing with a view to highlighting the main emerging themes: which qualities indicate an aptitude in a prospective interpreter, how these qualities may be measured and which types of test should be administered, the issue of valid and reliable testing, proposals for test designs, and, finally, descriptions of aptitude tests which have identified statistically significant predictors. The focus is on spoken language, but signed-language aptitude testing is also partially covered. Available results so far appear to show that interpreting-related cognitive skills and verbal fluency may be measured and may be predictive both for spoken-language and for signed-language interpreting candidates. In particular, the production of synonyms appears to be a strong aptitude predictor across several independent research projects.

2016
Vol 54
pp. 127-150
Author(s):
Adam Świątek

The concept of aptitude, and with it aptitude testing, that is, measuring one's predispositions to foreign languages, dates back to the 1920s, when the first aptitude tests, such as the Iowa Foreign Language Aptitude Examination or the Luria-Orleans Language Prognosis Test, came into existence (Carroll 1962). Since that time, aptitude tests have gone through a multitude of transformations, from simple testing tools that resembled intelligence tests to an advanced computer-based version of the most influential instrument, the Modern Language Aptitude Test (MLAT) developed by Carroll and Sapon in 1959 (Dörnyei and Skehan 2003). Beyond these, researchers from different countries have attempted to create their own counterparts to the MLAT, such as the Hungarian HUNLAT battery (Safar and Kormos 2008), the Polish version named TUNJO (Rysiewicz 2011), or the CANAL-F battery based on an artificial language (Kocic 2010). The main goal of this paper is therefore to provide a thorough theoretical analysis and review of the available aptitude testing batteries and to identify the differences and similarities between them. Moreover, the paper aims to describe the components of these aptitude tests and to assess how effectively such testing tools examine one's natural predispositions. Beyond the general knowledge about aptitude testing that is widely available nowadays, it is necessary to understand how the tests work and what they demand of a participant. Because aptitude tests are often compared with intelligence tests, a further purpose of this paper is to show that they constitute a different kind of tool, measuring different abilities and skills than intelligence-related instruments. To reach this goal, I examine the available tools, describe their properties and potential success rate, analyze their components, and compare them with the other batteries.


2016
Vol 20 (1)
pp. 42-48
Author(s):
Marcel R. Giezen
Karen Emmorey

Many bimodal bilinguals are immersed in a spoken-language-dominant environment from an early age and, unlike unimodal bilinguals, do not necessarily divide their language use between their two languages. Nonetheless, early ASL–English bilinguals retrieved fewer words in a letter fluency task in their dominant language than monolingual English speakers matched for vocabulary level. This finding demonstrates that reduced vocabulary size and/or frequency of use cannot completely account for bilingual disadvantages in verbal fluency. Instead, the retrieval difficulties likely reflect between-language interference. Furthermore, the finding suggests that a bilingual's two languages compete for selection even when they are expressed with distinct articulators.


In Language Assessment Across Modalities: Paired-Papers on Signed and Spoken Language Assessment, volume editors Tobias Haug, Wolfgang Mann, and Ute Knoch bring together, for the first time, researchers, clinicians, and practitioners from two different fields: signed language and spoken language. The volume examines theoretical and practical issues related to 12 topics ranging from test development and language assessment of bi-/multilingual learners to construct issues of second-language assessment (including the Common European Framework of Reference [CEFR]) and language assessment literacy in second-language assessment contexts. Each topic is addressed separately for spoken and signed language by experts from the relevant field. This is followed by a joint discussion in which the chapter authors highlight key issues in each field and their possible implications for the other field. What makes this volume unique is that it is the first of its kind to bring experts from signed and spoken language assessment to the same table. The dialogues that result from this collaboration not only help to establish a shared appreciation and understanding of the challenges experienced in the new field of signed language assessment but also breathe new life into, and provide a new perspective on, some of the issues that have occupied the field of spoken language assessment for decades. It is hoped that this will open the door to new and exciting cross-disciplinary collaborations.


2021
pp. 145-152
Author(s):
Amy Kissel Frisbie
Aaron Shield
Deborah Mood
Nicole Salamy
Jonathan Henner

This chapter is a joint discussion of key items presented in Chapters 4.1 and 4.2 related to the assessment of deaf and hearing children on the autism spectrum. From these chapters it becomes apparent that a number of aspects of signed language assessment are relevant to spoken language assessment. For example, there are several precautions to bear in mind when language assessments are obtained via an interpreter. Some of these precautions apply solely to deaf and hard-of-hearing (D/HH) children, while others are applicable to assessments with hearing children in multilingual contexts. Equally, some aspects of spoken language assessment can be applied to signed language assessment. These include the importance of assessing pragmatic language skills, assessing multiple areas of language development, differentiating between ASD and other developmental disorders, and completing the language evaluation within a developmental framework. The authors conclude with suggestions for both spoken and signed language assessment.


2021
pp. 329-332
Author(s):
Tobias Haug
Ute Knoch
Wolfgang Mann

This chapter is a joint discussion of key scoring issues in signed and spoken language assessment that were raised in Chapters 9.1 and 9.2. One aspect of signed language assessment that has the potential to stimulate new research in spoken second language (L2) assessment is the scoring of nonverbal speaker behaviors. This aspect is rarely represented in the scoring criteria of spoken assessments and in many cases is not even available to raters during the scoring process. The authors argue, therefore, for a broadening of the construct of spoken language assessment to include elements of nonverbal communication in the scoring descriptors. Additionally, the chapter discusses the importance of rater training for signed language assessments, the application of Rasch analysis to investigate possible reasons for disagreement between raters, and the need for research on rating scales.


2020
Vol 35 (6)
pp. 991-991
Author(s):
Vickery A
Moses J
Boese A
Maciel R
Lyu J

Abstract

Objective: The goal of this study was to examine the cognitive factors that account for omission errors on the Benton Visual Retention Test (BVRT) copy and memory trials, using factorial indices based on raw subtest scores of the Wechsler Adult Intelligence Scale-III (WAIS-III) and the Multilingual Aphasia Examination (MAE).

Method: Participants were referred for assessment at the VA Palo Alto Health Care System. One hundred and forty-three participants were sampled. BVRT omission error scores for the copy and memory trials were factor analyzed with age, education level, WAIS-III Digit Span Forward (DSpF), and Letter-Number Sequencing (LNS). These variables were then re-factored with the spoken language components of the MAE (naming, repetition, verbal fluency, and auditory comprehension).

Results: BVRT copy and memory omission scores were factorially grouped with age and inversely correlated with LNS. A second factor was composed of positive loadings on DSpF, LNS, and education. The BVRT Copy-and-Memory-Omissions-Age-LNS component was inversely and specifically related to the MAE measure of auditory comprehension. The Digit Span Forward-LNS-Education variable loaded strongly on the MAE Repetition component and secondarily on the MAE Verbal Fluency and Naming components.

Conclusions: BVRT copy and memory trial omission errors are strongly and specifically related to failure of auditory comprehension. Errors of this type are not related to the other three components of spoken language.


2001
Vol 4 (1-2)
pp. 29-45
Author(s):
Elena Antinoro Pizzuto
Paola Pietrandrea

This paper focuses on some of the major methodological and theoretical problems raised by the fact that there are currently no appropriate notation tools for analyzing and describing signed language texts. We propose to approach these problems taking into account the fact that all signed languages are at present languages without a written tradition. We describe and discuss examples of the gloss-based notation that is currently most widely used in the analysis of signed texts. We briefly consider the somewhat paradoxical problem posed by the difficulty of applying the notation developed for individual signs to signs connected in texts, and the more general problem of clearly identifying and characterizing the constituent units of signed texts. We then compare the use of glosses in signed and spoken language research, and we examine the major pitfalls we see in the use of glosses as a primary means to explore and describe the structure of signed languages. On this basis, we try to specify as explicitly as possible what can or cannot be learned about the structure of signed languages using a gloss-based notation, and to provide some indications for future work that may aim to overcome the limitations of this notation.


1971
Vol 29 (1)
pp. 331-337
Author(s):
James J. Asher

This study was designed to determine the reliability and validity of the selection interview used to predict student success after six weeks of intensive training in Arabic, German, Hungarian, Polish, Russian, Turkish, and Vietnamese. The criterion measures were grades and instructor ratings for listening, speaking, reading, and writing. The results indicated that interview reliability and validity were a function of range restriction: given minimal range restriction, interrater reliability was extremely high and validity was substantial. Intelligence and language aptitude tests showed even higher validity than the interview.

