Multimedia as a Test Method Facet in Oral Proficiency Tests

2009 ◽  
Vol 5 (1) ◽  
pp. 37-48 ◽  
Author(s):  
Seyyed Abbas Mousavi
1996 ◽  
Vol 13 (2) ◽  
pp. 125-150 ◽  
Author(s):  
Lyle F. Bachman ◽  
Fred Davidson ◽  
Michael Milanovic

Author(s):  
Erica Sandlund ◽  
Pia Sundqvist

Abstract Presumably most students strive to do well in school and on national tests. However, even in standardized tests, students' and examiners' expectations of what it means to 'do well' may diverge in ways that are consequential for performance and assessment. In this paper, we examine how students and teachers in an L2 English peer–peer speaking national test (9th grade) display their understandings of appropriate ways of dealing with pre-set discussion tasks. Using conversation analysis and 38 recorded national tests in English in Sweden, we demonstrate, for example, how teachers' displayed understandings of how tasks should be appropriately handled steer the interactional trajectory between students in particular directions. The analysis shows that participants spend considerable time negotiating understandings of the task-at-hand. We argue that, for valid assessment of oral proficiency, task understandings merit more attention, as task negotiations inevitably generate different conditions for different dyads and teachers.


1997 ◽  
Vol 20 (1) ◽  
pp. 21-41 ◽  
Author(s):  
Caterina Cafarella

Abstract In oral proficiency tests, trouble in interaction, such as misunderstanding, non-hearing, or lack of understanding, may cause breakdowns in communication. Within the question-answer framework of an oral proficiency test, this study investigates the interactive nature of spoken discourse between students and assessors when assessors perceive trouble in talk, with a focus on how the assessors accommodate to the students. A sample of twenty oral transcripts and tapes from the 1992 Victorian Certificate of Education (V.C.E.) Italian Common Assessment Task (C.A.T. 2) was randomly selected and examined. Using Conversation Analysis methodology, the study investigated, within repair sequences, the types of assessor accommodation (how assessors modified their utterances), the kinds of trouble perceived by assessors, what triggered assessor accommodation, and whether the accommodations facilitated student response and participation. The study has implications for assessor training: it highlights which strategies are most successful for ensuring student understanding, participation, and appropriate responses, and it demonstrates why and in which environments assessors accommodate.


1996 ◽  
Vol 54 ◽  
pp. 145-152
Author(s):  
Peter Paffen

In 1988 CITO started research into the feasibility of valid and reliable oral proficiency tests based on communicative principles. This was to meet the demand for a communicative speech test to be used in school-based examinations in secondary education. Using the Test of Spoken English as a guideline, tests for French, German and English were developed. Simultaneous research into the reliability and validity of the tests led to various adaptations of the original model. From 1992 onwards, oral proficiency tests for each of the three languages in question have been published at levels VBO/MAVO, HAVO and VWO (approximately: vocational, secondary modern and grammar school). The results of a user inquiry held in 1994 led to a number of further changes to improve the user-friendliness of the tests. Early in 1996 a new research project concerning the reliability and validity of the tests was started; the results will be published in the autumn of 1996.


1988 ◽  
Vol 10 (2) ◽  
pp. 149-164 ◽  
Author(s):  
Lyle F. Bachman

The primary problems in measuring speaking ability through an oral interview procedure are not those related to efficiency or reliability, but rather those associated with examining the validity of the interview ratings as measures of ability in speaking and of the uses that are made of such ratings. In order to examine all aspects of validity, the abilities measured must be clearly distinguished from the elicitation procedures, both in the design of the interview and in the interpretation of ratings.

Research from applied linguistics and language testing is consistent with the position that language proficiency consists of several distinct but related abilities. Research from language testing also indicates that the methods used to measure language ability have an important effect on test performance. Two frameworks, one of communicative language ability and the other of test method facets, are proposed as a basis for distinguishing abilities from elicitation procedures and for informing a program of empirical research and development.

The validity of the ACTFL Oral Proficiency Interview (OPI) as it is currently designed and used cannot be adequately examined, much less demonstrated, because it confounds abilities with elicitation procedures in its design, and it provides only a single rating, which has no basis in either theory or research. As test developer, ACTFL has yet to fully discharge its responsibility for providing sufficient evidence of validity to support the uses that are made of OPI ratings.


2018 ◽  
Vol 40 (6) ◽  
pp. 894-916 ◽  
Author(s):  
Takanori Sato ◽  
Tim McNamara

Abstract Applied linguists have developed complex theories of the ability to communicate in a second language (L2). However, the perspectives on L2 communication ability held by speakers who are not trained language professionals have been incorporated neither into theories of communication ability nor into the criteria for assessing performance on general-purpose oral proficiency tests. This potentially weakens the validity of such tests, because the ultimate arbiters of L2 speakers' oral performance are not trained language professionals. This study investigates the perspectives of these linguistic laypersons on L2 communication ability. Twenty-three native and non-native English-speaking linguistic laypersons judged L2 speakers' oral performances and verbalized the reasons for their judgments. The results showed that the participants focused not only on the linguistic aspects of a speaker's output but also on features to which applied linguists have paid less attention. Even where a speaker's linguistic errors were acknowledged, message conveyance and comprehensibility of the message contributed to the judgments. The study has implications for language testing and for the development of tests reflecting the construct of English as a lingua franca.


1977 ◽  
Vol 61 (3) ◽  
pp. 136
Author(s):  
Thomas O. Bell ◽  
Vincent Doyle ◽  
Brian L. Talbott ◽  
Joseph H. Matluck ◽  
Betty J. Mace-Matluck
