Oral Proficiency Interview
Recently Published Documents


TOTAL DOCUMENTS

74
(FIVE YEARS 7)

H-INDEX

14
(FIVE YEARS 1)

2021 ◽  
pp. 347-360
Author(s):  
Rachel McKee ◽  
Sara Pivac Alexander ◽  
Wenda Walton

The Sign Language Proficiency Interview (SLPI) was modeled on the Oral Proficiency Interview (OPI) in the 1980s in North America and has since been adapted for various national signed languages. To date, there has been no published analysis of interview discourse in the SLPI. This chapter examines the accommodative question strategies used by deaf interviewers in New Zealand SLPI interviews. Findings reveal that interviewers use interlocutor support strategies that parallel both the accommodative question types described for OPI interviews and features of spontaneous interaction between fluent and novice signers. Sixty-six percent of questions had accommodative features, and these were more frequent with lower-proficiency candidates. Evidence of interviewer “helping” strategies is useful for training interviewers and for refining the construct of the SLPI.
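
As a rough illustration of the kind of tally behind figures such as the 66% reported above, here is a minimal Python sketch that computes the share of accommodative questions overall and by candidate proficiency band. The coding scheme, band labels, and data are hypothetical placeholders, not the chapter's materials.

```python
# Minimal sketch: proportion of interviewer questions coded as
# accommodative, overall and per proficiency band. All data are invented.
from collections import defaultdict

# (candidate_level, question_was_accommodative) per coded question.
coded_questions = [
    ("novice", True), ("novice", True), ("novice", False),
    ("intermediate", True), ("intermediate", False),
    ("advanced", False), ("advanced", True), ("advanced", False),
]

by_level = defaultdict(lambda: [0, 0])  # level -> [accommodative, total]
for level, accommodative in coded_questions:
    by_level[level][1] += 1
    if accommodative:
        by_level[level][0] += 1

overall = (sum(acc for acc, _ in by_level.values())
           / sum(total for _, total in by_level.values()))
print(f"overall accommodation rate: {overall:.0%}")
for level, (acc, total) in by_level.items():
    print(f"{level:>12}: {acc / total:.0%} of {total} questions")
```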


Author(s):  
Dalia L. Garcia ◽  
Tamar H. Gollan

Abstract Objectives: The present study examined whether time-pressured administration of an expanded Multilingual Naming Test (MINT) would improve or compromise assessment of bilingual language proficiency and language dominance. Methods: Eighty Spanish–English bilinguals viewed a grid of 80 MINT Sprint pictures and were asked to name as many pictures as possible in 3 min in each language, in counterbalanced order. An Oral Proficiency Interview rated by four native Spanish–English bilinguals provided an independent assessment of proficiency level. Bilinguals also self-rated their proficiency, completed two subtests of the Woodcock-Muñoz, and took a speeded translation recognition test. We compared scores after 2 min, after a first pass through all the pictures, and after a second pass in which bilinguals were prompted to try to name skipped items. Results: The MINT Sprint, and a subset score comprising the original MINT items, correlated highly with Oral Proficiency Interview scores in predicting degree of language dominance, matching or outperforming all other measures. Self-ratings provided weaker measures (especially of degree of balance, i.e., bilingual index scores) and did not explain any unique variance in language dominance when considered together with second-pass naming scores. The 2-min scoring procedure did not improve, but also did not appear to hamper, assessment of absolute proficiency level, whereas prompting to name skipped items improved assessment of language dominance and naming scores, especially in the nondominant language. Conclusions: Time-pressured rapid naming saves time without significantly compromising assessment of proficiency level. However, breadth of vocabulary knowledge may be as important as retrieval speed for maximizing accuracy in proficiency assessment.
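
To make the dominance and balance measures concrete, the following minimal Python sketch derives a signed dominance score and a bilingual index (nondominant/dominant ratio) from per-language naming counts and correlates dominance with interview-based ratings. The data, field names, and the use of scipy are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: language-dominance and bilingual-index scores from
# per-language picture-naming counts, correlated with interview ratings.
# All records and column names are hypothetical.
from scipy.stats import pearsonr

# One record per bilingual: pictures named correctly in each language
# (out of 80 MINT Sprint items) plus an averaged interview dominance rating.
participants = [
    {"english": 72, "spanish": 55, "opi_dominance": 1.4},
    {"english": 60, "spanish": 66, "opi_dominance": -0.5},
    {"english": 70, "spanish": 69, "opi_dominance": 0.1},
    {"english": 45, "spanish": 74, "opi_dominance": -2.0},
]

# Dominance: signed difference (positive = English-dominant).
dominance = [p["english"] - p["spanish"] for p in participants]
# Bilingual index: nondominant / dominant score; 1.0 = perfectly balanced.
balance = [min(p["english"], p["spanish"]) / max(p["english"], p["spanish"])
           for p in participants]
opi = [p["opi_dominance"] for p in participants]

r, pval = pearsonr(dominance, opi)
print(f"dominance vs. interview rating: r = {r:.2f} (p = {pval:.3f})")
print("bilingual index scores:", [round(b, 2) for b in balance])
```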


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Gwyneth Gates ◽  
Troy L. Cox ◽  
Teresa Reber Bell ◽  
William Eggington

Abstract Two assumptions of speaking proficiency tests are that the speech produced is spontaneous and that the scores on those tests predict what examinees can do in real-world communicative situations. Therefore, when examinees memorize scripts for their oral responses, the validity of the score interpretation is threatened. While the American Council on the Teaching of Foreign Languages (ACTFL) Proficiency Guidelines identify rehearsed content as a major barrier to interviewees being rated above Novice High, many examinees still prepare for speaking tests by memorizing and rehearsing scripts in the hope that these "performances" will be awarded higher scores. To investigate this phenomenon, researchers screened 300 previously rated Oral Proficiency Interview-computer (OPIc) tests and found 39 examinees with at least one response that had been tagged as rehearsed. Each examinee’s responses were then transcribed, and the spontaneous and rehearsed tasks were compared. Articulation rates, a measure of temporal fluency, differed significantly between the spontaneous and rehearsed segments; however, the strongest evidence of memorization lay in the transcriptions and in the patterns that emerged within and across interviews. Test developers therefore need to be vigilant in creating scoring guidelines for rehearsed content.
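
As a sketch of the temporal-fluency comparison described above, the Python fragment below computes articulation rates (syllables per second of phonation time, pauses excluded) and runs a paired t-test between spontaneous and rehearsed responses. The numbers are invented for illustration; the study's data and exact operationalization are not reproduced here.

```python
# Minimal sketch: paired comparison of articulation rates between
# spontaneous and rehearsed responses. All values are invented.
from scipy.stats import ttest_rel

def articulation_rate(syllables: int, phonation_seconds: float) -> float:
    """Syllables produced per second of actual speaking time."""
    return syllables / phonation_seconds

# (syllables, phonation seconds) per examinee, for each response type.
spontaneous = [(110, 42.0), (95, 40.5), (130, 47.0), (88, 39.0)]
rehearsed = [(150, 41.0), (140, 39.5), (155, 44.0), (120, 37.5)]

spont_rates = [articulation_rate(s, t) for s, t in spontaneous]
rehea_rates = [articulation_rate(s, t) for s, t in rehearsed]

t, p = ttest_rel(spont_rates, rehea_rates)
print(f"spontaneous mean: {sum(spont_rates) / len(spont_rates):.2f} syll/s")
print(f"rehearsed mean:   {sum(rehea_rates) / len(rehea_rates):.2f} syll/s")
print(f"paired t = {t:.2f}, p = {p:.3f}")
```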


2019 ◽  
Vol 36 (3) ◽  
pp. 467-477 ◽  
Author(s):  
Dan Isbell ◽  
Paula Winke

2018 ◽  
Vol 35 (3) ◽  
pp. 357-375
Author(s):  
Steven Ross

Interactional competence has been variously defined as turn-taking ability, paralinguistic features of communication such as eye contact, gesture, and gesticulation, and listener responses. In existing assessment systems such as the oral proficiency interview (OPI), interactional competence is only rarely factored explicitly into the holistic task-based rating system. The present article explores the potential relevance of one facet of interactional competence, listener response, in contrasting interviews conducted in two structurally distinct languages, Japanese and English. Through micro-analysis of the interview interaction, the analysis considers whether the candidate’s listener responses, audible as backchannels, can be consistently identified as distinct from existing rating criteria such as fluency, accuracy, and coherence, and whether listener responses as interactional competence are distinct from, or subsumable under, these facets of speaker proficiency.
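
A crude starting point for identifying listener responses of the kind analyzed here is to flag short, free-standing backchannel tokens in a transcript, as in the Python sketch below. The token inventories and transcript are illustrative assumptions; genuine micro-analysis depends on audio, timing, and sequential context rather than word lists.

```python
# Minimal sketch: flagging candidate backchannels in an interview
# transcript. Inventories and transcript are illustrative placeholders.
BACKCHANNELS = {
    "en": {"mm", "mhm", "uh-huh", "yeah", "right", "okay"},
    "ja": {"un", "hai", "ee", "soo", "naruhodo"},
}

def count_backchannels(turns: list[tuple[str, str]], lang: str) -> int:
    """Count short, free-standing listener-response tokens in the
    candidate's turns. `turns` is a list of (speaker, utterance) pairs."""
    inventory = BACKCHANNELS[lang]
    count = 0
    for speaker, utterance in turns:
        if speaker != "candidate":
            continue
        tokens = [t.strip(".,!?") for t in utterance.lower().split()]
        if 0 < len(tokens) <= 2 and all(t in inventory for t in tokens):
            count += 1
    return count

transcript = [
    ("interviewer", "So you work in Osaka now?"),
    ("candidate", "un"),
    ("interviewer", "And the commute is quite long, I hear."),
    ("candidate", "ee, naruhodo"),
]
print(count_backchannels(transcript, "ja"))  # -> 2
```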


Author(s):  
Ethan Douglas Quaid

The present trend toward developing and using semi-direct speaking tests has been supported by test developers' and researchers' claims of their greater practicality, higher reliability, and concurrent validity with test scores on direct oral proficiency interviews. However, it is widely agreed within the language testing and assessment community that interchangeability must be investigated from multiple perspectives. This study compared test-taker output from a computer-based Aptis General speaking test and a purposively developed, identical face-to-face direct oral proficiency interview, using a counterbalanced research design. Within-subject analyses of salient output features identified in prior related research were completed. Results showed that test-taker output in the computer-based test was less contextualized, with marginally higher lexical density and syntactic complexity. Given these findings, the slight register shift indicated in the output may be viewed as non-consequential, or even advantageous, for semi-direct speaking tests.
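
Lexical density, one of the output features compared above, is commonly operationalized as the proportion of content words among all words. The Python sketch below shows one such operationalization using spaCy part-of-speech tags; the example sentences are invented, and the study's exact measures of density and syntactic complexity may differ.

```python
# Minimal sketch: lexical density as the share of content words (nouns,
# verbs, adjectives, adverbs, proper nouns) among all alphabetic tokens.
# Assumes spaCy and the en_core_web_sm model are installed.
import spacy

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV", "PROPN"}

def lexical_density(text: str) -> float:
    doc = nlp(text)
    words = [t for t in doc if t.is_alpha]
    content = [t for t in words if t.pos_ in CONTENT_POS]
    return len(content) / len(words) if words else 0.0

face_to_face = "Well, I mean, it was quite good, you know, really good."
computer_based = "The examination hall was crowded and extremely noisy."
print(f"face-to-face:   {lexical_density(face_to_face):.2f}")
print(f"computer-based: {lexical_density(computer_based):.2f}")
```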


2018 ◽  
Vol 7 (4) ◽  
pp. 150-154
Author(s):  
Ehsan Kazemi

This study investigates the effect of using a larger vocabulary in oral classroom presentations on the English speaking proficiency of students of English as a foreign language. The study was conducted with 30 freshman students taking a listening and speaking course at Semnan University. Throughout the 12-week course, students in the experimental group were asked to enrich the vocabulary used in their presentations, which was also the focus of the teacher’s evaluation in each session. At the end of the course, the students were interviewed to assess their speaking proficiency. Descriptive and inferential statistics were computed based on a modified version of an oral proficiency interview scale suggested by Penny Ur. The interview answers were recorded, and their fluency and accuracy were graded. The results suggest that students with vocabulary-rich production improved their English speaking proficiency more than the other students did.

Keywords: vocabulary size, speaking proficiency, production, fluency, accuracy, interview.
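
For the inferential step described above, a typical approach is an independent-samples t-test on the graded interview scores, as in this minimal Python sketch. The grades are invented placeholders; the study's raw data and exact procedure are not reproduced here.

```python
# Minimal sketch: independent-samples t-test comparing end-of-course
# interview grades between groups. All grades are invented.
from statistics import mean
from scipy.stats import ttest_ind

# Fluency grades on a 1-5 interview scale (one value per student).
experimental = [4.5, 4.0, 4.5, 3.5, 4.0, 4.5, 3.5, 4.0]
control = [3.5, 3.0, 4.0, 3.0, 3.5, 3.0, 3.5, 3.0]

t, p = ttest_ind(experimental, control)
print(f"experimental mean: {mean(experimental):.2f}")
print(f"control mean:      {mean(control):.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```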

