Oral Proficiency Interviews
Recently Published Documents

Total documents: 38 (last five years: 5)
H-index: 12 (last five years: 1)

2020 · Vol. 16 (1) · pp. 87-121
Author(s): Bárbara Eizaga-Rebollar, Cristina Heras-Ramírez

Abstract: The study of pragmatic competence has gained increasing importance within second language assessment over the last three decades. However, research on it in L2 language testing is still scarce. The aim of this paper is to examine the extent to which pragmatic competence, as defined by the Common European Framework of Reference for Languages (CEFR), has been accommodated in the task descriptions and rating scales of two of the most popular Oral Proficiency Interviews (OPIs) at the C1 level: Cambridge’s Certificate in Advanced English (CAE) and Trinity’s Integrated Skills in English (ISE) III. To carry out this research, OPI tests are first defined, highlighting how they differ from L2 pragmatic tests. After pragmatic competence in the CEFR is examined, with a focus on the updates in the new descriptors, the CAE and ISE III formats, structures and task characteristics are compared, showing that, while the formats and some characteristics differ, the structures and task types are comparable. Finally, we systematically analyse CEFR pragmatic competence in the task skills and rating scale descriptors of both OPIs. The findings show that the task descriptions incorporate mostly aspects of discourse and design competence. Additionally, each OPI is found to prioritise different aspects of pragmatic competence within its rating scale, with CAE focusing mostly on discourse competence and fluency, and ISE III on functional competence. Our study shows that the tests fail to fully accommodate all aspects of pragmatic competence in the task skills and rating scales, although the aspects they do incorporate follow the CEFR descriptors on pragmatic competence. It also reveals a mismatch between the task competences being tested and the rating scale. To conclude, some lines of further research are proposed.


Author(s): Ethan Douglas Quaid, Alex Barrett

Semi-direct speaking tests have become an increasingly favored method of assessing spoken performance in recent years. Evidence underpinning their continued development and use has rested largely on language testing and assessment researchers' claims, based on theoretical and empirical investigations from multiple perspectives, that they are interchangeable with more traditional, direct face-to-face oral proficiency interviews. This chapter initially provides background and research synopses of four significant test facets that have formed the bases for semi-direct and direct speaking test comparison studies. These are followed by a recent case study comparing test taker output from a computer-based Aptis speaking test and a purposively developed, identical face-to-face oral proficiency interview, which found a slight register shift that may be viewed as advantageous for semi-direct speaking tests. Finally, future research directions are proposed in light of the recent developments in semi-direct speaking test research presented throughout this chapter.


Author(s):  
Monika Sobejko

The paired format of a speaking test appears to offer a more interactionally symmetric alternative to traditionally used Oral Proficiency Interviews (OPIs) (e.g., Lazaraton, 1992, 1996; van Lier, 1989). This article focuses on two paired tasks, a simulated discussion and a problem-solving task, which have been developed to enhance the construct of the speaking test currently used at the Jagiellonian Language Centre of the Jagiellonian University.


Author(s):  
Ethan Douglas Quaid

The current trend towards developing and using semi-direct speaking tests has been supported by test developers' and researchers' claims of increased practicality, higher reliability, and concurrent validity with scores on direct oral proficiency interviews. However, it is universally agreed within the language testing and assessment community that interchangeability must be investigated from multiple perspectives. This study compared test taker output from a computer-based Aptis General speaking test and a purposively developed, identical face-to-face direct oral proficiency interview using a counterbalanced research design. Within-subject analyses of salient output features identified in prior related research were completed. Results showed that test taker output in the computer-based test was less contextualised, with minimally higher lexical density and syntactic complexity. Given these findings, the slight register shift indicated in the output may be viewed as inconsequential, or even advantageous, for semi-direct speaking tests.

