Interpretations of spoken utterance fluency in simulated and face-to-face oral proficiency interviews

2021, Vol 4 (1), pp. 1-18
Author(s): Ethan Quaid, Alex Barrett
2011, Vol 4 (2), pp. 169
Author(s): Parviz Birjandi, Marzieh Bagherkazemi

The pressing need for English oral communication skills in multifarious contexts today is a compelling impetus behind the large number of studies on oral proficiency interviewing. Given recently articulated concerns with the fairness and social dimension of such interviews, parallel questions have been raised about how examinees' oral communication skills can be assessed most fairly and what factors contribute to more skilled performance. This article sketches theory and practice on two competing formats of oral proficiency interviewing: the face-to-face (individual) format and the paired format. It first reviews the literature on the alleged disadvantages of the individual format, then enumerates the pros and cons of the paired format. It is argued that the paired format has indeed addressed some of the criticisms leveled at individual oral proficiency interviewing; however, treating it as an undisputed alternative to the face-to-face format remains open to question.


Author(s): Ethan Douglas Quaid, Alex Barrett

Semi-direct speaking tests have become an increasingly favored method of assessing spoken performance in recent years. Evidence underpinning their continued development and use has been largely contingent on language testing and assessment researchers' claims of their interchangeability with more traditional, direct face-to-face oral proficiency interviews, established through theoretical and empirical investigations from multiple perspectives. This chapter first provides background and research synopses of four significant test facets that have formed the bases for semi-direct and direct speaking test comparison studies. It then presents a recent case study comparing test taker output from a computer-based Aptis speaking test and a purposively developed identical face-to-face oral proficiency interview, which found a slight register shift that may be viewed as advantageous for semi-direct speaking tests. Finally, future research directions are proposed in light of the recent developments in semi-direct speaking test research presented throughout the chapter.


Author(s):  
Ethan Douglas Quaid

The present trend toward developing and using semi-direct speaking tests has been supported by test developers' and researchers' claims of their increased practicality, higher reliability, and concurrent validity with test scores from direct oral proficiency interviews. However, it is widely agreed within the language testing and assessment community that interchangeability must be investigated from multiple perspectives. This study compared test taker output from a computer-based Aptis General speaking test and a purposively developed identical face-to-face direct oral proficiency interview using a counterbalanced research design. Within-subject analyses of salient output features identified in prior related research were completed. Results showed that test taker output in the computer-based test was less contextualised, with minimally higher lexical density and syntactic complexity. Given these findings, the indicated slight register shift in output may be viewed as non-consequential, or even advantageous, for semi-direct speaking tests.


2020, Vol 16 (1), pp. 87-121
Author(s): Bárbara Eizaga-Rebollar, Cristina Heras-Ramírez

The study of pragmatic competence has gained increasing importance within second language assessment over the last three decades, yet research on it in L2 language testing remains scarce. The aim of this paper is to examine the extent to which pragmatic competence, as defined by the Common European Framework of Reference for Languages (CEFR), has been accommodated in the task descriptions and rating scales of two of the most popular Oral Proficiency Interviews (OPIs) at the C1 level: Cambridge’s Certificate in Advanced English (CAE) and Trinity’s Integrated Skills in English (ISE) III. To carry out this research, OPI tests are first defined, highlighting their differences from L2 pragmatic tests. After pragmatic competence in the CEFR is examined, with a focus on the updates in the new descriptors, the CAE and ISE III formats, structures, and task characteristics are compared, showing that, while the formats and some characteristics differ, the structures and task types are comparable. Finally, we systematically analyse CEFR pragmatic competence in the task skills and rating scale descriptors of both OPIs. The findings show that the task descriptions incorporate mostly aspects of discourse and design competence. Additionally, each OPI prioritises different aspects of pragmatic competence within its rating scale, with CAE focusing mostly on discourse competence and fluency, and ISE III on functional competence. Our study shows that the tests fail to fully accommodate all aspects of pragmatic competence in the task skills and rating scales, although the aspects they do incorporate follow the CEFR descriptors on pragmatic competence. It also reveals a mismatch between the task competences being tested and the rating scale. To conclude, some lines for further research are proposed.


Author(s): Hiroshi Hasegawa, Julian Chen, Teagan Collopy

This chapter explores the effectiveness of computerised oral testing on Japanese learners' test experiences and associated affective factors in a Japanese program at the Australian tertiary level. The study investigates (1) Japanese beginners' attitudes towards the feasibility of using a computer-generated program versus a tutor-fronted oral interview to assess their oral proficiency, and (2) the challenges and implications of computerised oral testing for Japanese beginners. It presents the initial findings from qualitatively analysed data collected through student responses to open-ended survey questions and follow-up semi-structured interviews. A thematic analysis approach was employed to examine student perceptions of the two test settings and their effects on students' oral performance in relation to test anxiety. Although computerised oral testing was generally perceived as beneficial for streamlining the test process and reducing learners' test anxiety, the findings also identified its limitations.


2007, Vol 39 (11), pp. 2045-2070
Author(s): Gabriele Kasper, Steven J. Ross
