Exploring Perceptions of Second Language Speech Fluency Through Developing and Piloting a Rating Scale for a Paired Conversational Task

2020 ◽  
Author(s):  
Kent Williams
2020 ◽  
Author(s):  
Parvaneh Tavakoli ◽  
Clare Wright

2020 ◽  
Vol 2 ◽  
pp. 1-15
Author(s):  
Aicha Rahal ◽  
Chokri Smaoui

Fossilization is said to be a distinctive characteristic of second language (L2) learning (Selinker, 1972, 1996; Han, 2004), and it is most pervasive among adult L2 learners (Han and Odlin, 2006). The phenomenon is characterized by a cessation of learning even though the learner is exposed to frequent input. Drawing on the first researcher's MA dissertation, a longitudinal study of phonetic fossilization, Han's Selective Fossilization Hypothesis (SFH) is used to analyze the fossilized phonetic errors obtained, with a particular focus on fossilized vowel sounds, in relation to L1 markedness and L2 robustness. The SFH is an analytical model for identifying both acquisitional and fossilizable linguistic features based on learners' first language (L1) markedness and second language (L2) robustness. The article first gives an overview of the theory of Interlanguage and the phenomenon of fossilization, and then introduces the SFH as an attempt to study fossilization scientifically. Specifically, it tests the predictive power of an L1 Markedness and L2 Robustness rating scale developed on the basis of Han's (2009) model. The study has pedagogical implications, as it offers an opportunity to raise teachers' awareness of this common linguistic phenomenon.
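The abstract does not reproduce the rating scale itself, so the toy sketch below only illustrates the general idea: each linguistic feature is rated on the two dimensions of L1 markedness and L2 robustness and located on the resulting grid, from which Han's (2009) model assigns acquisitional or fossilizable status. The scores, the 0.5 cut-off and the vowel contrasts are invented for the example.

```python
# Toy illustration only: place features on a hypothetical L1-markedness x
# L2-robustness grid of the kind implied by Han's (2009) Selective Fossilization
# Hypothesis. All scores, the cut-off and the feature names are invented; the
# mapping from grid cells to "acquisitional" vs "fossilizable" status is made by
# the model itself, not by this code.

def grid_cell(l1_markedness: float, l2_robustness: float, cutoff: float = 0.5) -> str:
    """Return the cell a feature falls into on the two-dimensional grid."""
    l1 = "high L1 markedness" if l1_markedness >= cutoff else "low L1 markedness"
    l2 = "high L2 robustness" if l2_robustness >= cutoff else "low L2 robustness"
    return f"{l1} / {l2}"

# Hypothetical ratings for two vowel contrasts from a learner corpus:
features = {"/i:/ vs /I/": (0.8, 0.3), "/e/ vs /ae/": (0.2, 0.9)}
for name, (markedness, robustness) in features.items():
    print(name, "->", grid_cell(markedness, robustness))
```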


2021 ◽  
pp. 335-346
Author(s):  
Kellie Frost

Discourse analysis has been widely used in the field of language testing. This chapter provides an overview of research examining features of test-taker discourse across different task types and under different task conditions and the extent to which these features align with rating scale criteria. Attention is also drawn to discourse analytic studies of the language demands of study and work domains and the extent to which test tasks can elicit relevant features. The chapter concludes by reflecting on the challenges posed to existing high-stakes test constructs by increasing diversity in universities and workplaces and the potential for discourse analytic approaches to establish stronger alignments between testing practices and the aspects of spoken discourse relevant and valued in communication.


2019 ◽  
Vol 36 (4) ◽  
pp. 505-526 ◽  
Author(s):  
Stefan O’Grady

This study investigated the impact of different lengths of pre-task planning time on performance in a test of second language speaking ability for university admission. In the study, 47 Turkish-speaking learners of English took a test of English language speaking ability. The participants were divided into two groups according to their language proficiency, which was estimated through a paper-based English placement test. They each completed four monologue tasks: two picture-based narrative tasks and two description tasks. In a balanced design, each test taker was allowed a different length of planning time before responding to each of the four tasks. The four planning conditions were 30 seconds, 1 minute, 5 minutes, and 10 minutes. Trained raters awarded scores to the test takers using an analytic rating scale and a context-specific, binary-choice rating scale designed specifically for the study. The rater scores were analysed using multifaceted Rasch measurement. The impact of pre-task planning on test scores was found to be influenced by four variables: the rating scale; the task type that test takers completed; the length of planning time provided; and the test takers' levels of proficiency in the second language. Increases in scores were larger on the picture-based narrative tasks than on the two description tasks. The results also revealed a relationship between proficiency and pre-task planning, whereby score increases reached statistical significance only for the lowest-level test takers. Regarding the amount of planning time, the 5-minute planning condition led to the largest overall increases in scores. The research findings offer contributions to the study of pre-task planning and will be of particular interest to institutions seeking to assess the speaking ability of prospective students in English-medium educational environments.
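The abstract names multifaceted (many-facet) Rasch measurement without reproducing the model, so for orientation the generic form of a many-facet Rasch model for rated performances is sketched below; the facets shown (test taker, task, rater, scale step) are the conventional ones, not necessarily the exact facet specification used in this study.

```latex
% Generic many-facet Rasch model (a sketch, not the study's exact specification)
\[
  \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]
% P_{nijk}: probability that test taker n is awarded category k (rather than k-1)
%           by rater j on task i
% B_n: ability of test taker n        D_i: difficulty of task i
% C_j: severity of rater j            F_k: difficulty of scale step k
```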


1996 ◽  
Vol 13 ◽  
pp. 55-79 ◽  
Author(s):  
Carolyn E. Turner ◽  
John A. Upshur

Abstract The two most common approaches to rating second language performance pose problems of reliability and validity. An alternative method utilizes rating scales that are empirically derived from samples of learner performance; these scales define boundaries between adjacent score levels rather than provide normative descriptions of ideal performances, and the rating process requires making two or three binary choices about a language performance being rated. A procedure that consists of a series of five explicit tasks is used to construct a rating scale. The scale is designed for use with a specific population and a specific test task. A group of primary school ESL teachers used this procedure to make two speaking tests, including elicitation tasks and rating scales, for use in their school district. The tests were administered to 255 sixth grade learners. The scales were found to be highly accurate for scoring short speech samples and quite efficient in the time required for scale development and rater training. The scales also exhibit content relevance in the instructional setting. Development of this type of scale is recommended for use in high-stakes assessment.
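As a concrete illustration of the binary-choice, boundary-definition rating process described above, the sketch below walks a rater through three yes/no boundary questions; the questions and the four-level score band are hypothetical stand-ins, not the content of the scales the teachers actually developed.

```python
# Hypothetical binary-choice rating sketch: each yes/no decision marks the
# boundary between adjacent score levels, so a rater reaches a score after at
# most three binary choices. The questions and the 1-4 band are invented here
# purely for illustration.

def rate_sample(meaning_clear: bool, mostly_accurate: bool, well_elaborated: bool) -> int:
    if not meaning_clear:        # boundary between levels 1 and 2
        return 1
    if not mostly_accurate:      # boundary between levels 2 and 3
        return 2
    if not well_elaborated:      # boundary between levels 3 and 4
        return 3
    return 4

# A rater answers the questions in order for each speech sample:
print(rate_sample(meaning_clear=True, mostly_accurate=True, well_elaborated=False))  # 3
```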


2015 ◽  
Vol 1 (1) ◽  
pp. 38-50 ◽  
Author(s):  
Rachael Ruegg

Abstract For many years, lexis and grammar have been considered separately in the context of teaching and learning English, and in the assessment of second language writing they continue to be treated as distinct. However, recent corpus studies have questioned this approach and argued that lexis and grammar are fundamentally inseparable. While the assessment of lexis and grammar as two distinct qualities lends face validity to assessment criteria, the corpus literature suggests that raters may not be able to accurately distinguish the two. The current study examines the ability of raters to separate lexis and grammar when using an analytic rating scale to assess timed essays. In this experiment, the lexical content of 27 essays was manipulated before rating in order to determine the effect of lexical accuracy, lexical variation and lexical richness on lexis and grammar scores. From the results, it seems that raters are sensitive to lexical accuracy, but not to lexical variation or lexical richness. In addition, the manipulation of lexical qualities had a significant effect on grammar scores but not on lexis scores, supporting the idea that raters find it challenging to distinguish lexis from grammar.
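For readers unfamiliar with the three lexical qualities manipulated in the study, the sketch below shows generic proxies that are common in L2 writing research (type-token ratio for lexical variation, share of off-list words for lexical richness, rate of error-free tokens for lexical accuracy); these are illustrative operationalizations, not necessarily the measures or manipulations used in the study itself.

```python
# Generic proxies for the three lexical qualities named above; illustrative only,
# not necessarily the operationalizations used in the study.

def lexical_profile(tokens: list[str], lexical_errors: int, basic_wordlist: set[str]) -> dict:
    types = {t.lower() for t in tokens}
    return {
        # lexical variation: ratio of distinct word types to total tokens
        "variation_ttr": len(types) / len(tokens),
        # lexical richness: share of types falling outside a basic high-frequency list
        "richness_offlist": sum(1 for t in types if t not in basic_wordlist) / len(types),
        # lexical accuracy: share of tokens free of word-choice or word-form errors
        "accuracy": 1 - lexical_errors / len(tokens),
    }

essay = "the raters judged whether lexis and grammar could be separated".split()
print(lexical_profile(essay, lexical_errors=1, basic_wordlist={"the", "and", "be", "could"}))
```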


RELC Journal ◽  
2021 ◽  
pp. 003368822097737
Author(s):  
Joshua Matthews

This article explores how the analysis of inter-rater discourse can be used to support collective reflective practice in second language (L2) assessment. To demonstrate, a focused case of the discourse between two experienced language teachers as they negotiate assessment decisions on L2 written texts is presented. Of particular interest was the discourse surrounding the raters’ most divergent assessment decisions, which in this case were those relating to Task Achievement. Thematic analysis indicated that rater discourse predominantly focused on explicit objective factors, primarily the L2 texts and the rating scale; however, rater discourse also focused on more subjective, rater-centred factors. The discourse surrounding these rater-centred factors was often central to the identification and resolution of rating disagreements. The paper argues that the subjective dimension of language assessment needs to be more directly and systematically reflected upon in language teaching contexts and that analysis of rater discourse, especially discourse focused on points of disagreement between raters, provides a valuable mechanism to facilitate this.


2020 ◽  
Vol V (III) ◽  
pp. 142-150
Author(s):  
Hafeez Ullah ◽  
Muzammila Akram ◽  
Qurat-ul-ain Shams

This research paper reviews the literature on developing English speaking skills through constructivist instructional approaches and provides guidelines for teachers in Pakistan on teaching speaking skills. The paper first analyzes the constructivist model and then validates the implementation of the creative teacher teaching model for speaking skills; it is expected that this model can contribute meaningfully to educating students in spoken English. A five-point rating scale was used as the research instrument for data collection. Three hundred and fifty-eight (358) second language learners from government and private colleges in District Muzaffargarh were selected randomly. The collected data were analyzed using SPSS. It was found that constructivism paves the way for learners in learning English as a second language. The study therefore suggests that teachers should use a constructivist approach in the classroom rather than a traditional approach.


2019 ◽  
Vol 10 (1) ◽  
pp. 141
Author(s):  
Wenjun Zhong

By reviewing previous studies on pronunciation rating scales in second language pronunciation assessment, this article aims to summarize research gaps and weaknesses so as to contribute to pronunciation rating scale research and development. Several research topics concerning constructs, criteria, descriptors, scale length, scale format and scale users, together with suggestions regarding participants, data collection methods and data analysis methods, are provided for future research.

