The differences among three-, four-, and five-option-item formats in the context of a high-stakes English-language listening test

2012 ◽  
Vol 30 (1) ◽  
pp. 99-123 ◽  
Author(s):  
HyeSun Lee ◽  
Paula Winke
2021 ◽  
Author(s):  
◽  
Diep Tran

<p>More than a decade ago, the Vietnamese Government announced an educational reform to enhance the quality of English language education in the country. An important aspect of this reform is the introduction of a localized test of English proficiency covering four language skills: listening, speaking, reading, and writing. This high-stakes English test is developed and administered by only a limited number of institutions in Vietnam. Although the validity of the test is a considerable concern for test-takers and test score users, it has remained an under-researched area. This study aims to partly address the issue by validating a listening test developed by one of the authorized institutions in Vietnam. In this thesis, the test is referred to as the Locally Created Listening Test, or the LCLT. Using the argument-based approach to validation (Kane, 1992, 2013; Chapelle, 2008), this research aims to develop a validity argument for the evaluation, generalization, and explanation inferences of the LCLT. Three studies were carried out to elicit evidence to support these inferences. The first study investigated the statistical characteristics of the LCLT test scores, focusing on the evaluation and generalization inferences. The second study shed light on the extent to which test items engaged the target construct. The third study examined whether test-takers’ scores on the LCLT correlated well with their scores on an international English test that measured a similar construct. Both the second and third studies were carried out to support the explanation inference. These three studies did not provide enough evidence to support the validity argument for the LCLT. The test was found to have major flaws that affected the validity of score interpretations. In light of the research findings, suggestions were given for improving future LCLTs.
At the same time, this research helped to uncover the impacts of certain text and task-related factors on the test-takers’ performance. Such insights led to practical implications for the assessment of second language listening in general. The results of this research also contributed to the theory and practice of test localization, a relatively new paradigm in language testing and assessment.</p>




2016 ◽  
Author(s):  
Josefina C. Santana ◽  
Arturo García Santillán ◽
Karen Michelle Ventura Michel ◽  
Teresa Zamora Lobato

RELC Journal ◽  
2021 ◽  
pp. 003368822097854
Author(s):  
Kevin Wai-Ho Yung

Literature has long been used as a tool for language teaching and learning. In the New Academic Structure in Hong Kong, it has become an important element of the senior secondary English language curriculum to promote communicative language teaching (CLT) with a process-oriented approach. However, as in many other English as a second or foreign language (ESL/EFL) contexts where high-stakes testing prevails, Hong Kong students are highly exam-oriented and expect teachers to teach to the test. Because there is no direct assessment of literature in the English language curriculum, many teachers find it challenging to balance CLT through literature with exam preparation. To address this issue, this article describes an innovation of teaching ESL through songs by ‘packaging’ it as exam practice to engage exam-oriented students in CLT. A series of activities derived from the song Seasons in the Sun was implemented in ESL classrooms in a secondary school in Hong Kong. The author’s observations and reflections, informed by teachers’ and students’ comments, suggest that the students were first motivated, at least instrumentally, by the relevance of the activities to the listening paper in the public exam, when they saw the similarities between the classroom tasks and past exam questions. Once motivated, the students were more easily engaged in a variety of CLT activities, which encouraged the use of English for authentic and meaningful communication. This article offers pedagogical implications for ESL/EFL teachers seeking to implement CLT through literature in exam-oriented contexts.


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Hossein Bozorgian

Current English-as-a-second/foreign-language (ESL/EFL) research has tended to treat each communicative macroskill separately due to space constraints, and the interrelationships among these skills (listening, speaking, reading, and writing) have not received due attention. This study examines, first, the existing relationships among the four dominant skills; second, the potential impact of reading background on overall language proficiency; and finally, the relationship between listening and overall language proficiency, as listening is often considered an overlooked or passive skill in second/foreign-language classroom pedagogy. The literature on language learning, however, has revealed that listening has salient importance in both first and second language learning. The purpose of this study is to investigate the role of each of the four skills in EFL learning and their interrelationships in an EFL setting. The results of 701 Iranian candidates who took the International English Language Testing System (IELTS) in Tehran demonstrate that the communicative macroskills show correlations ranging from moderate (reading and writing) to high (listening and reading). The findings also show that the candidates’ reading background helped them perform better on high-stakes tests, and, what is more, listening was strongly correlated with overall language proficiency.


2011 ◽  
Vol 28 (3) ◽  
pp. 367-382 ◽  
Author(s):  
Lorena Llosa

With the United States’ adoption of a standards-based approach to education, most attention has focused on the large-scale, high-stakes assessments intended to measure students’ mastery of standards for accountability purposes. Less attention has been paid to the role of standards-based assessments in the classroom. The purpose of this paper is to discuss key issues and challenges related to the use of standards-based classroom assessments to assess English language learners’ English proficiency. First, the paper describes a study of a standards-based classroom assessment of English proficiency in a large urban school district in California. Second, using this study as an example and drawing from the literature in language testing on classroom assessment, this paper highlights the major issues and challenges involved in using English proficiency standards as the basis for classroom assessment. Finally, the article outlines a research agenda for the field given current developments in the areas of English proficiency standards and classroom assessment.


2016 ◽  
Vol 32 (7) ◽  
pp. 936-968 ◽  
Author(s):  
Kendall King ◽  
Martha Bigelow

U.S. public schools are required to establish policies ensuring that English language learners have equal access to “meaningful education.” This demands that districts put in place mechanisms to determine student eligibility for specialized English language services. For most states, this federal requirement is fulfilled through the local administration of the WIDA–Access Placement Test (W-APT), arguably the most widely used, yet under-studied, English language assessment in the country. Through intensive participant observation at one urban new-student intake center, and detailed qualitative, discursive analysis of test administration and interaction, we demonstrate how the W-APT works as a high-stakes assessment, screener, and sorter, and how test takers and test administrators locally negotiate this test and enact this federal and state policy. Our analysis indicates that the W-APT is problematic in several respects, most importantly because the test does not differentiate adequately among students with widely different literacy skills and formal schooling experiences.


2016 ◽  
Vol 40 (1) ◽  
pp. 41-53 ◽  
Author(s):  
Melissa K. Driver ◽  
Sarah R. Powell

Word problems are prevalent on high-stakes assessments, and success on word problems has implications for grade promotion and graduation. Unfortunately, English Language Learners (ELLs) continue to perform significantly below their native English-speaking peers on mathematics assessments featuring word problems. Little is known about the instructional needs and performance of ELLs at risk of mathematics difficulty (MD). In the present study, an exploratory quasi-experimental design was used to investigate word-problem instruction for ELLs in a culturally and linguistically diverse public elementary school. Specifically, we studied the efficacy of a word-problem intervention for ELLs with MD (N = 9) that combined culturally and linguistically responsive practices with schema instruction (CLR-SI). The study is unique in that it combines research on effective instruction for ELLs and students with MD; CLR-SI has not been investigated for either ELLs or students with MD. Results have implications for teachers, administrators, and researchers of ELLs with MD.

