Test Review: Current options in at-home language proficiency tests for making high-stakes decisions

2020 ◽  
Vol 37 (4) ◽  
pp. 600-619 ◽  
Author(s):  
Daniel R. Isbell ◽  
Benjamin Kremmel

Administration of high-stakes language proficiency tests has been disrupted in many parts of the world as a result of the 2019 novel coronavirus pandemic. Institutions that rely on test scores have been forced to adapt, and in many cases this means using scores from a different test, or a new online version of an existing test, that can be taken at home. The switch to accepting at-home proficiency tests for high-stakes decisions raises many concerns for stakeholders, such as technological demands, exam security, and validity of score use. Along these lines, this thematic review addresses such concerns and features brief reviews of seven options in at-home proficiency testing: ACTFL Assessments, Duolingo English Test, IELTS Indicator, LanguageCert, TEF Express, TOEFL iBT Special Home Edition, and Versant. Considering at-home testing more broadly, we discuss key considerations for selecting an at-home test. We close with speculation on how at-home tests may shape language testing going forward: Beyond adapting to the current pandemic, at-home testing might address longstanding issues in access to language testing services and the representation of real-world communication practices in language tests.

Author(s):  
Rifat Kamasak ◽  
Mustafa Ozbilgin ◽  
Ali Rıza Esmen

There is a growing trend in using high-stakes standardised test scores to evaluate individuals' academic and professional language proficiency. Although these tests determine the fates of millions of students and job seekers across the world, several aspects of them, such as their design, ethical implementation, procedural fairness, and validity and reliability, are questioned by many linguists. This chapter aims to evaluate the most criticised social and technical aspects of high-stakes language tests from a pyramid scheme perspective. To achieve this aim, a number of empirical studies from the extant literature are reviewed, and some comments are provided in the conclusion.


2019 ◽  
Vol 10 (4) ◽  
pp. 166
Author(s):  
Aynur Ismayilli Karakoc

Different theoretical and empirical taxonomies of reading and listening comprehension (RC, LC) are available in the literature. Most language tests and tasks in English as a foreign or second language (EFL/ESL) coursebooks are based on the classifications of reading and listening subskills (micro-skills) offered in theory. However, these taxonomies have not been cross-checked to determine whether the theoretical subskills are practiced in ESL/EFL coursebooks and assessed in proficiency tests, nor have the shared and exclusive RC and LC subskills been listed in a unified fashion. To this end, the theoretical subskills offered in the Applied Linguistics literature, nine internationally popular EFL/ESL proficiency tests, and 25 widely used coursebook tasks were collected and cross-compared; repetitions were eliminated, and a final inclusive list of common and exclusive subskills was prepared. The findings suggested ten subskills common to reading and listening; seven subskills were exclusive to reading, and four were exclusive to listening. It is hoped that this list will be helpful to teachers developing their own tests and to coursebook developers preparing content materials.
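The cross-comparison described above (collecting subskill lists, eliminating repetitions, separating common from exclusive subskills) can be sketched as simple set operations. This is a minimal illustration only; the subskill names below are invented examples, not the study's actual taxonomy:

```python
# Illustrative only: treat each modality's subskill list as a set,
# then derive shared and exclusive subskills via set algebra.
reading_subskills = {
    "skimming for gist", "scanning for detail", "inferring word meaning",
    "recognizing discourse markers", "identifying main ideas",
}
listening_subskills = {
    "identifying main ideas", "inferring word meaning",
    "recognizing discourse markers", "recognizing stress and intonation",
}

common = reading_subskills & listening_subskills        # shared by RC and LC
reading_only = reading_subskills - listening_subskills  # exclusive to reading
listening_only = listening_subskills - reading_subskills  # exclusive to listening

print(sorted(common))
```

Building the sets from multiple sources (theory, tests, coursebooks) before intersecting them also removes repetitions automatically, since a set stores each subskill once.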


2008 ◽  
Vol 78 (2) ◽  
pp. 260-329 ◽  
Author(s):  
Ronald W. Solórzano

This article discusses the issues and implications of high stakes tests on English language learners (ELLs). As ELLs are being included in all high stakes assessments tied to accountability efforts (e.g., No Child Left Behind), it is crucial that issues related to the tests be critically evaluated relative to their use. In this case, academic achievement tests are analyzed relative to their norming samples and validity to determine their usefulness to ELLs. Also, commonly used language proficiency tests are examined relative to definitions of proficiency, technical quality, alignment with criteria for language classification and reclassification, and their academic predictive validity. Based on the synthesis of the literature, the author concludes that high stakes tests as currently constructed are inappropriate for ELLs, and most disturbing is their continued use for high stakes decisions that have adverse consequences. The author provides recommendations for addressing the issues related to high stakes tests and ELLs.


2019 ◽  
Vol 35 (1) ◽  
Author(s):  
Dinh Minh Thu

Validity, along with reliability, has long played a fundamental role in language testing and assessment research (Bachman & Palmer, 1996). This paper analyses basic theories and empirical research on language test validity in order to clarify the notion and classification of language test validity, the frameworks used for validation, and the trends in empirical research. Four key findings emerge from the analysis. Firstly, language test validity refers to an evaluative judgment of the quality of a language test, grounded in evidence about the integrated components of test content, criterion, and consequences, through the interpretation of the meaning and utility of test scores. Secondly, construct validity is the dominant term in modern validity classification, and its division into a priori and a posteriori validity can help researchers make a clearer validation choice. Thirdly, test validation can be grounded in the frameworks of Messick (1989), Bachman (1996) and Weir (2005). Finally, almost all of the empirical research on test validity addressed here concerns international and national high-stakes proficiency tests. The results point to gaps in test validation research to be filled in future studies.


2021 ◽  
Vol 15 (Supplement_1) ◽  
pp. S496-S497
Author(s):  
D Edwards ◽  
M Ibrahim ◽  
R Cooney ◽  
R Boulton

Abstract

Background: Faecal calprotectin (FC) testing has become a standard non-invasive tool to monitor disease control in Inflammatory Bowel Disease (IBD) (1). Reported patient compliance with submitting samples for hospital testing has been as low as 35% (2). We aimed to evaluate patient compliance with rapid home faecal calprotectin testing kits compared with hospital-based testing in our university teaching hospital.

Methods: 100 patients who had a diagnosis of IBD for at least 1 year and had attended the IBD clinic between January 2019 and August 2020 were selected. Our laboratory ceased performing FC testing in late March, and we introduced home testing (BÜHLMANN IBDoc). 50 patients who had, pre-pandemic, been requested to bring a stool sample to the laboratory for hospital-based ELISA testing were randomly selected. We compared these with 50 randomly selected patients who had home-based FC testing. Patients who were supplied with home testing kits received training from IBD nurses as well as online training materials. Data were collated retrospectively. Compliance was recorded if a result was documented within 6 weeks of the request.

Results: Prior to the introduction of home testing, only 52% of the patients sampled complied with hospital-based testing. This compared with a 70% compliance rate when home testing was requested (Figure 1).

Figure 1. Comparison of compliance rates between hospital and home faecal calprotectin test requests.

Conclusion: The improvement in FC testing compliance with the rapid home testing kit compared with laboratory-based testing illustrates the benefit of adopting home testing as the standard in future. The considerable increase in compliance with home testing may be due to less disruption to patients' personal lives (i.e., the ability to undergo testing at home), symptoms such as faecal incontinence preventing patients from delivering samples to hospital, and the COVID pandemic compelling patients to stay at home. Adopting rapid FC home testing as standard gives patients an increased locus of control over their care and provides healthcare professionals with rapid results, which will improve the management of IBD. The ability for patients to perform the test at home has obvious advantages during the COVID pandemic.


Author(s):  
Talip Karanfil ◽  
Steve Neufeld

High-stakes and high-volume English language proficiency tests typically rely on multiple-choice questions (MCQs) to assess reading and listening skills. Due to the Covid-19 pandemic, more institutions are administering MCQs via online assessment platforms, which facilitate shuffling the order of options within test items to minimize cheating. There is scant research on the role that the order and sequence of options play in MCQs, so this study examined the results of a paper-based, high-stakes English proficiency test administered in two versions. Each version had identical three-option MCQs but with different orderings of options. The test-takers were chosen to ensure a very similar profile of language ability and level across the groups who took the two versions. The findings indicate that one in four questions exhibited significantly different levels of difficulty and discrimination between the two versions. The study identifies order dominance and sequence priming as two factors that influence the outcomes of MCQs, both of which can accentuate or diminish the power of attraction of the correct and incorrect options. These factors should be carefully considered when designing MCQs for high-stakes language proficiency tests and when shuffling options in either paper-based or computer-based testing.
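The difficulty and discrimination indices compared across the two test versions are standard classical-test-theory statistics: difficulty is the proportion of test-takers answering an item correctly, and discrimination is commonly taken as the point-biserial correlation between the item score and the total score. A minimal sketch of how such indices are computed (the response data and function names below are illustrative, not the study's):

```python
from statistics import mean, pstdev

def item_difficulty(item_scores):
    # Classical difficulty index: proportion of test-takers answering correctly.
    return mean(item_scores)

def item_discrimination(item_scores, total_scores):
    # Point-biserial discrimination: Pearson correlation between the
    # dichotomous item score (0/1) and each test-taker's total score.
    mi, mt = mean(item_scores), mean(total_scores)
    cov = mean((i - mi) * (t - mt) for i, t in zip(item_scores, total_scores))
    return cov / (pstdev(item_scores) * pstdev(total_scores))

# Hypothetical 0/1 responses to the same item under two option orderings:
version_a = [1, 1, 1, 0, 1, 0, 1, 1]
version_b = [1, 0, 0, 0, 1, 0, 1, 0]
print(item_difficulty(version_a), item_difficulty(version_b))
```

Comparing these indices per item across the two versions, as the study does, shows whether reordering the options alone changed how hard or how discriminating an item was.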


2021 ◽  
Vol 59 (2) ◽  
pp. 99-113
Author(s):  
Elke Gilin ◽  
Jordi Heeren ◽  
Lieve De Wachter

Abstract

High-stakes university entrance language tests for L2 speakers are assumed to measure the language proficiency needed for academic success. Few studies have investigated the claim that L1 speakers automatically have a sufficient language proficiency level and consequently do not need to be tested. In Flanders, Belgium, there are no real entry requirements or tests for L1 speakers, except for Dentistry and Medicine. In that respect, it is interesting to investigate how Flemish secondary school students score on a language test designed for L2 students. This study focuses on the results of a small-scale study carried out with 50 pupils from the regular Flemish schooling system. All pupils took the computer test of the Interuniversity Test of Dutch as a Foreign Language (ITNA), one of the two officially recognized university entrance tests in Flanders, which assesses reading, listening and language in use. Interestingly, not all of the pupils pass the test. Pupils with a multilingual background or from a lower socio-economic background in particular seem to have more difficulty passing the threshold.

Samenvatting (translated from Dutch): Before L2 speakers can enrol in higher education, they must take a language proficiency test. Such an entrance test is assumed to measure the language level necessary for academic success. However, few studies have examined the claim that L1 speakers automatically possess the required proficiency level; after all, they do not have to take a test. In Flanders, Belgium, there are no compulsory or binding university entrance tests for L1 speakers, apart from the admission exams for Medicine and Dentistry. In that respect, it is interesting to examine how Flemish secondary school pupils score on a language proficiency test for non-native speakers. This study focuses on the results of a small-scale investigation with fifty final-year pupils from the regular Flemish schooling system. All pupils took the computer test of the Interuniversitaire Taaltest Nederlands voor Anderstaligen (ITNA), one of the two officially recognized university entrance tests in Flanders. Not all pupils passed the ITNA computer test; especially pupils with a multilingual background or a low socio-economic status appeared to have more difficulty reaching the cut-off score.


Author(s):  
Margherita Pelleriti

This paper will focus on the delicate issue of assessing the language proficiency of dyslexic students in a foreign language, namely English. These learners are usually considered a sub-group of test takers because of their specific learning differences. An overview of dyslexia will be presented, shedding light on the difficulties encountered by dyslexic students during their learning process. Some of the accommodations used during the learning process will be illustrated, along with the accommodations and/or modifications allowed during language testing. Attention will also be paid to the fairness and validity of accommodations. Moreover, the special arrangements allowed by international examination boards during their high-stakes tests will be analysed. Finally, this paper will illustrate what Italian law provides for dyslexic students and how it is applied at the University of Modena and Reggio Emilia, Italy.

Keywords: dyslexia; SpLDs; language testing; learning differences; accommodations; testing validity.

