CELBAN: A 10-Year Retrospective

2016, Vol 33 (2), pp. 69
Author(s): Catherine Lewis, Blanche Kingdon

This article provides a 10-year review by the test developers of the Canadian English Language Benchmark Assessment for Nurses (CELBAN™). From 2004 to 2014, the development, implementation, national administration, and operations of CELBAN and CELBAN-related products and services were the responsibility of the test developers and team at the Canadian English Language Assessment Services (CELAS) Centre at Red River College, Winnipeg, Manitoba. The CELAS Centre team experienced both challenges and opportunities during this 10-year period. As CELBAN expands, and in light of its current profile as a high-stakes language assessment tool, a time for reflection and review is warranted. This retrospective review of CELBAN provides an overview of its history, administration, operations, and growth, as well as challenges experienced and lessons learned by the CELAS Centre team. Further research and development ideas are also posited by the CELBAN test developers.

2020, Vol 23 (2), pp. 96-117
Author(s): Stefanie Baldwin, Liying Cheng

This qualitative validation study examines sixteen Internationally Educated Nurses' (IENs') accounts of the Canadian English Language Benchmark Assessment for Nurses (CELBAN) at two testing centres (Toronto and Hamilton). The study used both focus groups and one-on-one interviews to investigate the inferences drawn from the test and its consequences. Focus groups and interviews were conducted using an adapted interview guide from the TOEFL iBT investigation of test-taker accounts of construct representation and construct-irrelevant variance (DeLuca et al., 2013). While construct representation describes the degree of authenticity in the presentation of Canadian English language nursing tasks, construct-irrelevant variance refers to potential factors impacting the test-taking experience that might contribute to score variance not reflective of test-taker knowledge of the testing constructs (Messick, 1989, 1991, 1996). In this study, test-taker accounts of construct representation and construct-irrelevant variance constituted the data, which were coded and analyzed abductively via the sensitizing concepts derived from DeLuca et al. (2013) and from Cheng and DeLuca (2011) on examining test-takers' experience and their contribution to validity. Seven themes emerged, answering four research questions: How do IENs characterize their test experience? How do IENs describe the assessment constructs? What, if any, sources of construct-irrelevant variance (CIV) do IENs describe? Do IENs feel the language tasks are authentic? Overall, participants reported positive experiences with the CELBAN while identifying some possible sources of CIV. Given the CELBAN's widespread use for high-stakes decisions (a component of nursing certification and licensure), further research on IEN test-taker responses to construct representation and construct-irrelevant variance will remain critical to our understanding of the role of language competency testing for IENs.


2015, Vol 35 (12), pp. 1142-1147
Author(s): Paul J. Glew, Sharon P. Hillege, Yenna Salamonson, Kathleen Dixon, Anthony Good, ...

2021, Vol 11 (1)
Author(s): Md Shaiful Islam, Md Kamrul Hasan, Shahin Sultana, Abdul Karim, Mohammad Mosiur Rahman

The achievement of curriculum goals and objectives depends, to a large extent, on how assessment methods are designed, implemented, monitored, and evaluated. English language learning in Bangladesh has failed miserably, and this failure can be attributed in large part to ineffective assessment methods. This paper addresses various aspects and issues of English language assessment in Bangladesh in relation to English language learning as a curricular reform and the education policy of the country. The analysis revealed a persistent gap between the principles of assessment embedded in the curriculum and actual assessment practices. Furthermore, the curriculum, the learners, and the instructors have been hard hit by high-stakes testing and need to be liberated from this policy. The review concludes by recommending that teachers develop assessment literacy through teacher education programs, which are essential to helping them acquire knowledge, skills, professionalism, and assessment expertise.


2020, Vol 37 (4), pp. 523-549
Author(s): You-Min Lin, Michelle Y. Chen

This study examined the writing score and writing feature changes of 562 repeat test takers who took the Canadian English Language Proficiency Index Program–General (CELPIP–General) test at least three times, with a short (30–40 day) interval between the first and second attempts and a longer (90–180 day) interval between the first and third attempts. Analysis was conducted to uncover whether changes occurred at different testing durations (short vs. long) and whether the observed changes varied across repeaters' initial proficiency groups (low, mid, high). The writing scores measured by CELPIP bands showed great stability over the 6-month period, but the trends of development differed by proficiency group. Low-proficiency test takers were more likely to show faster observable score gains than the mid-proficiency group, whereas high-proficiency repeaters might not maintain their score levels at later attempts. Writing quality was analyzed using natural language processing (NLP) tools. Results suggested that for all proficiency groups, lexical features were more likely to improve over the 6-month period, with some measures showing improvement at 1 month; features in cohesion and syntactic sophistication, however, did not change significantly.
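To make the idea of NLP-derived writing features concrete, the following is a minimal, illustrative Python sketch of the kind of surface lexical measures such an analysis might compute for a single writing sample, such as type-token ratio and mean word length. The function name lexical_features and these particular measures are hypothetical stand-ins chosen for illustration; they are not the specific tools or feature set used in the study described above.

import re

def lexical_features(text):
    """Compute two simple lexical measures for one writing sample (illustrative only)."""
    # Lowercase the text and pull out word tokens (letters and apostrophes only).
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {"type_token_ratio": 0.0, "mean_word_length": 0.0}
    return {
        # Share of distinct words: a rough index of lexical diversity.
        "type_token_ratio": len(set(tokens)) / len(tokens),
        # Average word length in characters: a rough proxy for lexical sophistication.
        "mean_word_length": sum(len(t) for t in tokens) / len(tokens),
    }

# Example: comparing two hypothetical attempts by the same test taker.
first_attempt = "The nurse told the doctor about the patient and the patient felt better."
third_attempt = "The nurse promptly notified the physician and documented the patient's symptoms."
print(lexical_features(first_attempt))
print(lexical_features(third_attempt))

In practice, analyses of this kind typically track many such measures per sample and compare them across attempts and proficiency groups, as the study does for lexical, cohesion, and syntactic features.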


2009, Vol 29, pp. 80-89
Author(s): Alan Davies

English worldwide may be viewed in terms of spread and of diffusion. Spread refers to the use in different global contexts, such as publishing and examinations, of Standard British or American English. Diffusion describes the emergence of local varieties of English in, for example, India or Singapore, comparable to the earlier emergence of Australian English, Canadian English, and so on. In nonformal settings, interlocutors make use of their own local variety of English, their World Englishes (WEs). In formal settings, notably in English language assessment, the norm appealed to still seems to be that of Standard British or American English. Since English as a lingua franca (ELF) appears to make use only of the spoken medium, there is less demand for an ELF written norm. At present, what seems to hold back the use of local WE norms in formal assessment is less the hegemony of Western postcolonial and economic power and more the uncertainty of local stakeholders.


2013, Vol 14 (4), pp. 95-101
Author(s): Robert Kraemer, Allison Coltisor, Meesha Kalra, Megan Martinez, Bailey Savage, ...

English language learning (ELL) children suspected of having specific language impairment (SLI) should not be assessed using the same methods as monolingual English-speaking children born and raised in the United States. In an effort to reduce over- and under-identification of ELL children as having SLI, speech-language pathologists (SLPs) must employ nonbiased assessment practices. This article presents several evidence-based, nonstandardized assessment practices that SLPs can implement in place of standardized tools. As the number of ELL children SLPs come into contact with increases, so does the need for well-trained and knowledgeable SLPs. The goal of the authors is to present several well-established, evidence-based methods for assessing ELL children suspected of having SLI.


2016
Author(s): Josefina C. Santana, Arturo García Santillán, Karen Michelle Ventura Michel, Teresa Zamora Lobato
