Learner Identity, Learner Agency, and the Assessment of Language Proficiency: Some Reflections Prompted by the Common European Framework of Reference for Languages

2015 ◽  
Vol 35 ◽  
pp. 120-139 ◽  
Author(s):  
David Little ◽  
Gudrun Erickson

This article starts from the assumption that education is a process of “people shaping” designed to help learners extend and perhaps in some ways modify their identity while exploiting and developing their agency. This view is harmonious with the approach to language education that the Council of Europe has developed since the 1970s, and especially with its early commitment to learner autonomy and self-assessment. The approach adopted by the Common European Framework of Reference for Languages: Learning, teaching, assessment (CEFR) to the description of language proficiency clearly implicates the user-learner's identity and agency, which are also central to the CEFR's companion piece, the European Language Portfolio (ELP), in which self-assessment plays a key role. The article proposes that taken together, the CEFR and the ELP imply an assessment culture in which learning and assessment are reciprocally integrated. From the perspective thus established, the authors review some current trends in language assessment and their potential impact on learner identity and learner agency, focusing in turn on self-assessment, peer assessment, teacher assessment, and large-scale testing and assessment. The article concludes by arguing that although recent developments in language assessment pay significantly more attention to the learner than was previously the case, a great deal of work remains to be done to further increase the engagement of learner agency in processes of self-assessment and peer assessment and to align them with other forms of assessment.

Author(s):  
Iryna Perishko

The article deals with teachers' use of language assessment to guide students' language proficiency development and academic achievement, the benefits of formative assessment for guiding teaching and learning, and its characteristics. It is specifically noted that language assessment is a purposeful activity that gathers information about students' language development. Assessment can be intended to improve teaching and learning or to evaluate the outcomes of teaching and learning. Special attention is given to formative assessment, which is described as assessment for learning, in contrast to assessment of learning, i.e. summative assessment. The article focuses on the analysis of formative assessment and its procedures in English classes, such as questioning, quizzes, discussions, interviews, role plays, observations, teacher-made tests, checklists, self-reports, journals, and projects. Various types of formative assessment, namely self-assessment, peer assessment, and alternative assessment, are highlighted in the paper. The characteristics of teacher-based assessment that distinguish it from other forms of assessment are described. Teachers assess their students' learning to determine the effectiveness of their teaching. It should be emphasized that the quality of formative assessment depends on its beneficial uses and value for teaching and learning, and that teachers' judgments and classroom uses of assessments have profound effects on the lives and opportunities of students.


XLinguae ◽  
2020 ◽  
pp. 91-107
Author(s):  
Hussein Meihami ◽  
Rajab Esfandiari

Self-assessment and peer-assessment, as two alternative assessment procedures, have appealed to researchers in recent years and motivated L2 researchers to examine these two techniques. However, most studies have used them for summative purposes, and the formative contribution these two methods can make to learning has been neglected. This study was an attempt to find out how they contributed to learning gains. To that end, sixty Iranian male and female intermediate language learners at a language institute were randomly assigned to three treatment conditions: self-assessment and peer-assessment as experimental groups and teacher assessment as a control group. A language proficiency test was used to homogenize the language learners, and a posttest was administered to measure the gains language learners achieved after the treatment sessions. We analyzed the test data using descriptive and inferential statistics as implemented in SPSS, a general-purpose computer program for data analysis. Results from a one-way analysis of variance showed statistically significant differences among the score means of the three treatment groups. Post-hoc analyses revealed that language learners in the peer-assessment group outperformed those in the other two groups. The findings suggest that peer-assessment, as a cooperative technique, can be used in language classes to help students improve their writing abilities.
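As a rough illustration of the analysis described above, the sketch below runs a one-way ANOVA across three groups followed by Bonferroni-corrected pairwise comparisons as one common post-hoc choice. It uses Python with SciPy rather than SPSS, and the posttest scores are invented, since the study's raw data are not reported here.

    # Hypothetical posttest scores for the three treatment conditions
    # (invented for illustration; not the study's data).
    from itertools import combinations
    from scipy import stats

    scores = {
        "self_assessment":    [62, 65, 70, 68, 64, 71, 66, 69],
        "peer_assessment":    [72, 75, 78, 74, 79, 76, 73, 77],
        "teacher_assessment": [60, 63, 61, 65, 62, 64, 59, 66],
    }

    # One-way ANOVA across the three groups.
    f_stat, p_value = stats.f_oneway(*scores.values())
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

    # Post-hoc pairwise comparisons with a Bonferroni-corrected alpha.
    pairs = list(combinations(scores, 2))
    corrected_alpha = 0.05 / len(pairs)
    for a, b in pairs:
        t_stat, p = stats.ttest_ind(scores[a], scores[b])
        print(f"{a} vs {b}: t = {t_stat:.2f}, p = {p:.4f}, "
              f"significant: {p < corrected_alpha}")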


2017 ◽  
Vol 10 (1) ◽  
pp. 150-174
Author(s):  
Enikő Öveges

Hungary has witnessed several major attempts to improve the foreign language proficiency of students in primary and secondary school education since the political changes of the 1990s, as both international and national surveys reflect a dramatically low proportion of the Hungarian population self-reporting the ability to communicate in any foreign language at any level. Among other initiatives, a major one to boost students' foreign language competence has been the Year of Intensive Language Learning (YILL), introduced in 2004, which allows secondary schools to integrate an extra school year in which the majority of the contact hours are devoted to foreign languages. The major objectives of YILL are as follows: 1) to offer a state-financed and school-based alternative to widespread profit-oriented private language tuition, thereby 2) granting access to intensive language learning and 3) enhancing equal opportunities, and, as a result of the supporting measures, 4) to improve school language education in general. YILL is exemplary in that it was monitored from the launch of the first classes to the end of their five-year studies, involving three large-scale, mixed-method surveys and numerous smaller studies. Despite all the measures to assist the planning and the implementation, however, the program does not appear to be an obvious success. The paper introduces the background, reviews and synthesizes the related studies and surveys in order to evaluate the program, and argues that with more careful planning the YILL ‘hungaricum’ would yield significantly more benefits.


2009 ◽  
Vol 29 ◽  
pp. 145-167 ◽  
Author(s):  
Liz Hamp-Lyons ◽  
Jane Lockwood

Workplace language assessment poses special issues for language testers, but when it becomes very large scale it also poses issues for language policy. This article looks at these issues, focusing on the offshore and outsourcing (O&O) industry as it transitions from native-speaking (NS) countries to nonnative-speaking (NNS) destinations such as India and the Philippines. The impact is most obvious in call centers, where the ability of customer service representatives (CSRs) to communicate with ease with their native-English-speaking customers is central to business success and can be key to a nation's economy. Having reviewed the (limited) research in this area, we take the Philippines as our example to explore how government, academe, and the business sector are dealing with the language proficiency and personnel-training issues caused by the exponential growth of this industry. Appropriate language assessments that are practical, while also being valid and reliable, are critical if the Philippines is to retain its position in this emerging market. Currently, call centers in the Philippines complain of very poor recruitment rates due to poor language ability and of poor-quality measures of communication outcomes: but how do they assess these key areas? We describe and evaluate the current situation in call center language assessment in the Philippines and discuss possible ways forward, for the Philippines and for the O&O industry more broadly.


2020 ◽  
Vol 3 (2) ◽  
pp. 63-68
Author(s):  
Melyastuti Wulandari ◽  
Siti Sriyati ◽  
Widi Purwianingsih

The implementation of peer and self-assessment has become one of the alternatives for carrying out product-oriented performance assessment. The research aims to describe students' ability to use peer and self-assessment as a standard of performance assessment on the regulation system topic for senior high school students, in the hope that peer and self-assessment can serve as such a standard. The research applied a descriptive method involving 25 grade XI senior high school students. The research instruments were an implementation observation form, an online peer-assessment form and rubric, an online self-assessment form, a student response questionnaire, and a teacher assessment. Peer assessment was carried out by students in groups and compared with the teacher's assessment. The results showed that the implementation of peer and self-assessment went very well and that students were able to carry out peer and self-assessment well, which suggests that peer and self-assessment can serve as a standard of performance assessment. The comparison of the students' assessments with the teacher's assessment showed 84% agreement. The type of feedback most students gave was type C1 (direct correction). Student responses to the implementation of peer and self-assessment were very positive, and students reported that it was helpful.
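The paper does not state exactly how the 84% figure was computed; one plausible reading is simple exact-match agreement between student and teacher rubric scores, as in the minimal sketch below (the ratings shown are invented for illustration).

    # Invented rubric scores for illustration; one entry per assessed criterion.
    student_scores = [3, 4, 2, 4, 3, 3, 4, 2, 3, 4]
    teacher_scores = [3, 4, 2, 3, 3, 3, 4, 2, 4, 4]

    # Exact-match agreement between the two sets of ratings.
    matches = sum(s == t for s, t in zip(student_scores, teacher_scores))
    agreement = matches / len(teacher_scores) * 100
    print(f"Percentage agreement: {agreement:.0f}%")  # 80% for these invented scores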


2016 ◽  
Vol 6 (1) ◽  
pp. 136 ◽  
Author(s):  
Didem Kilic

This study focuses on the process of implementing self-, peer- and teacher-assessment in teacher education in order to examine ways of applying these assessment practices, and it specifically aims at finding out the level of agreement among pre-service teachers' self-, peer- and teacher-assessments of presentation performances. Pre-service teachers' presentation performances, each including the application of a teaching method, were assessed by peers, by the teacher, and by the presenters themselves through criteria-based assessment forms. The analysis of the data revealed statistically significant differences among self-, peer- and teacher-assessment scores. Peer-assessment of pre-service teachers' presentations was found to be significantly higher than teacher-assessment and self-assessment. With regard to the comparison of teacher-assessment and self-assessment scores, no significant differences were found between teacher- and self-assessments. In teacher training programmes, besides the summative approach, self-, peer- and teacher-assessments can be implemented in a formative way as useful practices for developing more successful performance, higher confidence, effective presenting skills, and the essential competencies required for effective teaching.


2013 ◽  
Vol 37 (5) ◽  
pp. 3 ◽  
Author(s):  
Judith Runnels

The Common European Framework of Reference-Japan (CEFR-J), like its original counterpart, the CEFR, uses illustrative descriptors (can-do statements) that describe communicative competencies to measure learner proficiency and progress. Language learners are placed at a CEFR-J level according to achievement on can-do statements gauged by self-assessment, by an external rater (such as a teacher), or by external test scores. Unlike the CEFR, the CEFR-J currently lacks widely available benchmarked performance samples for measuring student language proficiency, leaving administrators or teachers to estimate CEFR-J ability from test scores or from interactions with students. The current analysis measured ability scores from students and teachers on CEFR-J can-do statement achievement and compared them with scores on an in-house placement test. Students' self-assessment ratings did not correlate with their test scores, teachers varied in severity when making ability estimates for the same students, and no consistent response patterning between students and teachers was found. The results highlight that norming raters, controlling for severity, and training students in self-assessment are likely all required if the CEFR-J is to be used for measuring language learning progress, especially until established guidelines for estimating ability are available for the CEFR-J. The limitations of using the CEFR-J as an assessment tool and the assumption that teachers can accurately estimate student ability are discussed.

Japanese abstract (translated): The CEFR-Japan (CEFR-J), built on the Common European Framework of Reference for Languages (CEFR), is a system recently adopted by Japanese educational institutions to measure learner achievement and progress. Like the framework it is based on, the CEFR-J consists of descriptors (can-do statements) that describe communicative competencies of progressively increasing difficulty, and language learners are placed at a level according to their achievement on these descriptors. This placement is derived from learners' self-assessment, from assessment by another rater such as a teacher, or from external test results. Such assessments indicate a learner's CEFR-J level and the abilities the learner can typically be expected to demonstrate, but the outcome depends in part on the people and teachers who apply the system. If the purpose of using such a system is to standardize assessment levels, a high degree of consistency must therefore be maintained among the judgments of learners, teachers, and test-based evaluation. The analysis in this paper examines the consistency of students' and teachers' ability judgments on the CEFR-J descriptors and whether those judgments agree with scores on an in-house placement test. No notable relationship was found between student and teacher judgments, and students' self-assessment results did not correlate with their test scores. These results suggest that, if the CEFR-J is to be used to standardize assessment, norming raters and training students in self-assessment will be essential. The limitations of using the CEFR-J as an assessment tool, and issues concerning the notion of can-do proficiency inherent in a system of illustrative descriptors, are also discussed.
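As a rough sketch of the correlational part of this analysis, the snippet below computes the correlation between self-assessment ratings and placement test scores. The data are invented, and Spearman's rank correlation is used as one reasonable choice, since the abstract does not specify which coefficient was reported.

    # Invented data for illustration: each position pairs one student's
    # self-assessment rating with that student's placement test score.
    from scipy import stats

    self_ratings = [3.2, 4.1, 2.8, 3.9, 3.0, 4.4, 2.5, 3.7, 3.3, 2.9]
    test_scores  = [55,  48,  62,  51,  70,  47,  66,  58,  53,  49]

    rho, p_value = stats.spearmanr(self_ratings, test_scores)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")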

