Language testing

2004 ◽  
Vol 37 (1) ◽  
pp. 66-69

04–83 Akiyama, Tomoyasu (U. Melbourne, Australia). Assessing speaking: issues in school-based assessment and the introduction of speaking tests into the Japanese senior high school entrance examination. JALT Journal (Tokyo, Japan), 25, 2 (2003), 117–141.

04–84 Chiang, Steve (Yuan Ze University, Taiwan). The importance of cohesive conditions to perceptions of writing quality at the early stages of foreign language learning. System (Oxford, UK), 31 (2003), 471–484.

04–85 Escamilla, Kathy, Mahon, Elizabeth, Riley-Bernal, Heather and Rutledge, David (U. of Colorado, Boulder, USA). High-stakes testing, Latinos, and English language learners: lessons from Colorado. Bilingual Research Journal (Arizona, USA), 27, 1 (2003), 25–49.

04–86 Gorsuch, Greta (Texas Tech U., USA; Email: [email protected]). Test takers' experiences with computer-administered listening comprehension tests: interviewing for qualitative explorations of test validity. Calico Journal (Texas, USA), 21, 2 (2004), 339–371.

04–87 Hardcastle, Peter. How to not test language (Part 2). Language Testing Update (Lancaster, UK), 33 (2003), 28–35.

04–88 Hemard, D. and Cushion, S. (London Metropolitan University, UK; Email: [email protected]). Design and evaluation of an online test: assessment conceived as a complementary CALL tool. Computer Assisted Language Learning (Lisse, The Netherlands), 16, 2–3 (2003), 119–139.

04–89 Ishii, David N. and Baba, Kyoko (U. of Toronto, Canada; Email: [email protected]). Locally developed oral skills evaluation in ESL/EFL classrooms: a checklist for developing meaningful assessment procedures. TESL Canada Journal/Revue TESL du Canada (Burnaby, Canada), 21, 1 (2003), 79–96.

04–90 Iwashita, Noriko and Grove, Elizabeth (University of Melbourne, Australia). A comparison of analytic and holistic scales in the context of a specific-purpose speaking test. Prospect (Sydney, Australia), 18, 3 (2003), 25–35.

04–91 Lee, Yong-Won (Educational Testing Service, Princeton, NJ, US; Email: [email protected]). Examining passage-related local item dependence (LID) and measurement construct using Q3 statistics in an EFL reading comprehension test. Language Testing (London, UK), 21, 1 (2004), 74–100.

04–92 Qian, David D. (Hong Kong Polytechnic U., Hong Kong; Email: [email protected]) and Schedl, Mary (Educational Testing Service, Princeton, NJ, US). Evaluation of an in-depth vocabulary knowledge measure for assessing reading performance. Language Testing (London, UK), 21, 1 (2004), 28–52.

04–93 Rea-Dickins, Pauline (University of Bristol, UK). Classroom assessment of English as an additional language: Key Stage 1 contexts – summary of research findings. Language Testing Update (Lancaster, UK), 33 (2003), 48–53.

04–94 Rodgers, Catherine, Meara, Paul and Jacobs, Gabriel (U. of Wales Swansea, UK). Factors affecting the standardisation of translation examinations. Language Learning Journal (London, UK), 28 (Winter 2003), 49–54.

1985 ◽  
Vol 55 (2) ◽  
pp. 195-220 ◽  
Author(s):  
James Crouse

The College Entrance Examination Board and the Educational Testing Service claim that the Scholastic Aptitude Test (SAT) improves colleges' predictions of their applicants' success. James Crouse uses data from the National Longitudinal Study of high school students to calculate the actual improvement in freshman grade point averages, college completion, and total years of schooling resulting from colleges' use of the SAT. He then compares those predictions with predictions based on applicants' high school rank. Crouse argues that the College Board and the Educational Testing Service have yet to demonstrate that the high costs of the SAT are justified by its limited ability to predict student performance.
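The crux of Crouse's argument is incremental predictive validity: how much adding SAT scores raises R² beyond what high school rank alone already yields. A minimal Python sketch of that comparison on synthetic data (the coefficients, sample size and variable names are assumptions for illustration, not Crouse's figures or method):

```python
# Illustrative sketch (not Crouse's actual analysis): incremental predictive
# validity of the SAT over high school rank, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
hs_rank = rng.normal(size=n)                          # standardized high school rank
sat = 0.7 * hs_rank + rng.normal(scale=0.7, size=n)   # SAT correlates strongly with rank
gpa = 0.5 * hs_rank + 0.1 * sat + rng.normal(scale=0.8, size=n)  # freshman GPA

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_rank = r_squared(hs_rank[:, None], gpa)
r2_both = r_squared(np.column_stack([hs_rank, sat]), gpa)
print(f"R^2 (rank only):  {r2_rank:.3f}")
print(f"R^2 (rank + SAT): {r2_both:.3f}")
print(f"Incremental R^2:  {r2_both - r2_rank:.3f}")  # small when predictors overlap
```

When two predictors overlap as heavily as rank and SAT scores do, the incremental R² is small, which is the pattern Crouse's cost-benefit argument turns on.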


2005 ◽  
Vol 38 (2) ◽  
pp. 91-93

05–178 Carrel, Patricia L. (Southern Illinois U, USA; [email protected]), Dunkel, Patricia A. (Georgia State U, USA; [email protected]) & Mollaun, Pamela (Educational Testing Service, USA; [email protected]), The effects of notetaking, lecture length and topic on a computer-based test of ESL listening comprehension. Applied Language Learning (Monterey, CA, USA) 14.1 (2004), 83–105.

05–179 Cheng, Hsiao-fang (National United U, Taiwan, China), A comparison of multiple-choice and open-ended response formats for the assessment of listening proficiency in English. Foreign Language Annals (Alexandria, VA, USA) 37.4 (2004), 544–555.

05–180 Grindsted, Annette (U of Southern Denmark, Denmark; [email protected]), Interactive resources used in semi-structured research interviewing. Journal of Pragmatics (Amsterdam, the Netherlands) 37.7 (2005), 1015–1035.

05–181 Huempfner, Lisa (Illinois State U, USA), Can one size fit all? The imperfect assumptions of parallel achievement tests for bilingual students. Bilingual Research Journal (Tempe, AZ, USA) 28.3, 379–399.

05–182 Kondo-Brown, Kimi (U of Hawaii at Manoa, USA), Investigating interviewer–candidate interactions during oral interviews for child L2 learners. Foreign Language Annals (Alexandria, VA, USA) 37.4 (2004), 602–615.

05–183 Lokai Bischof, Deborah (Educational Testing Service, USA), Baum, David I., Casabianca, Jodi M., Morgan, Rick, Rabiteau, Kathleen A. & Tateneni, Krishna, Validating AP modern foreign language examinations through college comparability studies. Foreign Language Annals (Alexandria, VA, USA) 37.4 (2004), 616–622.

05–184 Mathews, Thomas J. & Hansen, Cheryl M. (Weber State U, USA), Ongoing assessment of a university foreign language program. Foreign Language Annals (Alexandria, VA, USA) 37.4 (2004), 521–533.

05–185 Milton, James (U of Wales Swansea, UK; [email protected]), Comparing the lexical difficulty of French reading comprehension exam texts. Language Learning Journal (Rugby, UK) 30 (2004), 5–11.

05–186 Shultz, Deborah L. (Mechanicsburg Middle School, USA) & Willard-Holt, Colleen, Promoting world languages in middle school: the achievement connection. Foreign Language Annals (Alexandria, VA, USA) 37.4 (2004), 623–629.

05–187 Tan, Kelvin (Temasek Polytechnic, Singapore), Does student self-assessment empower or discipline students? Assessment & Evaluation in Higher Education (Abingdon, UK) 29.6 (2004), 651–662.


2010 ◽  
Vol 27 (3) ◽  
pp. 335-353 ◽  
Author(s):  
Sara Cushing Weigle

Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study approaches validity by comparing human and automated scores on responses to TOEFL® iBT Independent writing tasks with several non-test indicators of writing ability: student self-assessment, instructor assessment, and independent ratings of non-test writing samples. Automated scores were produced using e-rater®, developed by Educational Testing Service (ETS). Correlations between both human and e-rater scores and non-test indicators were moderate but consistent, providing criterion-related validity evidence for the use of e-rater along with human scores. The implications of the findings for the validity of automated scores are discussed.
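The criterion-related evidence described here rests on simple pairwise correlations: each scoring method (human, e-rater) against each non-test indicator. A minimal Python sketch of that computation; all scores below are invented placeholders, not the study's data:

```python
# Illustrative only: correlating human and automated essay scores with
# non-test indicators of writing ability, as in a criterion-related
# validity check. Every number here is a made-up placeholder.
import numpy as np

human  = np.array([3.0, 4.5, 2.5, 5.0, 3.5, 4.0, 2.0, 4.5])   # human ratings
erater = np.array([3.2, 4.4, 2.8, 4.8, 3.3, 4.1, 2.3, 4.6])   # automated scores
indicators = {
    "self_assessment":   np.array([3, 5, 2, 5, 4, 4, 2, 4]),
    "instructor_rating": np.array([3, 4, 3, 5, 3, 4, 2, 5]),
}

for name, crit in indicators.items():
    r_human  = np.corrcoef(human,  crit)[0, 1]   # Pearson r, human vs. criterion
    r_erater = np.corrcoef(erater, crit)[0, 1]   # Pearson r, e-rater vs. criterion
    print(f"{name:18s}  human r={r_human:.2f}  e-rater r={r_erater:.2f}")
```

If the two columns of correlations are similar in size, the automated scores track the external criteria about as well as human scores do, which is the "moderate but consistent" pattern the study reports.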


2016 ◽  
Vol 11 (1) ◽  
pp. 02 ◽  
Author(s):  
Tutku Basöz ◽  
Dilek Tüfekci Can

Semiotics in foreign language learning has recently achieved some prominence as a theoretical foundation for language teaching and learning. Although there has been a good deal of research on semiotics in foreign language learning, the practical use of semiotics in the preschool classroom remains largely unexamined. What is more, the effectiveness of computers for vocabulary learning among preschool children is still an obscure area, one that attracts the attention of researchers, scholars and practitioners. The present study therefore investigates whether there is a significant difference in preschool children's vocabulary gain between computer-assisted vocabulary instruction and traditional vocabulary instruction, both adopting a semiotic approach. The sample comprised 35 preschool children (aged 5) studying at Balıkesir University Necatibey Faculty of Education Kindergarten. In this quasi-experimental study, the children were assigned to a computer-assisted vocabulary instruction group (n = 16) or a traditional vocabulary instruction group (n = 19), serving as the experimental and control groups respectively. Before the experiment, the children were given a pre-test measuring how many of the target vocabulary items they already knew. While the experimental group learned the target vocabulary through computer-assisted instruction, the control group was taught the same items via traditional vocabulary instruction. After the experiment, the same test was administered as the post-test. The results showed that both instruction types were successful in teaching vocabulary and that there was no significant difference between the groups in vocabulary gain. Keywords: vocabulary learning; semiotic approach; computer assisted vocabulary instruction; preschool children; foreign language learning
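A two-group pre-test/post-test design like this is commonly analysed by comparing gain scores (post minus pre) across groups with an independent-samples t-test. A hedged Python sketch of that analysis; the score vectors are invented, since the abstract does not report raw data or the exact test used:

```python
# Illustrative sketch of a gain-score comparison for a two-group
# pre-/post-test design; all scores are invented, not the study's data.
import numpy as np
from scipy import stats

# Number of target words known per child, before and after instruction.
pre_call  = np.array([2, 3, 1, 4, 2, 3, 2, 1, 3, 2, 4, 1, 2, 3, 2, 1])  # CALL group, n=16
post_call = np.array([7, 8, 6, 9, 7, 8, 6, 5, 8, 7, 9, 6, 7, 8, 7, 6])
pre_trad  = np.array([3, 2, 1, 3, 2, 4, 2, 1, 3, 2, 3, 1, 2, 4, 2, 3, 1, 2, 3])  # n=19
post_trad = np.array([8, 7, 5, 8, 7, 9, 6, 5, 8, 7, 8, 6, 7, 9, 7, 8, 5, 7, 8])

gain_call = post_call - pre_call
gain_trad = post_trad - pre_trad

# Welch's t-test: does not assume equal variances between the groups.
t, p = stats.ttest_ind(gain_call, gain_trad, equal_var=False)
print(f"mean gain CALL={gain_call.mean():.2f}, traditional={gain_trad.mean():.2f}")
print(f"t={t:.2f}, p={p:.3f}  ->  no significant difference if p > .05")
```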


2017 ◽  
Vol 2 (1) ◽  
pp. 273 ◽
Author(s):  
Muzakki Bashori

The integration of the computer in the service of FL (Foreign Language) learning is expected to be inevitable in the future. This is seemingly due to (a) its considerable affordances for EFL (English as a Foreign Language) learners, (b) the characteristics of today's learners as Generation Z (González-Lloret & Ortega, 2014), and (c) the widespread use of the internet in the 21st century. This situation is transforming CALL (Computer-Assisted Language Learning) into WFLL (Web-Facilitated Language Learning) as an alternative paradigm for EFL teachers and learners. Furthermore, TBLT (Task-Based Language Teaching) is likely to serve as a pedagogical framework in designing the Web for the purpose of FL learning. The present study was therefore mainly aimed at (a) developing a teacher-designed learning website, I Love Indonesia, and (b) investigating how Indonesian high school learners of English with different attitudes towards CALL (positive/moderate/negative) perceive WFLL (agree/disagree) and perform web-based activities. Descriptive statistics, the IF function in Excel, correlation analysis, and an independent-samples t-test were employed in the study. The findings showed that (a) the website seems to benefit EFL learners in certain aspects, and (b) learners with positive attitudes are likely to perceive the use of the website for language learning more positively (agree) than learners with moderate or negative attitudes (disagree). Further studies on the effectiveness of the website for EFL learners should consider a greater number of learners over a longer period in order to shed some light on learners' language development. Keywords: attitude, perception, task-based language teaching, web-facilitated language learning
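The analysis pipeline here is simple enough to sketch: band each learner's attitude score into negative/moderate/positive (the role the Excel IF function presumably played) and correlate attitude with perception. A hypothetical Python sketch; the cutoffs and all scores are assumptions for illustration, not the study's:

```python
# Illustrative sketch of the banding-plus-correlation analysis; the cutoff
# values and all scores below are invented for demonstration.
import numpy as np
from scipy import stats

attitude   = np.array([4.2, 3.1, 2.4, 4.5, 3.8, 2.1, 3.3, 4.0])  # mean CALL-attitude score (1-5)
perception = np.array([4.0, 3.4, 2.8, 4.6, 3.9, 2.5, 3.1, 4.2])  # mean WFLL-perception score (1-5)

def band(score, low=2.5, high=3.5):
    """Mimics a spreadsheet IF chain: negative / moderate / positive."""
    if score < low:
        return "negative"
    return "moderate" if score < high else "positive"

groups = [band(a) for a in attitude]
r, p = stats.pearsonr(attitude, perception)   # correlation analysis
print(groups)
print(f"attitude-perception r={r:.2f}, p={p:.3f}")
```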


2016 ◽  
Vol 5 (4) ◽  
pp. 190
Author(s):  
Esim Gursoy ◽  
Tuba Arman

With the increasing need to learn languages as a result of globalization, there is great demand on the part of learners to communicate in a second or foreign language, a demand reinforced both by governments and by parents. Among the many aspects of foreign language learning, affective factors are heavily researched because they depend on context, individual differences, cultural background, teaching methodology and so on, which causes variation in results. The current research focuses on test anxiety as one of the major affective factors. It aims to identify the level of test anxiety and its relationship with gender, grade level, and academic achievement. Moreover, the causes of test anxiety were investigated according to students' own perceptions. A test anxiety scale and semi-structured interviews were used to gather the quantitative and qualitative data. The overall results showed that the participants had a moderate level of test anxiety. Females were found to be more anxious than males only in some respects; low achievement scores provoked test anxiety on a few items; and 9th graders were found to be more anxious than 10th graders. According to participants' own perceptions, test validity, time limits, teacher attitudes, test techniques, proctors, test length, the testing environment and the clarity of test instructions were the causes of test anxiety.
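Findings such as "females were more anxious than males only in some respects" come from item-level group comparisons. A hedged sketch of one way to run them in Python, adding an effect size per item; the items and ratings are invented, not the study's scale or data:

```python
# Illustrative item-level comparison of anxiety ratings by gender, with
# Cohen's d as the effect size. All item names and ratings are invented.
import numpy as np
from scipy import stats

items = {
    "time_limit": (np.array([4, 5, 3, 4, 5, 4]), np.array([3, 3, 2, 4, 3, 2])),  # (female, male)
    "proctors":   (np.array([2, 3, 2, 3, 2, 3]), np.array([2, 3, 2, 2, 3, 3])),
}

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

for name, (female, male) in items.items():
    t, p = stats.ttest_ind(female, male, equal_var=False)  # Welch's t-test per item
    print(f"{name:12s} t={t:.2f} p={p:.3f} d={cohens_d(female, male):.2f}")
```

A difference that is significant on one item but not others is exactly the "only in some respects" pattern the abstract describes.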

