Testing Proficiency in Writing a Foreign Language

Author(s): Robert Lado

Discussions of testing proficiency in writing a foreign language are usually limited to techniques; without a rationale or a set of criteria for what is to be tested, the result is confusion. Partly as a consequence of this lack of a rationale, we are faced with a dearth of techniques in use. Essentially we find only two: objective short-answer tests, which are distrusted, and composition tests, which are frustrating because of problems of scoring and the time involved.

Superficial clichés are freely applied to these two techniques. Judgments are made on outward appearances (face validity) without reference to linguistic content or to empirically tested validity. On the basis of appearance, objective tests are criticized because presumably (1) they do not force the student to think, (2) they do not require the student to organize and present information, (3) they are only recognition, multiple-choice tests, and (4) they are elementary in comparison with the business of writing a free composition in the foreign language.

2019, Vol 5 (1), pp. 85-97
Author(s): George S. Ypsilandis, Anna Mouti

One of the main concerns of language testers in designing and implementing tests is selecting the scoring method for the evaluation instrument; this choice indirectly reveals the tester’s ethical beliefs and personal stance on testing pedagogy. This is another study challenging the typical 1-0 scoring method in Multiple Choice Tests (MCT); for experimental purposes, it implements a simple polychotomous partial-credit scoring system on official tests administered for the National Foreign Language Exam System in Greece (NFLES-Gr). The study supports earlier findings on the subject by the same authors in analogous smaller-scale studies. The MCT items chosen were completed by a total of 1,922 subjects at different levels of the NFLES-Gr test for Italian as an L2 in Greece. Results clearly indicate that the tested scoring procedure provides refined insights into students’ interlanguage levels, enhances sensitivity in scoring, and may make a significant difference for testees close to the pass/non-pass borderline, without jeopardizing test reliability.
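The abstract does not specify the weighting scheme the authors used; the sketch below is a minimal, hypothetical illustration of how a polychotomous (option-weighted) partial-credit rule differs from dichotomous 1-0 scoring on the same responses. All option weights, item data, and function names are illustrative assumptions, not the actual NFLES-Gr instrument.

```python
# Minimal sketch contrasting dichotomous 1-0 scoring with a polychotomous
# (option-weighted) partial-credit scheme for a multiple-choice test.
# The option weights below are illustrative assumptions, not the values
# used in the NFLES-Gr study.

from typing import Dict, List

# Hypothetical 4-option item bank: each item maps options to partial credit.
# The keyed option earns full credit; "near-miss" distractors earn a
# fraction; clearly wrong options earn nothing.
ITEM_WEIGHTS: List[Dict[str, float]] = [
    {"A": 1.0, "B": 0.5,  "C": 0.0,  "D": 0.0},
    {"A": 0.0, "B": 1.0,  "C": 0.25, "D": 0.0},
    {"A": 0.0, "B": 0.0,  "C": 1.0,  "D": 0.5},
]

def dichotomous_score(responses: List[str]) -> float:
    """Classic 1-0 scoring: credit only for the keyed option."""
    return sum(1.0 if weights[choice] == 1.0 else 0.0
               for weights, choice in zip(ITEM_WEIGHTS, responses))

def partial_credit_score(responses: List[str]) -> float:
    """Polychotomous scoring: each option carries its own weight."""
    return sum(weights[choice]
               for weights, choice in zip(ITEM_WEIGHTS, responses))

if __name__ == "__main__":
    answers = ["B", "C", "D"]  # a testee who picks plausible near-misses
    print(dichotomous_score(answers))     # 0.0: all "wrong" under 1-0
    print(partial_credit_score(answers))  # 1.25: partial knowledge visible
```

Under such a scheme, a testee whose distractor choices are systematically near misses scores above zero, which is the kind of borderline information a 1-0 rule discards.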


2007, Vol 34 (4), pp. 219-225
Author(s): William R. Balch

Undergraduates studied the definitions of 16 psychology terms, expecting either a multiple-choice (n = 132) or short-answer (n = 122) test. All students then received the same multiple-choice test, requiring them to recognize the definitions as well as novel examples of the terms. Compared to students expecting a multiple-choice test, those expecting a short-answer test performed similarly on example questions but significantly better on definition questions. Students in these two test-expectation conditions also differed in several subjective ratings of their study and test taking. The results suggest that students do not typically study in an optimal way for multiple-choice tests.


1968
Author(s): J. Brown Grier, Raymond Ditrichs

2009
Author(s): Jeri L. Little, Elizabeth Ligon Bjork, Ashley Kees

1997, Vol 74 (10), p. 1185
Author(s): Gaspard T. Rizzuto, Fred Walters
