Supplementary methods for assessing student performance on a standardized test in elementary algebra

Author(s):  
Alvin Baranchik ◽  
Barry Cherkas
1991 ◽  
Vol 19 (3) ◽  
pp. 18-25 ◽  
Author(s):  
James N. Wetzel ◽  
Dennis M. O'Toole ◽  
Edward L. Millner

1974 ◽  
Vol 6 (4) ◽  
pp. 353-366
Author(s):  
Eugene Jongsma

A random sample of passages was drawn from standardized reading comprehension tests for fourth-grade students. The number and types of language patterns found in the test passages were determined through linguistic analysis. The patterns identified on the tests did not reflect the patterns used most frequently in the oral language of fourth-grade children. When the test passages were rewritten using a larger percentage of high-frequency oral language patterns and administered to comparable groups of students, no significant difference in comprehension performance was observed between students taking the revised passages and those taking the intact standardized test passages.


2005 ◽  
Vol 13 ◽  
pp. 36
Author(s):  
Laurence A. Toenjes

A paper appearing in this journal by Klein, Hamilton, McCaffrey, and Stecher (2000) attempted to raise serious questions about the validity of the gains in student performance measured by Texas' standardized test, the Texas Assessment of Academic Skills (TAAS). Part of their analysis was based on the results of three tests that they administered to 2,000 fifth-grade students in 20 Texas schools. Although Klein et al. acknowledged that the 20 schools were not selected in a way that would ensure they were representative of the nearly 3,000 Texas schools enrolling fifth graders, they nonetheless offered generalizations based on the results for those schools. The purpose of this short paper is to demonstrate just how unrepresentative the 20 schools used by Klein et al. actually were, and in so doing to cast doubt on some of their conclusions.


Author(s):  
Richard Griffin ◽  
Courtney Svec ◽  
Rita Caso ◽  
Jeff Froyd

Since 1988, with support from the Foundation Coalition, one of the Engineering Education Coalitions funded by the National Science Foundation, the Dwight Look College of Engineering has invested considerable time and energy in renewing its sophomore engineering courses. The excitement that accompanies the receipt of a large NSF-funded program produces an initial enthusiasm and energy that is contagious for both faculty and students. The initial results of a “pilot” program are almost always improved course content, better student attitudes, better retention, and so on. However, when the rush wears off and the new courses have to be institutionalized, what happens? What can be learned from consistent, long-term efforts to assess and improve the sophomore engineering science courses? This paper focuses on the introductory sophomore materials science course, Principles of Materials Engineering (ENGR 213). Using data collected from students and evaluations of student performance as measured by course grades and a standardized test, the authors examine what has been learned since the inception of the course.


Author(s):  
Tiffany Williams

The main objective of this study was to understand how teacher leadership and teacher quality affected fourth-grade student performance on the Louisiana Educational Assessment Program (LEAP). The participants were six Louisiana schools that taught fourth grade and had been labeled academically unacceptable for at least three years. A review of the data indicates that, although students' standardized test scores are one piece of information school leaders can use to make judgments about teacher effectiveness, such scores should be only part of a comprehensive evaluation of the role of teacher leadership and teacher quality in student performance. Results varied according to the teacher-quality characteristic examined and its impact on student performance.


2004 ◽  
Vol 13 (2) ◽  
pp. 182-190 ◽  
Author(s):  
Shurita Thomas-Tate ◽  
Julie Washington ◽  
Jan Edwards

Accurate identification of students with poor phonological awareness skills is important to providing appropriate reading instruction. This is particularly true for segments of the population, such as African American students, who have a history of reading failure. The purpose of this study was to examine the performance of a group of African American first-grade students from low-income families on a standardized test of phonological awareness. Fifty-six African American first graders were given the Test of Phonological Awareness (TOPA; J. K. Torgesen & B. R. Bryant, 1994). Mean student performance on the TOPA was significantly below expected norms and negatively skewed. However, students' mean performance on a test of basic reading skills indicated performance within normal limits. Outcomes are discussed relative to the validity and predictive power of standardized phonological assessment instruments, in this case, the TOPA, for use with African American students and the possible influence of dialect on performance.
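As a purely illustrative sketch, not drawn from the study's data, the kind of result reported here (a sample mean tested against published norms, together with a check of the distribution's skewness) could be computed as follows. The normative mean of 100, the standard deviation of 15, and the simulated scores are assumptions for illustration only, not values from the TOPA or from this study.

```python
# Illustrative sketch only: testing whether a sample of standard scores falls
# significantly below an assumed normative mean, and checking skewness.
# The simulated scores are NOT the TOPA data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical standard scores for 56 first graders: shifted below the norm
# and negatively skewed (a long tail of low scores), for illustration only.
scores = 100 - rng.gamma(shape=2.0, scale=8.0, size=56)

NORM_MEAN = 100  # assumed normative mean of a standard-score scale (SD = 15)

# One-sample t-test against the normative mean; one-sided p-value for the
# hypothesis that the sample mean lies below the norm.
t_stat, p_two_sided = stats.ttest_1samp(scores, NORM_MEAN)
p_below_norm = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2

# Sample skewness: a negative value indicates a longer left tail.
skewness = stats.skew(scores)

print(f"sample mean = {scores.mean():.1f}")
print(f"t = {t_stat:.2f}, one-sided p (mean < norm) = {p_below_norm:.4f}")
print(f"skewness = {skewness:.2f}")
```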


Author(s):  
Arsaythamby Veloo ◽  
Ruzlan Md-Ali ◽  
Rozalina Khalid

Changes in the education system invariably alter assessment modes and practices, and they raise expectations among the stakeholders who are directly responsible for the accountability of assessment administration. Professional education organizations currently maintain codes of conduct, principles, and standards for assessment administration that outline the responsibilities needed to keep the assessment administration system accountable and continually improving. Accordingly, assessment administration practices should be aligned with the institution’s assessment policies, and assessment administrators should collaborate with institutions to develop and unify assessment standards and practices, paying particular attention to accountability, including the maintenance of assessment security and integrity. Assessment practices are expected to be fair, equitable, and unbiased in measuring students’ performance, and this depends heavily on accountable assessment administration. In the past, assessment practices focused largely on the cognitive aspects of standardized paper-and-pencil tests, so relatively few issues concerning assessment administration were discussed. There is now a need to adapt assessment administration to current assessment modes and practices, as most countries have adopted school-based assessment. Within a school-based assessment system, teachers’ accountability for student assessment becomes even more important: accountability for students’ performance in the classroom environment rests with teachers. Therefore, to address the challenges of assessment administration going forward, close attention must be paid to the accountability of the assessment administration process. Assessment administrators are accountable and are expected to display honesty, integrity, due care, validity, and reliability, and to ensure that fairness is observed and maintained throughout assessment. The assessment process shapes how teachers orchestrate and design assessment administration practices and how they address issues of fairness, in the eyes of stakeholders, when determining student performance. Assessment administration involves processes that must be well planned, implemented, and continuously monitored, and there are standardized, documented rules and procedures that assessment administrators must follow to ensure that accountability is maintained.

