Panther Peer: A Web-Based Tool for Peer and Self Evaluation

Author(s):  
Venkatesh Venkataramanujam ◽  
Pierre Larochelle

Panther Peer is a novel web-based tool for peer evaluation. It has been developed at the Florida Institute of Technology to enable students (specifically those involved in capstone design projects) to give one another anonymous feedback on their team performance. Panther Peer is simple to implement and completely automated; it automates the process of peer evaluation and minimizes the workload for both instructors and students. With the benefits of automation, students receive feedback more quickly, and the reduced workload for course instructors encourages them to promote peer evaluations. The primary advantage of the system is the feedback students receive from their peers, which helps them identify their weaknesses and focus on their strengths. The automated process makes the collection and dissemination of information highly efficient. From the students' peer evaluations, instructors can gain a fair idea of each team's progress and intervene where deemed necessary.

2011 ◽  
Vol 4 (5) ◽  
pp. 21 ◽  
Author(s):  
John Kevin Doyle ◽  
Ralph D. Meeker

The authors assign semester- or quarter-long team-based projects in several Computer Science and Finance courses. This paper reports on our experience in designing, managing, and evaluating such projects. In particular, we discuss the effects of team size and of various peer evaluation schemes on team performance and student learning. We report statistical measures of the students' peer evaluations: do they always rate each other strongly or weakly? What are the means and ranges? More importantly, we discuss why we introduced these peer evaluations and what effect they have had on student commitment and performance. We discuss a small number of cases where student participation was poor, and relate this to the peer evaluation process.


2019 ◽  
Vol 25 (5/6) ◽  
pp. 334-347
Author(s):  
Ernesto Tavoletti ◽  
Robert D. Stephens ◽  
Longzhu Dong

Purpose
This study aims to assess the effect of peer evaluations on team-level effort, productivity, motivation and overall team performance.

Design/methodology/approach
This study explores the impact of a peer evaluation system on 895 multicultural and transnational global virtual teams (GVTs) composed of 5,852 university students from 130 different countries. The study uses a quasi-experiment in which the group project is implemented under two conditions over two sequential iterations. In the first condition, team members do not receive peer evaluation feedback during the project. In the second condition, participants complete detailed peer evaluations of their team members and receive feedback weekly for eight consecutive weeks.

Findings
Results suggest that when peer evaluations are used in GVTs during the project, teams show: higher levels of group effort; lower levels of average productivity and motivation; and no clear evidence of improved team performance. The results cast doubt on the benefits of peer evaluation within GVTs, as the practice fails to reach its main objective of improving team performance and generates some negative internal dynamics.

Practical implications
The major implication of the study for managers and educators using GVTs is that the use of peer evaluations during the course of a project does not appear to improve objective team performance and reduces team motivation and perceived productivity, despite increases in teams' perceptions of effort and performance.

Originality/value
This study contributes to the scant literature regarding the impact of peer evaluation systems on group-level dynamics and performance outcomes.


Author(s):  
Mahmoud Dinar ◽  
Yong-Seok Park ◽  
Jami J. Shah

Conventional syllabi of engineering design courses either do not pay enough attention to conceptual design skills or lack an objective assessment of those skills to show students' progress. During a semester-long course on advanced engineering product design, we assigned three major design projects to twenty-five students. For each project, we asked them to formulate the problems in the Problem Formulator web-based testbed. In addition, we collected sketches for all three design problems, feasibility analyses for the last two, and a working prototype for the final project. We report the students' problem formulation and ideation in terms of a set of nine problem formulation characteristics and ASU's ideation effectiveness metrics, respectively. We discuss the limitations caused by the choice of the design problems, and how the progress of a class of students during a semester-long design course resulted in a convergence in the sets of metrics that we have defined to characterize problem formulation and ideation. We also review the results of students in a similar course, which we reported last year, in order to find common trends.


Author(s):  
Manal Abdulrahman Al-Mandharia ◽  
Mohammed Nassir Al-Riyami

The aim of this study was to investigate the degree of mathematics teachers' practice of authentic evaluation strategies and tools in the basic education stage in the Sultanate of Oman. The researcher prepared a questionnaire to measure the degree of use of authentic evaluation strategies and tools. The sample consisted of 266 teachers: 211 teachers from the first cycle and 55 teachers from the second cycle of basic education schools in the province of Muscat. After statistical processing using averages, frequencies and statistical tests, the results of the study showed that the teachers' use of authentic evaluation strategies and tools in both the first and second cycles of the basic education schools was high. The results showed that the strategies of self-evaluation and peer evaluation are the most widely used by the teachers. The strategy of evaluating performance by concept maps obtained the lowest degree of use, although its use was still at a high level. The results also showed statistically significant differences in the degree of practicing the authentic evaluation strategies, and these differences are in favor of teachers with more than ten years of experience. The results showed no statistically significant differences between teachers of the first and second cycles in the practice of authentic evaluation strategies and tools. Consequently, the researcher recommended that the institutions responsible for the preparation of new teachers add training programs on authentic evaluation strategies and tools. The researcher also recommended conducting studies on the difficulties teachers face in practicing all authentic evaluation strategies and tools in a balanced manner.


Author(s):  
Chiu Man Yu ◽  
Denis Gillet ◽  
Sandy El Helou ◽  
Christophe Salzmann

In the framework of the PALETTE European research project, the Swiss Federal Institute of Technology in Lausanne (EPFL) is designing and experimenting with eLogbook, a Web-based collaborative environment designed for communities of practice. It enables users to manage joint activities, share related assets and gain contextual awareness. In addition to the original Web-based access, an email-based eLogbook interface has been developed. The purpose of this lightweight interface is twofold. First, it eases eLogbook access when using smartphones or PDAs. Second, it eases eLogbook acceptance for community members hesitant to learn an additional Web environment. Thanks to the proposed interface, members of a community can benefit from the ease of use of an email client combined with the power of an activity and asset management system, without additional burden. The Web-based eLogbook access can be kept for supporting further community evolution, when participation becomes more regular and activities become more complex. This chapter presents the motivation, design and incentives of the email-based eLogbook interface.


Author(s):  
Martin E. Bollo

Professional registration (P.Eng.) applicants in B.C. must use the Engineers & Geoscientists BC web-based Competency Experience Reporting System (CERS) to have their work experience evaluated. CERS measures competencies (the ability to perform the tasks and roles of an occupational category to standards expected and recognized by employers and the community at large) in seven competency categories, each of which can be related to the twelve CEAB graduate attributes.

As part of a university-level course in engineering professionalism, students were given an assignment to use CERS to conduct a self-evaluation and make recommendations for their own future professional development.

To measure the perceived effectiveness of the assignment, students completed three identical questionnaires: one before the topic was introduced, one after a guest speaker presentation on the topic, and one after submitting the assignment. The questionnaire measured each student's degree of knowledge or understanding of ten different aspects of professional registration and professional development. The results indicated a progressive increase in agreement between the first, second and third questionnaires for all ten questions, with the greatest increases relating to registration procedures and students' identification of shortcomings in their own experience.

Usage of the competency assessment system by regulators is being expanded in Canada, which potentially provides the opportunity to conduct similar student assignments within other engineering programs.


Author(s):  
El-Sayed S. Aziz ◽  
Constantin Chassapis ◽  
Sven K. Esche

Student laboratories have always played a key role in the engineering education at Stevens Institute of Technology (SIT). Recently, SIT has designed and implemented several innovative Web-based tools for engineering laboratory education and evaluated their learning effectiveness in pilot deployments in various engineering courses. These Web-based tools include both remotely operated experiments based on actual experimental devices and virtual experiments implemented as software simulations. These tools facilitate the development of learning environments which, possibly in conjunction with traditional hands-on experiments, allow the scope of the students' laboratory experience to expand well beyond the confines of what would be feasible in traditional laboratories. This becomes possible because of the scalability of resources shared through the Web and the flexibility of software simulations in varying the characteristic parameters of the experimental system under investigation. Further educational benefits of the proposed laboratory approach are that asynchronous learning modes are supported and discovery-based self-learning by the students is promoted. This paper presents the details of the approach taken at SIT in integrating these Web-based tools into a comprehensive student laboratory experience. As an example of the implementation of such Web-based experiments, an Industrial-Emulator/Servo-Trainer System is described, which is used at SIT in a junior-level course on mechanisms and machine dynamics.


2017 ◽  
Vol 34 (05) ◽  
pp. 1750027 ◽  
Author(s):  
Qing Wang ◽  
Zhaojun Liu ◽  
Yang Zhang

In the traditional DEA model, each DMU maximizes its efficiency with the most favorable weights. This leads to excessive flexibility and unrealistic input and output weights. Consequently, it is unfair to compare and rank the efficiencies of different DMUs obtained on the basis of these weights. In this paper, we propose a novel approach to determine a common set of weights, with more consensus, for evaluating and ranking the performance of all DMUs: the weights obtained from DEA are rescaled to be comparable across DMUs and are then weighted according to their degree of consensus. Moreover, to overcome the non-uniqueness of the weights, a novel secondary goal is developed based on the agreement between self-evaluation and peer-evaluation. In addition, weight restrictions are taken into account to avoid trivial weights. Finally, an example of 14 international passenger airlines is used to illustrate the performance and credibility of the proposed method.
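For context, below is a minimal Python sketch of the conventional self-evaluation DEA model the abstract criticizes: the standard CCR multiplier-form linear program in which each DMU picks the input and output weights most favorable to itself. The airline-style data are invented, and the authors' consensus weighting, rescaling, and secondary goal are not reproduced here.

# A minimal sketch of the conventional (CCR, multiplier-form) DEA model.
# Hypothetical data only; this is not the paper's proposed consensus method.
import numpy as np
from scipy.optimize import linprog

# Rows = DMUs; columns = inputs (X) and outputs (Y). Values are invented.
X = np.array([[4.0, 140.0],
              [6.0, 160.0],
              [5.0, 120.0]])
Y = np.array([[2.0, 1.0],
              [3.0, 1.5],
              [2.5, 1.2]])

def ccr_efficiency(o, X, Y):
    """Self-evaluated CCR efficiency of DMU o.

    Solves:  max  u . Y[o]
             s.t. v . X[o] = 1
                  u . Y[j] - v . X[j] <= 0  for every DMU j
                  u, v >= 0
    Decision vector is z = [u, v].
    """
    n, m = X.shape                                   # DMUs, inputs
    s = Y.shape[1]                                   # outputs
    c = np.concatenate([-Y[o], np.zeros(m)])         # minimize -u.Y[o]
    A_ub = np.hstack([Y, -X])                        # u.Y[j] - v.X[j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v.X[o] = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun, res.x[:s], res.x[s:]            # efficiency, u, v

for o in range(X.shape[0]):
    eff, u, v = ccr_efficiency(o, X, Y)
    print(f"DMU {o}: efficiency = {eff:.3f}, u = {u}, v = {v}")

Because each DMU solves its own program, the optimal weights generally differ across DMUs, which is exactly the comparability problem that a common set of weights is meant to resolve.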


2007 ◽  
Vol 129 (7) ◽  
pp. 692-700 ◽  
Author(s):  
M. Keefe ◽  
J. Glancey ◽  
N. Cloud

Although cooperative learning in a team setting is a common approach for integrating problem-based learning into undergraduate science and engineering, standard assessment tools do not exist to evaluate learning outcomes. As a result, novel techniques need to be developed to assess learning in team-based design projects. This paper describes the experiences and lessons learned in assessing student performance in team-based project courses culminating in a senior capstone experience that integrates industry-sponsored design projects. A set of rubrics linked to the instructional objectives was developed to define and communicate expectations during each of three project phases. Rubrics for each phase incorporate three fundamental areas of team performance assessment: (i) synthesis of a valid concept; (ii) management of resources; and (iii) interpersonal interaction and communication. At the end of each phase, both the faculty and the industry sponsor use the same rubric to assess student team performance. An analysis of variance (ANOVA) of the assessment data collected over the last five years indicated that student performance, measured by faculty grades and industry sponsor evaluations, was not significantly affected by the faculty advisor, project type, or sponsoring company size. These results are attributed primarily to the faculty focusing more on assessing student performance in executing the design process and less on the actual project results. The analysis also revealed that faculty assessments of student performance did not correlate well with industry sponsor assessments. To address this, a revised set of evaluation rubrics was developed and is currently being used to better articulate expectations from both faculty and industrial sponsor perspectives.
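As an illustration only, the sketch below shows the kind of one-way ANOVA the abstract describes, testing whether mean rubric scores differ across a single factor such as project type. The data, column names, and factor levels are hypothetical and are not taken from the study.

# Hypothetical one-way ANOVA on rubric scores grouped by project type.
import pandas as pd
from scipy import stats

scores = pd.DataFrame({
    "project_type": ["mechanical", "mechanical", "mechanical",
                     "electrical", "electrical", "electrical",
                     "software", "software", "software"],
    "rubric_score": [86.0, 91.0, 89.0, 84.0, 88.0, 90.0, 90.0, 85.0, 87.0],
})

# One array of scores per level of the factor, then the F-test.
groups = [g["rubric_score"].to_numpy() for _, g in scores.groupby("project_type")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above the chosen significance level (e.g. 0.05) would be consistent
# with the paper's finding that the factor does not significantly affect performance.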

