Implementing Teaching Portfolios and Peer Reviews in Tax Courses

1999, Vol 21 (2), pp. 95-107
Author(s): Michael J. Calegari, Gregory G. Geisler, Ernest R. Larkins

Extant literature suggests that the process of constructing a teaching portfolio can identify areas for improvement, motivate positive change, and elevate the importance of teaching in academe. This study describes the experience of the tax faculty at a public university in using teaching portfolios and peer reviews to improve the quality of the first two tax courses. The type of teaching portfolio used in this project consists of a course syllabus and a reflective statement documenting the rationale for every component of a course (e.g., lectures, projects, exams, writing assignments, and presentations). The peer review consists of written feedback from a colleague on this teaching portfolio. Although research publications are usually subject to extensive peer review, teaching generally is not. Like research, however, teaching can be evaluated and ultimately improved through peer review. Thus, this study can provide valuable guidance to tax professors attempting to improve their courses.

2010, Vol 96 (1), pp. 20-29
Author(s): Jerry C. Calvanese

ABSTRACT Study Objective: The purpose of this study was to obtain data on various characteristics of peer reviews performed for the Nevada State Board of Medical Examiners (NSBME) to assess physician licensees' negligence and/or incompetence. These data were expected to help identify and define certain characteristics of peer reviews. Methods: The study examined two years of data collected on peer reviews. Complaints were initially screened by a medical reviewer and/or a committee composed of Board members to assess the need for a peer review. Data were then collected from the peer reviews performed, including costs, specialty of the peer reviewer, location of the peer reviewer, and timeliness of the peer reviews. Results: During the two-year study, 102 peer reviews were evaluated. Sixty-nine percent of the peer-reviewed complaints originated from civil malpractice cases and 15% from complaints made by patients. Eighty percent of the complaint physicians were located in Clark County and 12% in Washoe County. Sixty-one percent of the physicians who performed the peer reviews were located in Washoe County and 24% in Clark County. Twelve percent of the complaint physicians had been in practice in the state for 5 years or less, 40% from 6 to 10 years, 20% from 11 to 15 years, 16% from 16 to 20 years, and 13% for 21 years or more. Forty-seven percent of the complaint physicians had three or fewer total complaints filed with the Board, 10% had four to six, 17% had 7 to 10, and 26% had 11 or more. The overall quality of the peer reviews was judged to be good or excellent in 96% of cases. Malpractice was found in 42% of the reviews ordered by the medical reviewer and in 15% of those ordered by the Investigative Committees, for a finding of malpractice in 38% of all peer reviews. The average total cost of a peer review was $791. In 47% of the peer reviews requested, materials were sent from the Board to the peer reviewer within 60 days of the original request, while in 33% more than 120 days elapsed before the materials were sent. In 48% of the reviews, the peer reviewer completed the review in less than 60 days; 27% of the peer reviews took more than 120 days to be returned. Conclusion: Further data are needed to draw meaningful conclusions from certain peer review characteristics reported in this study. However, useful data were obtained regarding timeliness in sending out peer review materials, total times for the peer reviews, and costs.


Publications, 2019, Vol 7 (1), pp. 13
Author(s): Afshin Sadeghi, Sarven Capadisli, Johannes Wilm, Christoph Lange, Philipp Mayr

An increasing number of scientific publications are created in open and transparent peer review models: a submission is published first, and then reviewers are invited, or a submission is reviewed in a closed environment but then these reviews are published with the final article, or combinations of these. Reasons for open peer review include giving better credit to reviewers, and enabling readers to better appraise the quality of a publication. In most cases, the full, unstructured text of an open review is published next to the full, unstructured text of the article reviewed. This approach prevents human readers from getting a quick impression of the quality of parts of an article, and it does not easily support secondary exploitation, e.g., for scientometrics on reviews. While document formats have been proposed for publishing structured articles including reviews, integrated tool support for entire open peer review workflows resulting in such documents is still scarce. We present AR-Annotator, the Automatic Article and Review Annotator which employs a semantic information model of an article and its reviews, using semantic markup and unique identifiers for all entities of interest. The fine-grained article structure is not only exposed to authors and reviewers but also preserved in the published version. We publish articles and their reviews in a Linked Data representation and thus maximise their reusability by third party applications. We demonstrate this reusability by running quality-related queries against the structured representation of articles and their reviews.
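As a rough illustration of the kind of reusability described above, the sketch below builds a tiny Linked Data graph for one article and one review and runs a quality-related SPARQL query against it using rdflib. This is not the AR-Annotator information model: the ex: vocabulary terms (ex:Review, ex:reviews, ex:addressesSection, ex:rating) and all URIs are invented for the example.

```python
# A minimal sketch, assuming an invented ex: vocabulary, of querying a Linked Data
# representation of an article and its review for quality-related information.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/review-model#")  # hypothetical vocabulary

g = Graph()
article = URIRef("http://example.org/articles/42")
review = URIRef("http://example.org/reviews/42-r1")

g.add((article, RDF.type, EX.Article))
g.add((review, RDF.type, EX.Review))
g.add((review, EX.reviews, article))                       # link review -> reviewed article
g.add((review, EX.addressesSection, Literal("Methods")))   # fine-grained article structure
g.add((review, EX.rating, Literal(4)))                     # reviewer's quality rating

# Quality-related query: which article sections received a rating below 5?
results = g.query("""
    PREFIX ex: <http://example.org/review-model#>
    SELECT ?article ?section ?rating WHERE {
        ?review a ex:Review ;
                ex:reviews ?article ;
                ex:addressesSection ?section ;
                ex:rating ?rating .
        FILTER (?rating < 5)
    }
""")
for row in results:
    print(row.article, row.section, row.rating)
```

A third-party application could run the same kind of query across many published articles without any access to the original authoring or reviewing tools, which is the reusability argument the abstract makes.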


2015, Vol 7 (2), pp. 292-307
Author(s): Christina A. Geithner, Alexandria N. Pollastro

Purpose – The purpose of this paper is to incorporate a blended pedagogical approach to Scientific Writing and assess its effectiveness in improving students’ writing skills and scientific literacy. Effective writing is vital to the dissemination of scientific information and a critical skill for undergraduate science students, and various pedagogical strategies have been successful in improving writing skills and developing scientific literacy. Design/methodology/approach – Mean scores on draft and revision assignments were examined longitudinally (2013 cohort, n=51) and across cohorts (2011, 2012, and 2013; combined n=94). Domain-specific composite scores were calculated from survey items addressing students’ self-perceptions of knowledge (K), general and scientific writing skills (GWS and SWS), and attitudes (A) related to scientific literacy. Changes in composite scores were analyzed using paired t-tests, and cross-cohort differences were examined via MANOVAs (SPSS, p < 0.05). Findings – Mean scores on revisions following peer review and instructor feedback were significantly higher than those for drafts. Students’ perceptions of their K, GWS, SWS, and A increased significantly over the semester in the 2013 cohort, and were significantly higher in the 2013 cohort than in the two earlier cohorts. Students identified peer reviews, revisions and other writing assignments, and literature searches as effective learning strategies. Research limitations/implications – One limitation of the study was that the authors lacked a control group for comparison. Pre-course survey data were only available for the 2013 cohort, and these data were incomplete, particularly with regard to perceptions of attitudes toward science and writing. Instructor feedback was not separated from feedback obtained through peer review, so it was not possible to determine their respective impacts on students’ scores on revision assignments. The number of writing assignments and peer reviews completed also varied among the three cohorts enrolled in Scientific Writing. Practical implications – Using a blended approach to teaching scientific writing significantly improved students’ writing skills and enhanced their perceptions of their knowledge, skills, and abilities related to science and writing. Students identified peer reviews, writing abstracts, and outlining an Introduction as most helpful in improving their SWS. They identified the final peer review, the revision assignment for the Results section, literature searches, and poster presentations of research as most helpful in improving their scientific knowledge and understanding. Engaging students in a variety of pedagogical strategies was successful in achieving specific learning outcomes in an undergraduate human physiology course. Originality/value – The approach to peer review was more structured than those of previous studies. Engaging students with a variety of teaching and learning strategies improved both writing skills and scientific literacy in undergraduate human physiology.
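The study ran its paired pre/post comparisons of composite scores in SPSS; purely as an illustration of that analysis pattern, the sketch below performs an analogous paired t-test in Python with SciPy. The score values are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of a paired pre/post comparison of composite scores,
# analogous to the paired t-tests described above (the study itself used SPSS).
# The numbers are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

pre = np.array([3.1, 3.4, 2.8, 3.6, 3.0, 3.3, 2.9, 3.5])   # pre-course composite scores
post = np.array([3.8, 3.9, 3.2, 4.1, 3.6, 3.7, 3.4, 4.0])  # post-course composite scores

t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on within-student change
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```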


Author(s): V. N. Gureyev, N. A. Mazov

The paper summarizes the authors' experience as peer reviewers of more than 100 manuscripts submitted to twelve Russian and foreign academic journals on Library and Information Science over the last seven years. The prepared peer reviews were used to compile a list of the most common critical and specific comments on each manuscript, which was then structured for analysis. Typical issues accompanying the peer-review process are shown. Significant differences between the outcomes of peer review in Russian and foreign journals were detected: although the initial quality of newly submitted manuscripts is approximately equal, the final published versions in foreign journals addressed all critical and the majority of minor reviewers’ comments, while in Russian journals more than one third of the final versions were published with critical gaps. We conclude that interest in high-quality peer review is low among both authors and editors-in-chief of Russian journals. Despite the limitations of the samples, these findings can be useful when evaluating the current peer-review system in Russian academic journals on Library and Information Science.


2016, Vol 113 (30), pp. 8414-8419
Author(s): Stefano Balietti, Robert L. Goldstone, Dirk Helbing

To investigate the effect of competitive incentives under peer review, we designed a novel experimental setup called the Art Exhibition Game. We present experimental evidence of how competition introduces both positive and negative effects when creative artifacts are evaluated and selected by peer review. Competition proved to be a double-edged sword: on the one hand, it fosters innovation and product diversity, but on the other hand, it also leads to more unfair reviews and to a lower level of agreement between reviewers. Moreover, an external validation of the quality of peer reviews during the laboratory experiment, based on 23,627 online evaluations on Amazon Mechanical Turk, shows that competition does not significantly increase the level of creativity. Furthermore, the higher rejection rate under competitive conditions does not improve the average quality of published contributions, because more high-quality work is also rejected. Overall, our results could explain why many ground-breaking studies in science end up in lower-tier journals. Differences and similarities between the Art Exhibition Game and scholarly peer review are discussed and the implications for the design of new incentive systems for scientists are explained.


2020, Vol 13 (1), pp. 1-27
Author(s): Tine Köhler, M. Gloria González-Morales, George C. Banks, Ernest H. O’Boyle, Joseph A. Allen, et al.

Abstract Peer review is a critical component of facilitating a robust science in industrial and organizational (I-O) psychology. Peer review exists beyond academic publishing, in organizations, university departments, grant agencies, classrooms, and many more work contexts. Reviewers are responsible for judging the quality of research conducted and submitted for evaluation. Furthermore, they are responsible for treating authors and their work with respect, in a supportive and developmental manner. Given its central role in our profession, it is curious that we do not have formalized review guidelines or standards and that most of us never receive formal training in peer reviewing. To support this endeavor, we propose a competency framework for peer review. The purpose of the competency framework is to provide a definition of excellent peer reviewing and guidelines for reviewers on which types of behaviors lead to good peer reviews. By defining these competencies, we create clarity around expectations for peer review, standards for good peer reviews, and opportunities for training the behaviors required to deliver good peer reviews. We further discuss how the competency framework can be used to improve peer reviewing and suggest additional steps forward, including how stakeholders can get involved in fostering high-quality peer reviewing.


1999, Vol 62 (3), pp. 87-94
Author(s): Laura MacLeod

Classes that require writing assignments commonly engage students in some form of peer review. Such reviews can profit from two computer applications: newsgroups and specialized communication client software. In a recent survey, a majority of the students who conducted peer reviews in a business writing class found the activities useful and worth repeating in future semesters. However, they suggested that faculty provide more guidelines and topics to enhance the substance of the review, train students more directly in the computer applications, and allot more time to conduct the reviews at a comfortable pace.


1988, Vol 62 (1), pp. 337-338
Author(s): Janet L. Kottke

In this study of whether feedback from students improved peers' writing, 56 students in a medium-sized introductory psychology class gave written feedback to their peers, who then revised their own essays for final grading. Students were more critical evaluators than the teaching assistant. Although the quality of the essays improved upon revision, there were some mitigating issues. The continuing need to develop writing skills is a stimulus for additional research on the use of peer review.


2015, Vol 96 (2), pp. 191-201
Author(s): Matthew S. Mayernik, Sarah Callaghan, Roland Leigh, Jonathan Tedds, Steven Worley

Abstract Peer review holds a central place within the scientific communication system. Traditionally, research quality has been assessed by peer review of journal articles, conference proceedings, and books. There is strong support for the peer review process within the academic community, with scholars contributing peer reviews with little formal reward. Reviewing is seen as a contribution to the community as well as an opportunity to polish and refine understanding of the cutting edge of research. This paper discusses the applicability of the peer review process for assessing and ensuring the quality of datasets. Establishing the quality of datasets is a multifaceted task that encompasses many automated and manual processes. Adding research data into the publication and peer review queues will increase the stress on the scientific publishing system, but if done with forethought will also increase the trustworthiness and value of individual datasets, strengthen the findings based on cited datasets, and increase the transparency and traceability of data and publications. This paper discusses issues related to data peer review—in particular, the peer review processes, needs, and challenges related to the following scenarios: 1) data analyzed in traditional scientific articles, 2) data articles published in traditional scientific journals, 3) data submitted to open access data repositories, and 4) datasets published via articles in data journals.


2021, Vol 21 (1)
Author(s): Annette Burgess, Chris Roberts, Andrew Stuart Lane, Inam Haq, Tyler Clark, et al.

Abstract Background Peer review in team-based learning (TBL) exists for three key reasons: to promote reflection on individual behaviours; to provide opportunities to develop professional skills; and to prevent ‘free riders’ who fail to contribute effectively to team discussions. A well-developed process that engages students is needed; however, evidence suggests it remains difficult to incorporate peer review effectively into TBL. The purpose of this study was to assess medical students’ ability to provide written feedback to their peers in TBL, and to explore students’ perceptions of the process, using the conceptual framework of Biggs’ ‘3P model’. Methods Year 2 students (n = 255) participated in peer review twice during 2019. We evaluated the quality of feedback using a theoretically derived rubric, and undertook a qualitative analysis of focus group data to seek explanations for feedback behaviours. Results Students demonstrated reasonable ability to provide positive feedback, but were less prepared to identify areas for improvement. Their ability did not improve over time and was influenced by the perceived difficulty of the task, social discomfort, and a sense of responsibility in providing written feedback. Conclusions To increase student engagement, a transparent process is required that incorporates verbal feedback and team discussion, with monitoring of outcomes by faculty and adequate training.

