Development of a Lac Operon Concept Inventory (LOCI)

2016 ◽  
Vol 15 (2) ◽  
pp. ar24 ◽  
Author(s):  
Katherine M. Stefanski ◽  
Grant E. Gardner ◽  
Rebecca L. Seipelt-Thiemann

Concept inventories (CIs) are valuable tools for educators that assess student achievement and identify misconceptions held by students. Results of student responses can be used to adjust or develop new instructional methods for a given topic. The regulation of gene expression in both prokaryotes and eukaryotes is an important concept in genetics and one that is particularly challenging for undergraduate students. As part of a larger study examining instructional methods related to gene regulation, the authors developed a 12-item CI assessing student knowledge of the lac operon. Using an established protocol, the authors wrote open-ended questions and conducted in-class testing with undergraduate microbiology and genetics students to discover common errors made by students about the lac operon and to determine aspects of item validity. Using these results, the authors constructed a 12-item multiple-choice lac operon CI called the Lac Operon Concept Inventory (LOCI). The LOCI was reviewed by two experts in the field for content validity. The LOCI underwent item analysis and was assessed for reliability with a sample of undergraduate genetics students (n = 115). The data obtained were found to be valid and reliable (coefficient alpha = 0.994) with adequate discriminatory power and item difficulty.
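The item statistics this abstract reports (item difficulty, discrimination, and coefficient alpha) can all be computed from a 0/1 student-by-item response matrix. A minimal sketch with simulated data (the matrix and function names are illustrative, not the authors' code or the LOCI data):

```python
import numpy as np

# Hypothetical 0/1 response matrix: rows = students, columns = items.
# Simulated stand-in for a 115-student, 12-item administration.
rng = np.random.default_rng(0)
scores = (rng.random((115, 12)) < 0.6).astype(int)

def item_difficulty(scores):
    """Proportion of students answering each item correctly (the p-value)."""
    return scores.mean(axis=0)

def item_discrimination(scores):
    """Corrected point-biserial: correlation of each item with the rest-of-test score."""
    total = scores.sum(axis=1)
    return np.array([np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
                     for j in range(scores.shape[1])])

def cronbach_alpha(scores):
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))
```

Items with very high or very low difficulty, or near-zero discrimination, are the ones an item analysis like the authors' would flag for revision.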

2017 ◽  
Vol 16 (2) ◽  
pp. ar35 ◽  
Author(s):  
Jenny L. McFarland ◽  
Rebecca M. Price ◽  
Mary Pat Wenderoth ◽  
Patrícia Martinková ◽  
William Cliff ◽  
...  

We present the Homeostasis Concept Inventory (HCI), a 20-item multiple-choice instrument that assesses how well undergraduates understand this critical physiological concept. We used an iterative process to develop a set of questions based on elements in the Homeostasis Concept Framework. This process involved faculty experts and undergraduate students from associate’s colleges, primarily undergraduate institutions, regional and research-intensive universities, and professional schools. Statistical results provided strong evidence for the validity and reliability of the HCI. We found that graduate students performed better than undergraduates, biology majors performed better than nonmajors, and students performed better after receiving instruction about homeostasis. We used differential item functioning analysis to assess whether students from different genders, races/ethnicities, and English language status performed differently on individual items of the HCI. We found no evidence of differential item functioning, suggesting that the items do not incorporate cultural or gender biases that would impact students’ performance on the test. Instructors can use the HCI to guide their teaching and student learning of homeostasis, a core concept of physiology.
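Differential item functioning of the kind screened for here is often tested with the Mantel-Haenszel procedure, which compares the odds of a correct response across groups within strata of matched ability; a common odds ratio near 1 indicates no DIF. A minimal sketch (the procedure choice and variable names are my assumptions, not necessarily what the HCI authors used):

```python
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """Mantel-Haenszel common odds ratio for one item.
    item: 0/1 responses; group: 0 = reference, 1 = focal;
    total: matching variable (e.g., total or rest-of-test score)."""
    num = den = 0.0
    for s in np.unique(total):
        m = total == s
        a = np.sum((item == 1) & (group == 0) & m)  # reference group, correct
        b = np.sum((item == 0) & (group == 0) & m)  # reference group, incorrect
        c = np.sum((item == 1) & (group == 1) & m)  # focal group, correct
        d = np.sum((item == 0) & (group == 1) & m)  # focal group, incorrect
        n = a + b + c + d
        if n:
            num += a * d / n
            den += b * c / n
    return num / den if den else float("nan")
```

A ratio well above or below 1 (conventionally judged via the ETS delta scale) would flag the item for review.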


2020 ◽  
Vol 16 (2) ◽  
pp. 138-148
Author(s):  
Tiina Kiviniemi ◽  
Piia Nuora

A chemistry concept inventory (Chemical Concept Inventory 3.0/CCI 3.0), previously developed for use in Norwegian universities, was tested and evaluated for use in a Finnish university setting. The test, designed to evaluate student knowledge and learning of chemistry concepts, was administered as both pre- and posttest in first-year general chemistry courses at the University of Jyväskylä. The results were evaluated using different statistical tests, focusing on both individual item analysis and the test as a whole. Some individual questions were found to lack sufficient discrimination or reliability, or to be too difficult, yet the results as a whole indicate that the concept inventory is a reliable and discriminating tool that can be used in the Finnish university context.


2017 ◽  
Vol 14 (2) ◽  
pp. 2021
Author(s):  
Emrah Oğuzhan Dinçer ◽  
Derya Çobanoğlu Aktan

The aim of this study is to adapt the Star Properties Concept Inventory (SPCI), developed by Bailey, Johnson, Prather, and Slater (2012), into Turkish and to conduct validity and reliability analyses of the adapted inventory. The original inventory consists of 22 items. The study was conducted with 386 students from three different universities; the research group comprised preservice science teachers and fourth-year astronomy students. Cronbach’s alpha for the inventory was 0.82. Item analysis of the data from the administration of the inventory yielded item discrimination indices between 0.07 and 0.73 and item difficulty indices between 0.13 and 0.75. The mean test score was 7.27, and the average difficulty was 0.33. Content validity was established through expert review.
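The reported average difficulty follows directly from the mean score: for dichotomously scored items, mean difficulty equals the mean total score divided by the number of items. A quick check against the figures above:

```python
n_items = 22          # length of the SPCI
mean_score = 7.27     # reported mean test score
mean_difficulty = mean_score / n_items
print(round(mean_difficulty, 2))  # 0.33, matching the reported average difficulty
```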


2013 ◽  
Vol 12 (4) ◽  
pp. 655-664 ◽  
Author(s):  
Pamela Kalas ◽  
Angie O’Neill ◽  
Carol Pollock ◽  
Gülnur Birol

We have designed, developed, and validated a 17-question Meiosis Concept Inventory (Meiosis CI) to diagnose student misconceptions on meiosis, which is a fundamental concept in genetics. We targeted large introductory biology and genetics courses and used published methodology for question development, which included the validation of questions by student interviews (n = 28), in-class testing of the questions by students (n = 193), and expert (n = 8) consensus on the correct answers. Our item analysis showed that the questions’ difficulty and discrimination indices were in agreement with published recommended standards and discriminated effectively between high- and low-scoring students. We foresee other institutions using the Meiosis CI as both a diagnostic tool and an instrument to assess teaching effectiveness and student progress, and invite instructors to visit http://q4b.biology.ubc.ca for more information.


1995 ◽  
Vol 268 (6) ◽  
pp. S21 ◽  
Author(s):  
P K Rangachari ◽  
S Mierson

Because critical analysis of published information is an essential component of scientific life, it is important that students be trained in its practice. Undergraduate students who are more accustomed to reading textbooks and taking lecture notes find it difficult to appreciate primary publications. To help such students, we have developed a checklist that helps them analyze different components of a research article in basic biomedical sciences. Students used the checklist to critically analyze a published article: they were assigned an article and asked to write a paper (maximum two pages, single-spaced) assessing it. This assignment has been found useful to both undergraduate and graduate students in pharmacology and physiology. Student responses to a questionnaire were highly favorable; students thought the exercise provided them with some of the essential skills for life-long learning.


2020 ◽  
Vol 34 (1) ◽  
pp. 52-67 ◽  
Author(s):  
Igor Himelfarb ◽  
Margaret A. Seron ◽  
John K. Hyland ◽  
Andrew R. Gow ◽  
Nai-En Tang ◽  
...  

Objective: This article introduces changes made to the diagnostic imaging (DIM) domain of Part IV of the National Board of Chiropractic Examiners examination and evaluates the effects of these changes in terms of item functioning and examinee performance. Methods: To evaluate item function, classical test theory and item response theory (IRT) methods were employed. Classical statistics were used to assess item difficulty and each item’s relation to the total test score. Item difficulties along with item discrimination were calculated using IRT. We also studied the decision accuracy of the redesigned DIM domain. Results: The diagnostic item analysis revealed similarity in item function across test forms and across administrations. The IRT models found a reasonable fit to the data. The averages of the IRT parameters were similar across test forms and across administrations. The classification of test takers into ability (theta) categories was consistent across groups (both norming and all examinees), across all test forms, and across administrations. Conclusion: This research signifies a first step in the evaluation of the transition to digital DIM high-stakes assessments. We hope that this study will spur further research into evaluations of the ability to interpret radiographic images. In addition, we hope that the results prove to be useful for chiropractic faculty, chiropractic students, and the users of Part IV scores.
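When item difficulty and discrimination are both estimated under IRT, as here, a common choice is the two-parameter logistic (2PL) model, in which the probability of a correct response is a logistic function of the examinee's ability theta, the item's discrimination a, and its difficulty b. A minimal sketch (the 2PL form is an assumption; the abstract does not specify the exact model):

```python
import math

def two_pl(theta, a, b):
    """2PL IRT item response function: P(correct | theta) for an item
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is exactly 0.5, which is why b is read as the item's difficulty on the ability scale; larger a makes the curve steeper, i.e., the item separates examinees near b more sharply.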


2022 ◽  
Vol 22 (1) ◽  
pp. 1-28
Author(s):  
R. Paul Wiegand ◽  
Anthony Bucci ◽  
Amruth N. Kumar ◽  
Jennifer Albert ◽  
Alessio Gaspar

In this article, we leverage ideas from the theory of coevolutionary computation to analyze interactions of students with problems. We introduce the idea of informatively easy or hard concepts. Our approach is different from more traditional analyses of problem difficulty such as item analysis in the sense that we consider Pareto dominance relationships within the multidimensional structure of student–problem performance data rather than average performance measures. This method allows us to uncover not just the problems on which students are struggling but also the variety of difficulties different students face. Our approach is to apply methods from the Dimension Extraction Coevolutionary Algorithm to analyze problem-solving logs of students generated when they use an online software tutoring suite for introductory computer programming called problets. The results of our analysis not only have implications for how to scale up and improve adaptive tutoring software but also have the promise of contributing to the identification of common misconceptions held by students and thus, eventually, to the construction of a concept inventory for introductory programming.
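The core Pareto-dominance comparison the abstract describes can be sketched in a few lines: one student's outcome vector dominates another's if it is at least as good on every problem and strictly better on at least one; students no one dominates form the front of distinct difficulty profiles. A minimal sketch (function names are my own, not the authors' algorithm):

```python
def dominates(s1, s2):
    """True if outcome vector s1 Pareto-dominates s2: at least as good on
    every problem, strictly better on at least one (e.g., 0/1 correctness)."""
    return (all(a >= b for a, b in zip(s1, s2))
            and any(a > b for a, b in zip(s1, s2)))

def non_dominated(students):
    """Outcome vectors that no other student's vector dominates -- each one
    witnesses a distinct pattern of strengths and struggles."""
    return [s for s in students
            if not any(dominates(t, s) for t in students if t is not s)]
```

Averaging would collapse [1, 0] and [0, 1] into the same score; the dominance view keeps them apart as different kinds of difficulty, which is the point of the authors' contrast with traditional item analysis.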


2020 ◽  
Vol 1 (2) ◽  
pp. 122-138
Author(s):  
Husnani Aliah

The research aimed to find out how junior high school English teachers in Enrekang prepare teacher-made tests, the quality of those tests according to item analysis, and the cognitive-domain levels of the test items. Test quality was determined after the tests were used in a school examination. This was survey research using a descriptive method; the data were analyzed and the findings described quantitatively. The population was teachers of the ninth grade at junior high schools in Enrekang, and simple random sampling was applied, taking four different schools as the sample. The results show that the preparation junior high school teachers in Enrekang follow in constructing teacher-made tests falls into five main parts. In planning the test, the procedures were considering the test materials and the proportion of each topic, checking the item bank for items matching the syllabus and indicators, or preparing a test specification. In writing the test, the teachers’ procedures were rewriting items chosen from the internet and textbooks, rewriting previously used items and allowing other teachers to verify them, combining items from the item bank and textbooks, or writing new items. In analyzing a test, the procedures were analyzing and revising the test based on item difficulty, predicting item difficulty and revising the test, or not analyzing the test at all. Regarding timing, three of the five teachers needed only one week to construct multiple-choice tests, while two of the five needed two weeks; the teachers also differed in how they tailored tests to students’ ability. Moreover, the item analysis shows that no test is perfectly good: almost all tests need to be revised.
It was also found that, in terms of the cognitive domain, only three categories appeared across all tests, namely knowledge, comprehension, and application; no items belonged to the analysis, synthesis, or evaluation categories.


2021 ◽  
Vol 1 (1) ◽  
pp. 49-54
Author(s):  
Musliadi Musliadi ◽  
Reski Yusrini Islamiah Yunus ◽  
Muhammad Affan Ramadhana

This study investigates students’ perceptions of the use of YouTube to facilitate undergraduate students’ speaking activities. The method used was descriptive quantitative research. Sampling was done randomly, taking 40 students as the sample. The questionnaire had two parts and ten questions, each with five answer choices on a Likert scale ranging from strongly disagree to strongly agree. The results show that 80% of students access YouTube because it is very interesting, 75% say YouTube is an easy medium to access, 80% say YouTube can be used as a learning resource, and 85% use YouTube as a medium for doing speaking tasks. The student response to using YouTube to facilitate speaking tasks was very positive: 72% of students strongly agreed with applying speaking practice through YouTube, and 20% agreed. In general, student responses to using YouTube to facilitate speaking activities in distance learning during the Covid-19 pandemic are very positive.


2020 ◽  
Vol 2 (1) ◽  
pp. 34-46
Author(s):  
Siti Fatimah ◽  
Achmad Bernhardo Elzamzami ◽  
Joko Slamet

This research focused on the validity and reliability of the test scores and on item analysis covering discrimination power and difficulty index, in order to provide detailed information for improving the construction of test items. The quality of each item was analyzed in terms of item difficulty, item discrimination, and distractor analysis. The reliability of the test scores was computed with the Kuder-Richardson Formula 20 (KR-20), and the analysis of the 50 test items was carried out in Microsoft Office Excel. A descriptive method was applied to describe and examine the data. The findings showed that the test had content validity, though categorized as low. The reliability of the test scores was 0.521010831 (0.52), categorized as low reliability and indicating a need for test revision. Of the 50 items examined, 21 were in need of improvement, classified as “easy” for the difficulty index and “poor” for discriminability, out of a total of 26 flagged items (52%); that is, more than 50% of the test items need to be revised because they do not meet the criteria. To measure students’ performance effectively, items with a “poor” discrimination index should be reviewed.
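KR-20, the reliability coefficient used here, is coefficient alpha specialized to dichotomous (0/1) items: k/(k-1) times (1 - sum of p*q over the variance of total scores). A minimal sketch (population variance is used for the totals, one common convention; the authors' Excel setup may differ):

```python
def kr20(scores):
    """Kuder-Richardson Formula 20 for a list of 0/1 response rows
    (one row per student, one column per item)."""
    n = len(scores)          # number of students
    k = len(scores[0])       # number of items
    # p_j = proportion correct on item j; sum p*q = sum of item variances
    p = [sum(row[j] for row in scores) / n for j in range(k)]
    pq = sum(pj * (1 - pj) for pj in p)
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance of totals
    return k / (k - 1) * (1 - pq / var)
```

Perfectly consistent response patterns give KR-20 = 1.0; a value around 0.52, as reported, signals that item responses are only weakly consistent with each other, hence the recommendation to revise.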

