creativity assessment
Recently Published Documents


TOTAL DOCUMENTS

95
(FIVE YEARS 41)

H-INDEX

10
(FIVE YEARS 3)

2021 ◽  
pp. 001698622110618
Author(s):  
Selcuk Acar ◽  
Kelly Berthiaume ◽  
Katalin Grajzel ◽  
Denis Dumas ◽  
Charles “Tedd” Flemister ◽  
...  

In this study, we applied different text-mining methods to the originality scoring of the Unusual Uses Test (UUT) and Just Suppose Test (JST) from the Torrance Tests of Creative Thinking (TTCT)–Verbal. Responses from 102 and 123 participants who completed Form A and Form B, respectively, were scored using three different text-mining methods. The validity of these scoring methods was tested against TTCT’s manual-based scoring and a subjective snapshot scoring method. Results indicated that text-mining systems are applicable to both UUT and JST items across both forms, and students’ performance on those items can predict total originality and creativity scores across all six tasks in the TTCT-Verbal. Comparatively, the text-mining methods worked better for UUT than JST. Of the three text-mining models we tested, the Global Vectors for Word Representation (GloVe) model produced the most reliable and valid scores. These findings indicate that creativity assessment can be done quickly and at a lower cost using text-mining approaches.
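The embedding-based originality scoring this abstract describes can be sketched in miniature: score a response as its semantic distance from the prompt word in vector space. The tiny three-dimensional vectors below are hypothetical stand-ins for pretrained GloVe embeddings; a real scorer would load a full embedding file instead.

```python
import math

# Hypothetical toy word vectors standing in for pretrained GloVe embeddings.
TOY_VECTORS = {
    "brick":    [0.90, 0.10, 0.00],
    "build":    [0.80, 0.20, 0.10],
    "wall":     [0.85, 0.15, 0.05],
    "doorstop": [0.10, 0.30, 0.90],
}

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def originality(prompt_word, response_words):
    """Mean semantic distance (1 - cosine similarity) between the prompt
    and each response word; a remoter response scores higher."""
    p = TOY_VECTORS[prompt_word]
    dists = [1 - cosine(p, TOY_VECTORS[w]) for w in response_words]
    return sum(dists) / len(dists)

# A mundane use of a brick ("build a wall") should score lower than a
# remote one ("doorstop").
print(originality("brick", ["build", "wall"]) < originality("brick", ["doorstop"]))
```

The same distance computation scales to any uses test: embed the prompt object and the proposed use, and treat distance as an originality proxy.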


2021 ◽  
pp. 879-888
Author(s):  
Ileana Bodini ◽  
Mariasole Bannò ◽  
Diego Paderno ◽  
Gabriele Baronio ◽  
Stefano Uberti ◽  
...  

2021 ◽  
Author(s):  
Selina Weiss ◽  
Oliver Wilhelm ◽  
Patrick Kyllonen

The assessment of creativity presents major challenges. The many competing and complementary ideas on measuring creativity have resulted in a wide diversity of measures, making it difficult for potential users to decide on their appropriateness. Prior research has proposed creativity assessment taxonomies, but we argue that these have shortcomings: they often (a) were not designed to assess the essential assessment features and (b) are insufficiently specified for reliably categorizing extant measures. Based on prior categorization approaches, we propose a new framework for categorizing creativity measures along the following attributes: (a) measurement approach (self-report, other-report, ability tests), (b) construct (e.g., creative interests and attitudes, creative achievements, divergent thinking), (c) data type generated (e.g., questionnaire data vs. accomplishment counts), (d) prototypical scoring method (e.g., the consensual assessment technique, CAT), and (e) psychometric problems. We identified 228 creativity measures appearing in the literature since 1900; two independent raters classified each measure according to its task attributes (rater agreement: Cohen’s kappa .83 to 1.00 for construct). We provide a summary of convergent validity evidence and psychometric shortcomings. We conclude with recommendations for using the taxonomy and some psychometric desiderata for future research.
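The rater-agreement statistic reported here, Cohen's kappa, corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical rater labels (the category names are illustrative, not the authors' coding scheme):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category
    # if each labels independently at their own marginal rates.
    chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - chance) / (1 - chance)

a = ["self-report", "ability", "self-report", "other-report"]
b = ["self-report", "ability", "ability", "other-report"]
print(round(cohens_kappa(a, b), 2))  # agreement corrected for chance
```

Values near 1 (like the .83 to 1.00 reported above) indicate the two raters' classifications agree far beyond what chance alone would produce.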


2021 ◽  
Author(s):  
David H Cropley ◽  
Rebecca L Marrone

One of the abiding challenges in creativity research is assessment. Objectively scored tests of creativity such as the Torrance Tests of Creative Thinking (TTCT) and the Test of Creative Thinking - Drawing Production (TCT-DP) offer high levels of reliability and validity but are slow and expensive to administer and score. As a result, many creativity researchers default to simpler and faster self-report measures of creativity and related constructs (e.g., creative self-efficacy, openness). Recent research, however, has begun to explore the use of computational approaches to address these limitations. Examples include the Divergent Association Task (DAT), which uses computational methods to rapidly assess the semantic distance of words as a proxy for divergent thinking. To date, however, no research appears to have emerged that uses methods drawn from the field of artificial intelligence to assess existing objective, figural (i.e., drawing) tests of creativity. This paper describes the application of machine learning, in the form of a convolutional neural network, to the assessment of a figural creativity test, the TCT-DP. The approach shows excellent accuracy and speed, eliminating traditional barriers to the use of these objective, figural creativity tests and opening new avenues for automated creativity assessment.
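The DAT mentioned above scores divergent thinking as the mean pairwise semantic distance among words a test-taker supplies. A minimal sketch, with hypothetical toy vectors standing in for the pretrained embeddings a real DAT scorer would use:

```python
import math
from itertools import combinations

# Hypothetical 3-d word vectors; a real scorer uses pretrained embeddings.
VEC = {
    "cat":     [0.90, 0.10, 0.00],
    "dog":     [0.85, 0.20, 0.05],
    "violin":  [0.10, 0.90, 0.20],
    "glacier": [0.00, 0.20, 0.95],
}

def cos_dist(u, v):
    # Cosine distance = 1 - cosine similarity.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

def dat_score(words):
    """DAT-style score: mean cosine distance over all word pairs;
    semantically remoter word sets score higher."""
    pairs = list(combinations(words, 2))
    return sum(cos_dist(VEC[a], VEC[b]) for a, b in pairs) / len(pairs)

# Closely related words score lower than a deliberately remote set.
print(dat_score(["cat", "dog"]) < dat_score(["cat", "violin", "glacier"]))
```

The figural (CNN-based) scoring this paper proposes swaps the word embeddings for learned image features, but the underlying idea is the same: replace slow manual scoring with a fast, reproducible computation.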


2021 ◽  
pp. 1-27
Author(s):  
Janet Rafner ◽  
Michael Mose Biskjær ◽  
Blanka Zana ◽  
Steven Langsford ◽  
Carsten Bergenholtz ◽  
...  

2021 ◽  
pp. 111-121
Author(s):  
Donald J. Treffinger ◽  
Patricia F. Schoonover ◽  
Edwin C. Selby

2021 ◽  
pp. 223-234
Author(s):  
Meihua Qian ◽  
Jonathan A. Plucker

2021 ◽  
Author(s):  
Kristen M. Edwards ◽  
Aoran Peng ◽  
Scarlett R. Miller ◽  
Faez Ahmed

Abstract A picture is worth a thousand words, and in design metric estimation, a word may be worth a thousand features. Pictures are awarded this worth because of their ability to encode a plethora of information. When evaluating designs, we aim to capture a range of information as well, including the usefulness, uniqueness, and novelty of a design. The subjective nature of these concepts makes their evaluation difficult. Despite this, many attempts have been made and metrics developed to do so, because design evaluation is integral to innovation and the creation of novel solutions. The most common metrics used are the consensual assessment technique (CAT) and the Shah, Vargas-Hernandez, and Smith (SVS) method. While CAT is accurate and often regarded as the “gold standard,” it heavily relies on using expert ratings as a basis for judgement, making CAT expensive and time-consuming. Comparatively, SVS is less resource-demanding, but it is often criticized as lacking sensitivity and accuracy. We aim to take advantage of the distinct strengths of both methods through machine learning. More specifically, this study investigates the possibility of using machine learning to facilitate automated creativity assessment. The SVS method results in a text-rich dataset about a design. In this paper we utilize these textual design representations and the deep semantic relationships that words and sentences encode to predict more desirable design metrics, including CAT metrics. We demonstrate the ability of machine learning models to predict design metrics from the design itself and SVS survey information. We demonstrate that incorporating natural language processing (NLP) improves prediction results across all of our design metrics, and that clear distinctions in the predictability of certain metrics exist. Our code and additional information about our work are available at http://decode.mit.edu/projects/nlp-design-eval/.
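One simple way to turn SVS-style free-text design descriptions into features a model can learn from is bag-of-words TF-IDF weighting. The sketch below is a generic illustration under that assumption, with made-up design descriptions; it is not the authors' pipeline, which the linked project page documents.

```python
import math
from collections import Counter

def tfidf(docs):
    """Bag-of-words TF-IDF vectors for a small corpus of free-text
    design descriptions. Returns one {word: weight} dict per document."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents each word appears.
    df = Counter(w for toks in tokenized for w in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: (tf[w] / len(toks)) * math.log(n / df[w])
                        for w in tf})
    return vectors

docs = [
    "a folding milk frother with a novel hinge",
    "a standard handheld milk frother",
]
vecs = tfidf(docs)
# Words shared by every description get zero weight; distinctive ones do not.
print(vecs[0]["novel"] > 0 and vecs[0]["milk"] == 0)
```

Feature vectors like these can then be fed to any regressor to predict expert-assigned metrics such as CAT scores; modern pipelines would typically use pretrained sentence embeddings instead of raw TF-IDF.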


2021 ◽  
Vol 1 ◽  
pp. 263-272
Author(s):  
Yuan Yin ◽  
Ji Han ◽  
Shu Huang ◽  
Haoyu Zuo ◽  
Peter Childs

Abstract This paper asked participants to assess four selected expert-rated Taiwan International Student Design Competition (TISDC) products using four methods: the Consensual Assessment Technique (CAT), the Creative Product Semantic Scale (CPSS), the Product Creativity Measurement Instrument (PCMI), and the revised Creative Solution Diagnosis Scale (rCSDS). The results revealed that, between experts and non-experts, the ranking results of the CAT and CPSS were the same, while the ranking results of the rCSDS differed. The CAT, CPSS, and TISDC methods produced the same results, indicating that raters may return the same creativity rankings regardless of which of these methods is selected. If non-experts must be used to assess creativity and their results are expected to match those of experts, asking non-expert raters to use the CPSS and then ranking the creativity scores is more reliable. The study contributes to the creativity domain by helping decide which methods may be more reliable from a comparison perspective.


Author(s):  
Анастасия Александровна Собянина

Various approaches to defining the concept of "creative potential" are considered. The article presents the results of a study of the creative potential of primary school students in classes on creating cartoon animation. It is hypothesized that including cartoon-animation classes in the educational process contributes to developing students' creative potential. An experimental study was conducted with two groups of students in grades 2-3 (104 participants) using a set of instruments: J. Renzulli's creativity questionnaire, N. G. Luskanova's questionnaire for assessing school motivation, G. Davis's questionnaire, and F. Williams's Creativity Assessment Packet (CAP). The results suggest that cartoon animation is a promising direction for developing the creative potential of primary school students.

