Rethinking the Comprehensive Test on Qualitative Reasoning for Designers

Author(s):  
Maryam Khorshidi ◽  
Jami J. Shah ◽  
Jay Woodward

A battery of tests assessing the cognitive skills needed for conceptual design is being developed. The tests on divergent thinking and visual thinking are fully developed and validated. The first (alpha) version of the qualitative reasoning test has also been developed; this paper focuses on the lessons learned from administering that version and the improvements made to it since then. A number of problems were developed for each indicator of qualitative reasoning skill (deductive reasoning, inductive reasoning, analogical reasoning, and abductive reasoning). A protocol study was then conducted to verify that the problems assess the intended skills. The problems were also given to a randomly chosen population of senior-level undergraduate and graduate engineering students. From the test results, data were collected on possible correlations between problems (e.g., technical and non-technical problems); feedback on clarity, time allocation, and difficulty was also collected for each problem. Based on the observed correlations, the average performance of the test takers, and test parameters such as validity and reliability, the beta version of the test was constructed.

Author(s):  
Maryam Khorshidi ◽  
Jay Woodward ◽  
Jami J. Shah

A battery of tests for assessing the cognitive skills needed for conceptual design is being developed. The divergent thinking and visual thinking tests were fully developed and validated previously. This paper focuses on the development of a test of qualitative reasoning skill. Indicators of qualitative reasoning are identified and categorized as deductive reasoning, inductive reasoning, analogical reasoning, abductive reasoning, and intuitive physics; the derivation of each is based on both cognitive science and empirical studies of design. The paper also considers metrics for measuring skill levels in different individuals, along with candidate test items and a grading rubric for each skill.


2012 ◽  
Vol 134 (2) ◽  
Author(s):  
Jami J. Shah ◽  
Roger E. Millsap ◽  
Jay Woodward ◽  
S. M. Smith

A number of cognitive skills relevant to conceptual design were identified previously. They include divergent thinking (DT), visual thinking (VT), spatial reasoning (SR), qualitative reasoning (QR), and problem formulation (PF). A battery of standardized tests is being developed for these design skills. This paper focuses only on the divergent thinking test. This particular test has been given to over 500 engineering students and a smaller number of practicing engineers. It is designed to evaluate four direct measures (fluency, flexibility, originality, and quality) and four indirect measures (abstractability, afixability, detailability, and decomplexability). The eight questions on the test overlap in some measures and the responses can be used to evaluate several measures independently (e.g., fluency and originality can be evaluated separately from the same idea set). The data on the twenty-three measured variables were factor analyzed using both exploratory and confirmatory procedures. A four-factor solution with correlated (oblique) factors was deemed the best available solution after examining solutions with more factors. The indirect measures did not appear to correlate strongly either among themselves or with the other direct measures. The four-factor structure was then taken into a confirmatory factor analytic procedure that adjusted for the missing data. It was found to provide a reasonable fit. Estimated correlations among the four factors (F) ranged from a high of 0.32 for F1 and F2 to a low of 0.06 for F3 and F4. All factor loadings were statistically significant.


Author(s):  
Jami J. Shah ◽  
Roger E. Millsap ◽  
Jay Woodward ◽  
S. M. Smith

A number of cognitive skills relevant to conceptual design were identified. They include Divergent Thinking, Visual Thinking, Spatial Reasoning, Qualitative Reasoning, and Problem Formulation. A battery of standardized tests has been developed for these skills. We have previously reported on the contents and rationale for the divergent thinking and visual thinking tests. This paper focuses on data collection and detailed statistical analysis of one test, namely the divergent thinking test. This particular test has been given to over 500 engineering students and a smaller number of practicing engineers. It is designed to evaluate four direct measures (fluency, flexibility, originality, quality) and four indirect measures (abstractability, afixability, detailability, decomplexability). The eight questions on the test overlap in some measures, and the responses can be used to evaluate several measures independently (e.g., fluency and originality can be evaluated separately from the same idea set). The data on the 23 measured variables were factor analyzed using both exploratory and confirmatory procedures. Two variables were dropped from these exploratory analyses for reasons explained in the paper. For the remaining 21 variables, a four-factor solution with correlated (oblique) factors was deemed the best available solution after examining solutions with more factors. Five of the 21 variables did not load meaningfully on any of the four factors. These indirect measures did not appear to correlate strongly either among themselves or with the direct measures. The remaining 16 variables loaded on four factors corresponding to the different measures belonging to each of four questions. In other words, the fluency, flexibility, and originality variables did not form factors limited to these forms of creative thinking; instead, the analyses showed factors associated with the questions themselves (with the exception of questions corresponding to indirect measures). This four-factor structure was then taken into a confirmatory factor analytic procedure that adjusted for the missing data. After some adjustments, the four-factor solution was found to provide a reasonable fit to the data. Estimated correlations among the four factors (F) ranged from a high of .32 for F1 and F2 to a low of .06 for F3 and F4. All factor loadings were statistically significant.


Author(s):  
Patrick C. Kyllonen

Reasoning ability refers to the power and effectiveness of the processes and strategies used in drawing inferences, reaching conclusions, arriving at solutions, and making decisions based on available evidence. The topic of reasoning abilities is multidisciplinary—it is studied in psychology (differential and cognitive), education, neuroscience, genetics, philosophy, and artificial intelligence. There are several distinct forms of reasoning, implicating different reasoning abilities. Deductive reasoning involves drawing conclusions from a set of given premises in the form of categorical syllogisms (e.g., all x are y) or symbolic logic (e.g., if p then q). Inductive reasoning involves the use of examples to suggest a rule that can be applied to new instances, invoked, for example, when drawing inferences about a rule that explains a series (of numbers, letters, events, etc.). Abductive reasoning involves arriving at the most likely explanation for a set of facts, such as a medical diagnosis to explain a set of symptoms, or a scientific theory to explain a set of empirical findings. Bayesian reasoning involves computing probabilities on conclusions based on prior information. Analogical reasoning involves coming to an understanding of a new entity through how it relates to an already familiar one. The related idea of case-based reasoning involves solving a problem (a new case) by recalling similar problems encountered in the past (past cases or stored cases) and using what worked for those similar problems to help solve the current one. 
Some of the key findings on reasoning abilities are that (a) they are important in school, the workplace, and life, (b) there is not a single reasoning ability but multiple reasoning abilities, (c) the ability to reason is affected by the content and context of reasoning, (d) it is difficult to accelerate the development of reasoning ability, and (e) reasoning ability is limited by working memory capacity, and sometimes by heuristics and strategies that are often useful but that can occasionally lead to distorted reasoning. Several topics related to reasoning abilities appear under different headings, such as problem solving, judgment and decision-making, and critical thinking. Increased attention is being paid to reasoning about emotions and reasoning speed. Reasoning ability is and will remain an important topic in education.
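The Bayesian reasoning described above reduces to an application of Bayes' rule. As a minimal sketch (the numbers are hypothetical, not drawn from the article), here is the classic diagnostic example of computing the probability of a condition given a positive test:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    # Total probability of the evidence (a positive test result).
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

# Hypothetical diagnostic example: rare condition, fairly accurate test.
p = posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.05)
print(round(p, 3))  # → 0.154
```

Even with a 90%-sensitive test, the posterior stays low because the prior is small; this base-rate effect is exactly the kind of result that the heuristics mentioned below can distort.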


2014 ◽  
Vol 136 (10) ◽  
Author(s):  
Maryam Khorshidi ◽  
Jami J. Shah ◽  
Jay Woodward

Past studies have identified the following cognitive skills relevant to conceptual design: divergent thinking, spatial reasoning, visual thinking, abstract reasoning, and problem formulation. Standardized tests are being developed to assess these skills. The tests on divergent thinking and visual thinking are fully developed and validated; this paper focuses on the development of a test of abstract reasoning in the context of engineering design. As in the two previous papers, this paper reports on the theoretical and empirical basis for skill identification and test development. Cognitive studies of human problem solving and design thinking revealed four indicators of abstract reasoning: qualitative deductive reasoning (DR), qualitative inductive reasoning (IR), analogical reasoning (AnR), and abductive reasoning (AbR). Each of these is characterized in terms of measurable indicators. The paper presents test construction procedures, trial runs, data collection, norming studies, and test refinement. Initial versions of the test were given to approximately 250 subjects to determine the clarity of the test problems and the time allocation, and to gauge the difficulty level. A protocol study was also conducted to assess content validity. The beta version was given to approximately 100 students, and the data collected were used for norming studies and test validation. Analysis of the test results suggested high internal consistency; factor analysis revealed four eigenvalues above 1.0, indicating that the test assesses four different subskills (as initially proposed by the four indicators). The composite Cronbach's alpha for all of the factors together was found to be 0.579. Future research will address criterion validity.
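The internal-consistency figure quoted above (Cronbach's alpha) can be computed directly from raw item scores. A minimal sketch in Python, using made-up scores rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-item score lists (one list per item).

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = len(items)              # number of test items
    n = len(items[0])           # number of subjects

    def var(xs):                # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each subject's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical scores for 3 items answered by 4 subjects.
scores = [[2, 4, 3, 5], [3, 5, 4, 5], [1, 4, 4, 5]]
print(round(cronbach_alpha(scores), 3))  # → 0.939
```

Values near 1.0 indicate that the items covary strongly; the 0.579 composite reported above is consistent with a test deliberately built from four distinct subskills rather than one homogeneous scale.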


2013 ◽  
Vol 135 (7) ◽  
Author(s):  
Jami J. Shah ◽  
Jay Woodward ◽  
Steven M. Smith

A number of cognitive skills relevant to conceptual design have been previously identified: divergent thinking, visual thinking, spatial reasoning, qualitative reasoning, and problem formulation. A battery of standardized tests has been developed for these skills. This is the second paper in a series on testing individual skill-level differences in engineers and engineering students. In the first paper, we reported on the theoretical and empirical basis for the divergent thinking test, as well as on test formulation, data collection, norming studies, and statistical validation of that test. This paper focuses similarly on visual thinking and spatial reasoning in an engineering context. We have decomposed visual thinking into six categories: visual comprehension (including perceptual speed), visual memory, visual synthesis, mental image manipulation/transformation, spatial reasoning, and graphical expression/elaboration. We discuss the theoretical basis of a comprehensive test for engineers, test composition, trial runs, and computation of reliability measures. The alpha version was given to a small set of subjects to determine the clarity of the questions and gauge the difficulty level. The beta version was used for norming and test validation with over 500 samples that included engineering students and a smaller number of practicing engineers. Construct validity was achieved by basing our instrument on other well-known measures of visual thinking, while content validity was assured by thoroughly sampling the domain of visual thinking and including a variety of items both pertinent and specific to the engineering design process. The factor analysis reveals possibly two eigenvalues above 1.0, an indication that the instrument is stable and accurate. We emphasize that these tests depend not only on native abilities but also on education and experience; design skills are teachable and learnable.
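The "eigenvalues above 1.0" criterion used here and in the abstract above (the Kaiser criterion) refers to eigenvalues of the inter-item correlation matrix. As an illustrative sketch with a hypothetical 3×3 correlation matrix (not the study's data), the dominant eigenvalue can be estimated by power iteration:

```python
def largest_eigenvalue(matrix, iters=100):
    """Dominant eigenvalue of a symmetric matrix via power iteration."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]   # re-normalize each step
    # Rayleigh quotient v'Mv / v'v gives the eigenvalue estimate.
    mv = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(mv[i] * v[i] for i in range(n)) / sum(x * x for x in v)

# Hypothetical 3-item correlation matrix.
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
lam = largest_eigenvalue(R)
print(lam > 1.0)  # Kaiser criterion: retain factors with eigenvalue > 1
```

Each retained eigenvalue above 1.0 accounts for more variance than a single item, which is why the count of such eigenvalues is read as the number of distinct subskills the test measures.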


2014 ◽  
Vol 1 (1) ◽  
pp. 111-114
Author(s):  
Lal Mohan Baral ◽  
Ramzan Muhammad ◽  
Claudiu Vasile Kifor ◽  
Ioan Bondrea

Problem-based learning (PBL) as a teaching tool is now used globally in many areas of higher education. It provides an opportunity for students to explore technical problems from a system-level perspective and to become self-directed, lifelong learners, which is essential for equipping engineering students with skills and knowledge. This paper presents a case study illustrating the effectiveness of PBL implemented over five semesters in the undergraduate Textile Engineering programs at Ahsanullah University of Science and Technology (AUST). An assessment was made on the basis of feedback from the students as well as their employers, using an empirical survey to evaluate the impact of PBL on enhancing student competencies. The evaluations indicate that students achieved remarkable competencies through PBL practices, which helped them become competent in their professional lives.


Author(s):  
Mattia Vettorello ◽  
Boris Eisenbart ◽  
Charlie Ranscombe

To be successful in innovation, organisations need to adapt dynamically to novel situations to avoid being 'left behind'. Yet they face vast uncertainties stemming from unforeseeable technological shifts or future user and market behaviour, making strategic decision-making on innovation an extremely difficult task. Decision-makers thus increasingly try to control or shape the future rather than foresee it. This includes thinking ahead and generating potential pathways that will make an innovation viable, which captures the essence of designerly ways of thinking in reasoning toward 'what might be'. Extant literature discussing alternative strategies for applying this future-oriented thinking to become better at selecting novel ideas for development has been reviewed. We observe parallels among the divergent thinking, abductive reasoning, analogising, and lateral thinking suggested by different authors in this process. The paper then proposes how these key mechanisms can be embedded within an existing framework for decision-making under uncertainty, the 'OODA Loop', which has seen increasing uptake in such decision-making scenarios.


2019 ◽  
Vol 12 (1) ◽  
pp. 110 ◽  
Author(s):  
Miguel Romero Di Biasi ◽  
Guillermo Eliecer Valencia ◽  
Luis Guillermo Obregon

This article presents the application of a new educational thermodynamics software package called MOLECULARDISORDER, based on graphical user interfaces created in Matlab®, to promote critical thinking in young engineering students by means of energy and entropy balance applications in different systems. Statistics of the results obtained by the students are shown to determine the influence of the software on promoting critical thinking in a regular thermodynamics course. Two case studies were completed by the students, in which parameters such as the temperature of the fluid and metal surfaces, the pressure of the system, the mass of the fluid and solid, the volume, and the velocity of the fluid were used to obtain output variables such as enthalpy, entropy, changes in entropy, entropy production, and energy transfer in the chosen system. Four cognitive skills (CS) were considered to evaluate the competencies of interpreting, arguing, proposing, and interacting with the different graphical user interfaces: argumentative claim (CS1), modeling (CS2), interpreting data/information (CS3), and organization (CS4). Student's t-test was used to compare the degree of difficulty of each criterion. The case studies were evaluated first without the software and then with it, to quantify the software's effect. A population of 130 students was used for the statistical analysis, with a significance level of 5%. With the help of the software, the students improved on case study 1: the p-value obtained was 0.03, indicating a significant difference between the results before and after using the software. The overall average grade for case study 1 increased after using the software from 3.74 to 4.04; the overall averages for case study 2 were also higher after using the software, rising from 3.44 to 3.75.
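The before/after grade comparison described above is the setting for a paired Student's t-test, since each student is measured twice. A minimal sketch with hypothetical grades (not the study's data):

```python
import math

def paired_t(before, after):
    """Paired Student's t statistic on matched before/after samples."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))

# Hypothetical grades for 5 students before/after using the software.
before = [3.0, 3.5, 4.0, 3.2, 3.8]
after = [3.4, 3.9, 4.1, 3.6, 4.0]
t = paired_t(before, after)
print(round(t, 2))  # → 4.74
```

The t statistic is then compared against the t distribution with n − 1 degrees of freedom to obtain a p-value, such as the 0.03 reported above at the 5% significance level.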

