Towards a Comprehensive Test of Qualitative Reasoning Skill in Design

Author(s):  
Maryam Khorshidi ◽  
Jay Woodward ◽  
Jami J. Shah

A battery of tests for assessing the cognitive skills needed for conceptual design is being developed. Divergent thinking and visual thinking tests were fully developed and validated previously. This paper focuses on the development of a test of qualitative reasoning skill. Indicators of qualitative reasoning are identified and categorized as deductive reasoning, inductive reasoning, analogical reasoning, abductive reasoning, and intuitive physics; the derivation of each is based on both cognitive science and empirical studies of design. The paper also considers metrics for measuring skill levels in different individuals, along with candidate test items and a grading rubric for each skill.

Author(s):  
Maryam Khorshidi ◽  
Jami J. Shah ◽  
Jay Woodward

A battery of tests assessing the cognitive skills needed for conceptual design is being developed. Tests of divergent thinking and visual thinking are fully developed and validated. The first (alpha) version of the qualitative reasoning test has also been developed; this paper focuses on the lessons learned from administering the alpha version and the improvements made to it since then. A number of problems were developed for each indicator of qualitative reasoning skill (deductive reasoning, inductive reasoning, analogical reasoning, and abductive reasoning). A protocol study was then conducted to verify that the problems assess the intended skills. The problems were also given to a randomly chosen population of senior-level undergraduate and graduate engineering students. Data were collected from the test results on possible correlations between the problems (e.g., technical and non-technical problems); feedback on clarity, time allocation, and difficulty was also collected for each problem. Based on the observed correlations, the average performance of the test takers, and test parameters such as validity and reliability, the beta version of the test was constructed.


Author(s):  
Patrick C. Kyllonen

Reasoning ability refers to the power and effectiveness of the processes and strategies used in drawing inferences, reaching conclusions, arriving at solutions, and making decisions based on available evidence. The topic of reasoning abilities is multidisciplinary—it is studied in psychology (differential and cognitive), education, neuroscience, genetics, philosophy, and artificial intelligence. There are several distinct forms of reasoning, implicating different reasoning abilities. Deductive reasoning involves drawing conclusions from a set of given premises in the form of categorical syllogisms (e.g., all x are y) or symbolic logic (e.g., if p then q). Inductive reasoning involves the use of examples to suggest a rule that can be applied to new instances, invoked, for example, when drawing inferences about a rule that explains a series (of numbers, letters, events, etc.). Abductive reasoning involves arriving at the most likely explanation for a set of facts, such as a medical diagnosis to explain a set of symptoms, or a scientific theory to explain a set of empirical findings. Bayesian reasoning involves computing probabilities on conclusions based on prior information. Analogical reasoning involves coming to an understanding of a new entity through how it relates to an already familiar one. The related idea of case-based reasoning involves solving a problem (a new case) by recalling similar problems encountered in the past (past cases or stored cases) and using what worked for those similar problems to help solve the current one. Some of the key findings on reasoning abilities are that (a) they are important in school, the workplace, and life, (b) there is not a single reasoning ability but multiple reasoning abilities, (c) the ability to reason is affected by the content and context of reasoning, (d) it is difficult to accelerate the development of reasoning ability, and (e) reasoning ability is limited by working memory capacity, and sometimes by heuristics and strategies that are often useful but that can occasionally lead to distorted reasoning. Several topics related to reasoning abilities appear under different headings, such as problem solving, judgment and decision-making, and critical thinking. Increased attention is being paid to reasoning about emotions and reasoning speed. Reasoning ability is and will remain an important topic in education.
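As an illustrative aside (not drawn from the abstract above), the Bayesian reasoning mentioned here can be made concrete with a small worked example; the prior, likelihood, and evidence figures below are hypothetical.

```python
# Hypothetical illustration of Bayesian reasoning: updating belief in a
# diagnosis D after observing a symptom S, via Bayes' rule
# P(D | S) = P(S | D) * P(D) / P(S).

prior_d = 0.01          # assumed prior probability of the diagnosis
p_s_given_d = 0.90      # assumed probability of the symptom if D is present
p_s_given_not_d = 0.05  # assumed probability of the symptom otherwise

# Total probability of observing the symptom (law of total probability).
p_s = p_s_given_d * prior_d + p_s_given_not_d * (1 - prior_d)

# Posterior probability of the diagnosis given the symptom.
posterior_d = p_s_given_d * prior_d / p_s
print(f"P(D | S) = {posterior_d:.3f}")  # ~0.154 with these assumed numbers
```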


2012 ◽  
Vol 134 (2) ◽  
Author(s):  
Jami J. Shah ◽  
Roger E. Millsap ◽  
Jay Woodward ◽  
S. M. Smith

A number of cognitive skills relevant to conceptual design were identified previously. They include divergent thinking (DT), visual thinking (VT), spatial reasoning (SR), qualitative reasoning (QR), and problem formulation (PF). A battery of standardized tests is being developed for these design skills. This paper focuses only on the divergent thinking test. This particular test has been given to over 500 engineering students and a smaller number of practicing engineers. It is designed to evaluate four direct measures (fluency, flexibility, originality, and quality) and four indirect measures (abstractability, afixability, detailability, and decomplexability). The eight questions on the test overlap in some measures and the responses can be used to evaluate several measures independently (e.g., fluency and originality can be evaluated separately from the same idea set). The data on the twenty-three measured variables were factor analyzed using both exploratory and confirmatory procedures. A four-factor solution with correlated (oblique) factors was deemed the best available solution after examining solutions with more factors. The indirect measures did not appear to correlate strongly either among themselves or with the other direct measures. The four-factor structure was then taken into a confirmatory factor analytic procedure that adjusted for the missing data. It was found to provide a reasonable fit. Estimated correlations among the four factors (F) ranged from a high of 0.32 for F1 and F2 to a low of 0.06 for F3 and F4. All factor loadings were statistically significant.


Author(s):  
Jami J. Shah ◽  
Roger E. Millsap ◽  
Jay Woodward ◽  
S. M. Smith

A number of cognitive skills relevant to conceptual design were identified. They include Divergent Thinking, Visual Thinking, Spatial Reasoning, Qualitative Reasoning, and Problem Formulation. A battery of standardized tests has been developed for these skills. We have previously reported on the contents and rationale for the divergent thinking and visual thinking tests. This paper focuses on data collection and detailed statistical analysis of one test, namely the divergent thinking test. This particular test has been given to over 500 engineering students and a smaller number of practicing engineers. It is designed to evaluate four direct measures (fluency, flexibility, originality, quality) and four indirect measures (abstractability, afixability, detailability, decomplexability). The eight questions on the test overlap in some measures, and the responses can be used to evaluate several measures independently (e.g., fluency and originality can be evaluated separately from the same idea set). The data on the 23 measured variables were factor analyzed using both exploratory and confirmatory procedures. Two variables were dropped from these exploratory analyses for reasons explained in the paper. For the remaining 21 variables, a four-factor solution with correlated (oblique) factors was deemed the best available solution after examining solutions with more factors. Five of the 21 variables did not load meaningfully on any of the four factors; these indirect measures did not appear to correlate strongly either among themselves or with the direct measures. The remaining 16 variables loaded on four factors corresponding to the different measures belonging to each of the four questions. In other words, the fluency, flexibility, and originality variables did not form factors limited to these forms of creative thinking; instead, the analyses showed factors associated with the questions themselves (with the exception of the questions corresponding to indirect measures). This four-factor structure was then taken into a confirmatory factor analytic procedure that adjusted for the missing data. After some adjustments, the four-factor solution was found to provide a reasonable fit to the data. Estimated correlations among the four factors (F) ranged from a high of 0.32 for F1 and F2 to a low of 0.06 for F3 and F4. All factor loadings were statistically significant.
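A minimal sketch of the kind of exploratory factor analysis described above, assuming the item scores sit in a pandas DataFrame and using the third-party factor_analyzer package; the file name, column layout, and choice of oblimin rotation are illustrative assumptions, not details taken from the paper.

```python
# Sketch: exploratory factor analysis with correlated (oblique) factors,
# loosely following the procedure described in the abstract.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file: one row per test taker, one column per measured variable.
scores = pd.read_csv("dt_test_scores.csv")

# Four-factor solution with an oblique (oblimin) rotation, so factors may correlate.
fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(scores)

loadings = pd.DataFrame(fa.loadings_, index=scores.columns)
print(loadings.round(2))          # which variables load on which factor
print(fa.get_factor_variance())   # variance explained by each factor
```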


2014 ◽  
Vol 136 (10) ◽  
Author(s):  
Maryam Khorshidi ◽  
Jami J. Shah ◽  
Jay Woodward

Past studies have identified the following cognitive skills relevant to conceptual design: divergent thinking, spatial reasoning, visual thinking, abstract reasoning, and problem formulation. Standardized tests are being developed to assess these skills. The tests on divergent thinking and visual thinking are fully developed and validated; this paper focuses on the development of a test of abstract reasoning in the context of engineering design. Similar to the two previous papers, this paper reports on the theoretical and empirical basis for skill identification and test development. Cognitive studies of human problem solving and design thinking revealed four indicators of abstract reasoning: qualitative deductive reasoning (DR), qualitative inductive reasoning (IR), analogical reasoning (AnR), and abductive reasoning (AbR). Each of these is characterized in terms of measurable indicators. The paper presents test construction procedures, trial runs, data collection, norming studies, and test refinement. Initial versions of the test were given to approximately 250 subjects to determine the clarity of the test problems and time allocation, and to gauge the difficulty level. A protocol study was also conducted to assess test content validity. The beta version was given to approximately 100 students, and the data collected were used for norming studies and test validation. Analysis of test results suggested high internal consistency; factor analysis revealed four eigenvalues above 1.0, indicating that the test assesses four different subskills (as initially proposed by the four indicators). The composite Cronbach's alpha for all of the factors together was found to be 0.579. Future research will be conducted on criterion validity.
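As an illustration of the internal-consistency statistic reported above, a minimal Cronbach's alpha computation is sketched below; the item-score matrix is hypothetical and the formula is the standard one, not code from the study.

```python
# Sketch: Cronbach's alpha for a matrix of item scores
# (rows = test takers, columns = items), using the standard formula
# alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical scores for 5 test takers on 4 items.
scores = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(scores), 3))
```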


Author(s):  
Arindam Basu

In this paper, we introduce the concepts of critically reading research papers and writing research proposals and reports. Research methods is a general term that includes the processes of observing the world around the researcher, linking background knowledge with foreground questions, drafting a plan for collecting data, framing theories and hypotheses, testing the hypotheses, and finally writing up the research to evoke new knowledge. These processes vary with the themes and disciplines the researcher engages in; nevertheless, common motifs can be found. In this paper, we propose that three methods are interlinked: a deductive reasoning process, in which the structure of a thought can be captured critically; an inductive reasoning method, in which the researcher can appraise and generate generalisable ideas from observations of the world; and finally, an abductive reasoning method, in which the observed phenomena can be explained or accounted for. This step of reasoning is also about framing theories, testing and challenging established knowledge, and finding the theories that best fit the observations. We start with a discussion of the different types of statements that one can come across in scholarly literature, or even in lay or semi-serious literature, how to appraise them, and how to distinguish arguments from non-arguments and explanations from non-explanations. Then we outline three strategies to appraise and identify the reasoning in explanations and arguments. We end with a discussion of how to draft a research proposal and a reading/archiving strategy for research.


Author(s):  
Stefan Helmreich ◽  
Sophia Roosth

This chapter examines how natural philosophers and scientists in the eighteenth, nineteenth, and twentieth centuries employed the term “life form.” It asks how life came to have a form, where the term “life form” came from, and what “life form” has come to mean in the contemporary moment, when it is possible to use the term to refer to as-yet-conjectural manifestations that may redefine the very referent of life itself. To map the historical transformation of the term “life form,” the chapter draws on Raymond Williams's 1976 Keywords, in which Williams offered histories of keywords in social theory, detailing the shifting, contested meanings of such terms as “culture,” “nature,” and “ideology.” Using this approach, the chapter identifies a move from deductive reasoning to inductive reasoning to abductive reasoning.


1986 ◽  
Vol 2 (4) ◽  
pp. 473-486 ◽  
Author(s):  
Catherine A. Clement ◽  
D. Midian Kurland ◽  
Ronald Mawby ◽  
Roy D. Pea

Investigations of the cognitive demands of programming can inform teaching and validate claims that important cognitive skills are inherent in programming. Given reports of experts' use of analogical problem solving in programming, the study reported here related analogical reasoning to Logo programming mastery among high school students. Correlational analyses related pretests of analogical reasoning to posttests of programming mastery. As predicted, a significant correlation was found between analogical reasoning and the ability to write subprocedures which can be reused for several different programs. This sophisticated programming skill requires recognition of structural similarities among distinct programming tasks. A final, general discussion considers analogical reasoning skill as a cognitive demand and consequence of programming.
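To make the notion of a reusable subprocedure concrete, a small sketch follows; it uses Python's standard turtle module rather than Logo, and the procedure and parameter names are illustrative, not taken from the study.

```python
# Illustrative sketch (Python's turtle module, analogous to a Logo subprocedure):
# one parameterized procedure reused across structurally similar drawing tasks.
import turtle

def polygon(t: turtle.Turtle, sides: int, length: float) -> None:
    """Draw a regular polygon; the same procedure serves many programs."""
    for _ in range(sides):
        t.forward(length)
        t.right(360 / sides)

t = turtle.Turtle()
polygon(t, 3, 80)   # triangle
polygon(t, 4, 60)   # square
polygon(t, 6, 40)   # hexagon
turtle.done()
```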


2021 ◽  
Vol 12 ◽  
Author(s):  
Matthias Borgstede ◽  
Marcel Scholz

In this paper, we provide a re-interpretation of qualitative and quantitative modeling from a representationalist perspective. In this view, both approaches attempt to construct abstract representations of empirical relational structures. Whereas quantitative research uses variable-based models that abstract from individual cases, qualitative research favors case-based models that abstract from individual characteristics. Variable-based models are usually stated in the form of quantified sentences (scientific laws). This syntactic structure implies that sentences about individual cases are derived using deductive reasoning. In contrast, case-based models are usually stated using context-dependent existential sentences (qualitative statements). This syntactic structure implies that sentences about other cases are justifiable by inductive reasoning. We apply this representationalist perspective to the problems of generalization and replication. Using the analytical framework of modal logic, we argue that the modes of reasoning are often applied not only to the context that has been studied empirically but also at the between-contexts level. Consequently, quantitative researchers mostly adhere to a top-down strategy of generalization, whereas qualitative researchers usually follow a bottom-up strategy of generalization. Depending on which strategy is employed, the role of replication attempts is very different. In deductive reasoning, replication attempts serve as empirical tests of the underlying theory; therefore, failed replications imply a faulty theory. From an inductive perspective, however, replication attempts serve to explore the scope of the theory. Consequently, failed replications do not question the theory per se, but help to shape its boundary conditions. We conclude that quantitative research may benefit from a bottom-up generalization strategy as it is employed in most qualitative research programs. Inductive reasoning forces us to think about the boundary conditions of our theories and provides a framework for generalization beyond statistical testing. In this perspective, failed replications are just as informative as successful replications, because they help to explore the scope of our theories.
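The syntactic contrast drawn above between variable-based and case-based models can be stated schematically; the formulas below are an illustrative rendering of that contrast, not notation from the paper.

```latex
% Variable-based model: a quantified sentence (scientific law), from which
% statements about individual cases follow deductively.
\forall x \, \bigl( P(x) \rightarrow Q(x) \bigr)

% Case-based model: a context-dependent existential sentence, from which
% statements about other cases are justified inductively.
\exists x \, \bigl( C(x) \wedge P(x) \wedge Q(x) \bigr)
```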


2021 ◽  
Vol 10 (1) ◽  
pp. 351
Author(s):  
Mu'jizatin Fadiana ◽  
Yulaikah Yulaikah ◽  
Lajianto Lajianto

The ability to construct formal mathematical proofs is an important skill that prospective mathematics teachers must master at the undergraduate level. However, prospective mathematics teachers have difficulty constructing proofs in their mathematics courses. Therefore, this study aims to explore the proof-method tendencies of prospective mathematics teachers in their second year of lectures. The method used in this research is quantitative descriptive research. Participants in this study were 30 prospective mathematics teachers at a tertiary institution in Tuban, East Java. The research instrument was a simple mathematical proof-construction task. The results were analyzed using Miyazaki's classification of types of proof, which distinguishes deductive and inductive reasoning. The results showed that prospective mathematics teachers had a greater tendency to use deductive reasoning than inductive reasoning. Type A proof was the most common type of proof. In addition, around 70% of prospective teachers still experienced difficulties in completing the proof task.
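To illustrate the deductive/inductive distinction underlying Miyazaki's classification, a hypothetical pair of arguments for the same claim is sketched below; the example is ours, not taken from the study.

```latex
% Claim: the sum of two even numbers is even.

% Deductive argument (a proof), reasoning from the definitions alone:
\text{Let } m = 2a \text{ and } n = 2b \text{ with } a, b \in \mathbb{Z}.
\quad m + n = 2a + 2b = 2(a + b), \text{ which is even.}

% Inductive argument (example-based, not a proof):
2 + 4 = 6, \quad 6 + 8 = 14, \quad 10 + 12 = 22, \ \ldots
\quad \text{so the sum appears to be even.}
```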

