Case-method teaching: advantages and disadvantages in organizational training

2018 ◽  
Vol 37 (9/10) ◽  
pp. 711-720 ◽  
Author(s):  
Naghi Radi Afsouran ◽  
Morteza Charkhabi ◽  
Seyed Ali Siadat ◽  
Reza Hoveida ◽  
Hamid Reza Oreyzi ◽  
...  

Purpose
The purpose of this paper is to introduce case-method teaching (CMT), its advantages and disadvantages for organizational training, and to compare it with current training methods.

Design/methodology/approach
The authors applied a systematic literature review to define CMT and to compare it with current training methods.

Findings
In CMT, participants engage with real-world challenges from an action perspective instead of analyzing them from a distance. The differing reactions of participants to the same challenge also help instructors identify individual differences in how participants approach it. Although CMT is not yet a popular organizational training method, its advantages may encourage organizational instructors to apply it more widely: it improves long-term memory, enhances the quality of decision making, and reveals individual differences among participants.

Research limitations/implications
A lack of sufficient empirical research and the high cost of conducting the method may prevent practitioners from applying it.

Originality/value
The review suggests that CMT can bring dilemmas from the real world into training settings. It also helps organizations identify individual reactions before decisions are made.

Author(s):  
Harold Stanislaw

Two hundred forty subjects working alone and in pairs performed three different versions of a task similar to industrial inspection: a rating task and spatial and temporal two-alternative forced-choice (2AFC) tasks. Performance was worse on the rating task than on the 2AFC tasks, and the spatial and temporal 2AFC tasks were performed equally well. These results could signify that performance is impaired more by demands made on long-term memory than by demands made on perception and sensory memory, or that asking subjects to compare items is fundamentally different from, and easier than, asking subjects to judge items in absolute terms. Individual differences in performance were marked, but performance was inconsistent across different versions of the inspection task. When subjects worked in pairs, performance was comparable to that obtained by requiring items to pass two inspections by individual subjects. However, a single inspection by subject pairs required less time than two inspections by individual subjects. The practical implications of these findings are discussed.
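To make the double-inspection comparison concrete, here is a minimal Monte Carlo sketch. The detection probability, defect rate, and per-item time are hypothetical values of our own, not figures from the study; it only illustrates why two independent serial inspections catch a defect with probability 1 - (1 - d)^2 while doubling inspection time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the study): each individual
# inspector flags a defective item with probability d and spends
# t seconds per item.
d, t = 0.7, 5.0
n_items = 100_000
defective = rng.random(n_items) < 0.1   # assumed 10% defect rate

# Two serial inspections by individuals: a defect escapes only if
# both inspectors miss it.
caught_1 = defective & (rng.random(n_items) < d)
caught_2 = defective & ~caught_1 & (rng.random(n_items) < d)
serial_detection = (caught_1 | caught_2).sum() / defective.sum()

print(f"two serial inspections: detection ~ {serial_detection:.3f}, "
      f"time = {2 * t:.0f} s/item")
print(f"closed form: 1 - (1 - d)^2 = {1 - (1 - d) ** 2:.3f}")
# The study's finding: a single joint inspection by a pair achieved
# comparable performance in less time than two individual inspections.
```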


2018 ◽  
Vol 373 (1756) ◽  
pp. 20170291 ◽  
Author(s):  
Sarah Dalesman

Individual differences in cognitive ability are predicted to covary with other behavioural traits such as exploration and boldness. Selection within different habitats may act to either enhance or break down covariance among traits; alternatively, changing the environmental context in which traits are assessed may result in plasticity that alters trait covariance. Pond snails, Lymnaea stagnalis, from two laboratory strains (more than 20 generations in captivity) and F1 laboratory-reared snails from six wild populations were tested for long-term memory and exploration traits (speed and thigmotaxis) following maintenance in grouped and isolated conditions to determine if isolation: (i) alters memory and exploration; and (ii) alters covariance between memory and exploration. Populations that demonstrated strong memory formation (longer duration) under grouped conditions demonstrated weaker memory formation and reduced both speed and thigmotaxis following isolation. In wild populations, snails showed no relationship between memory and exploration in grouped conditions; however, following isolation, exploration behaviour was negatively correlated with memory, i.e. slow explorers showing low levels of thigmotaxis formed stronger memories. Laboratory strains demonstrated no covariance among exploration traits and memory, independent of context. Together these data demonstrate that the relationship between cognition and exploration traits can depend on both habitat and context-specific trait plasticity. This article is part of the theme issue ‘Causes and consequences of individual differences in cognitive abilities’.
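As an illustration of the covariance analysis described above, the sketch below computes the memory-exploration correlation separately per condition. The data are simulated under assumed effect sizes chosen only to mimic the reported pattern; they are not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 40  # hypothetical number of snails per condition

# Grouped condition: memory and exploration vary independently.
memory_grouped = rng.normal(0.0, 1.0, n)
explore_grouped = rng.normal(0.0, 1.0, n)

# Isolated condition: slower explorers form stronger memories,
# i.e. a negative memory-exploration correlation (assumed slope).
memory_isolated = rng.normal(0.0, 1.0, n)
explore_isolated = -0.6 * memory_isolated + rng.normal(0.0, 0.8, n)

for label, m, e in [("grouped", memory_grouped, explore_grouped),
                    ("isolated", memory_isolated, explore_isolated)]:
    r, p = pearsonr(m, e)
    print(f"{label:>8}: r = {r:+.2f}, p = {p:.3f}")
```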


2009 ◽  
pp. 465-482
Author(s):  
Christof van Nimwegen ◽  
Hermina Tabachneck-Schijf ◽  
Herre van Oostendorp

How can we design technology that suits human cognitive needs? In this chapter, we review research on the effects of externalizing information on the interface versus requiring people to internalize it. We discuss the advantages and disadvantages of externalizing information. Further, we discuss some of our own research investigating how externalizing or not externalizing information in program interfaces influences problem-solving performance. In general, externalization provides information relevant to immediate task execution visibly or audibly in the interface. Thus, remembering certain task-related knowledge becomes unnecessary, which relieves working memory. Examples are visual feedback aids such as “graying out” nonapplicable menu items. In contrast, when certain needed task-related information is not externalized on the interface, it needs to be internalized, i.e. stored in working memory and long-term memory. In many task situations, having the user acquire more knowledge of the structure of the task or its underlying rules is desirable. We examined the hypothesis that while externalization will yield better performance during initial learning, internalization will yield better performance later. We furthermore expected internalization to result in better knowledge, and expected it to provoke less trial-and-error behavior. We conducted an experiment comparing an interface that externalized certain information with one that did not, and measured performance and knowledge. In a second session 8 months later, we investigated what was left of the participants’ knowledge and skills, and presented them with a transfer task. The results showed that requiring internalization can yield advantages over having all information immediately at hand. This shows that using cognitive findings to enhance the effectiveness of software (especially software with specific purposes) can make a valuable contribution to the field of human-computer interaction.
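As a concrete illustration of externalization, the toy sketch below (our own example, not the authors' experimental software) grays out menu items that do not apply in the current state, so the task rule is visible on the interface; with externalization off, the user must internalize the rule and discover it by trial and error.

```python
# Toy interface state and menu; names are illustrative only.
MENU = ["open", "save", "close"]

def applicable(item: str, state: dict) -> bool:
    # Task rule: "save" and "close" only apply once a file is open.
    return item == "open" or state["file_open"]

def render(state: dict, externalize: bool) -> list[str]:
    lines = []
    for item in MENU:
        if externalize and not applicable(item, state):
            lines.append(f"({item})")   # grayed out: the rule is visible
        else:
            lines.append(item)          # the rule must be remembered
    return lines

state = {"file_open": False}
print("externalized:", render(state, externalize=True))   # ['open', '(save)', '(close)']
print("internalized:", render(state, externalize=False))  # ['open', 'save', 'close']
```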


Author(s):  
Stephen Grossberg

A historical overview is given of interdisciplinary work in physics and psychology by some of the greatest nineteenth-century scientists, and why the fields split, leading to a century of ferment before the current scientific revolution in mind-brain sciences began to understand how we autonomously adapt to a changing world. New nonlinear, nonlocal, and nonstationary intuitions and laws are needed to understand how brains make minds. Work of Helmholtz on vision illustrates why he left psychology. His concept of unconscious inference presaged modern ideas about learning, expectation, and matching that this book scientifically explains. The fact that brains are designed to control behavioral success has profound implications for the methods and models that can unify mind and brain. Backward learning in time, and serial learning, illustrate why neural networks are a natural language for explaining brain dynamics, including the correct functional stimuli and laws for short-term memory (STM), medium-term memory (MTM), and long-term memory (LTM) traces. In particular, brains process spatial patterns of STM and LTM, not just individual traces. A thought experiment leads to universal laws for how neurons, and more generally all cellular tissues, process distributed STM patterns in cooperative-competitive networks without experiencing contamination by noise or pattern saturation. The chapter illustrates how thinking this way leads to unified and principled explanations of huge databases. A brief history of the advantages and disadvantages of the binary, linear, and continuous-nonlinear sources of neural models is described, and how models like Deep Learning and the author’s contributions fit into it.
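To give a flavor of the cooperative-competitive dynamics mentioned above, here is a sketch of Grossberg's shunting on-center off-surround equation, dx_i/dt = -A x_i + (B - x_i) I_i - x_i Σ_{k≠i} I_k; the parameter values are arbitrary demo choices. At equilibrium x_i = B I_i / (A + Σ_k I_k), so activities stay bounded below B (no saturation) while the relative pattern of inputs is preserved as total intensity grows.

```python
import numpy as np

A, B = 1.0, 1.0
I = np.array([1.0, 2.0, 4.0])           # an input pattern (arbitrary)

for scale in (1.0, 10.0, 100.0):        # same pattern, rising intensity
    inputs = scale * I
    x = np.zeros_like(inputs)
    dt = 0.001
    for _ in range(5000):               # Euler integration to equilibrium
        x += dt * (-A * x + (B - x) * inputs - x * (inputs.sum() - inputs))
    print(f"scale {scale:6.1f}: x = {np.round(x, 3)}, "
          f"ratios = {np.round(x / x.sum(), 3)}")

# The ratios stay fixed while each x_i remains below B: the network
# processes the spatial pattern of activity, not raw intensities.
```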


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Gianluca Scuderi ◽  
Daniela Iacovello ◽  
Federica Pranno ◽  
Pasquale Plateroti ◽  
Luca Scuderi

The purpose of this paper is to review the surgical options available for the management of pediatric glaucoma and to evaluate their advantages and disadvantages together with their long-term efficacy, with the intent of giving physicians guidelines on which elements to consider when making a surgical decision. A range of surgical procedures is currently used for the management of pediatric glaucoma. Some are completely new approaches, while others are improvements of more traditional procedures. Across this range of surgical options, angle surgery remains the first choice in mild cases, and both goniotomy and trabeculotomy have good success rates. Trabeculectomy with or without mitomycin C (MMC) is preferred in refractory cases, in aphakic eyes, and in older children. Glaucoma drainage implants (GDIs) have a good success rate in aphakic eyes. Nonpenetrating deep sclerectomy is still rarely used; nevertheless, the results of ongoing studies are encouraging. The different clinical situations should always be weighed against the risks the procedures pose for the individual patient. Glaucomatous progression can occur many years after stabilization and at any time during follow-up; for this reason, life-long assessment is necessary.
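Purely to show how the review's guidance fits together, the stated preferences can be restated as a toy lookup (our illustration, not clinical advice; the function and parameter names are ours):

```python
def suggested_procedures(refractory: bool, aphakic: bool,
                         older_child: bool) -> list[str]:
    """Restate the review's stated preferences; illustrative only."""
    if not (refractory or aphakic or older_child):
        # Angle surgery remains the first choice in mild cases.
        return ["goniotomy", "trabeculotomy"]
    options = ["trabeculectomy with/without mitomycin C (MMC)"]
    if aphakic:
        options.append("glaucoma drainage implant (GDI)")
    return options

print(suggested_procedures(refractory=False, aphakic=False, older_child=False))
print(suggested_procedures(refractory=True, aphakic=True, older_child=True))
```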


2019 ◽  
Vol 28 (6) ◽  
pp. 607-613
Author(s):  
Kathleen B. McDermott ◽  
Christopher L. Zerr

Most research on long-term memory uses an experimental approach whereby participants are assigned to different conditions, and condition means are the measures of interest. This approach has demonstrated repeatedly that conditions that slow the rate of learning tend to improve later retention. A neglected question is whether aggregate findings at the level of the group (i.e., slower learning tends to improve retention) translate to the level of individual people. We identify a discrepancy whereby—across people—slower learning tends to coincide with poorer memory. The positive relation between learning rate (speed of learning) and retention (amount remembered after a delay) across people is referred to as learning efficiency. A more efficient learner can acquire information faster and remember more of it over time. We discuss potential characteristics of efficient learners and consider future directions for research.
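The across-person pattern can be illustrated with a small simulation. The generative story is hypothetical (not the authors' data): if a single latent efficiency drives both learning speed and retention, then trials-to-criterion and delayed retention correlate negatively across people, even though slowing learning experimentally can still help retention at the group level.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people = 200

# Hypothetical latent trait: learning efficiency.
efficiency = rng.normal(0.0, 1.0, n_people)

# Efficient learners need fewer trials to reach criterion...
trials_to_criterion = 10.0 - 2.0 * efficiency + rng.normal(0.0, 1.0, n_people)
# ...and retain a larger proportion after a delay.
retention = np.clip(0.6 + 0.1 * efficiency + rng.normal(0.0, 0.05, n_people), 0, 1)

r = np.corrcoef(trials_to_criterion, retention)[0, 1]
print(f"across people: r(trials to criterion, retention) = {r:+.2f}")
# Negative: people who learn in fewer trials also remember more.
```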


Author(s):  
Stoo Sepp ◽  
Steven J. Howard ◽  
Sharon Tindall-Ford ◽  
Shirley Agostinho ◽  
Fred Paas

In 1956, Miller first reported on a capacity limitation in the amount of information the human brain can process, which was thought to be seven plus or minus two items. The system of memory used to process information for immediate use was coined “working memory” by Miller, Galanter, and Pribram in 1960. In 1968, Atkinson and Shiffrin proposed their multistore model of memory, which theorized that the memory system was separated into short-term memory, long-term memory, and the sensory register, the latter of which temporarily holds and forwards information from sensory inputs to short-term memory for processing. Baddeley and Hitch built upon the concept of multiple stores, leading to the development of the multicomponent model of working memory in 1974, which described two stores devoted to the processing of visuospatial and auditory information, both coordinated by a central executive system. Later, Cowan’s theorizing focused on attentional factors in the effortful and effortless activation and maintenance of information in working memory, and in 1988 he published his scope and control of attention model. In contrast, since the early 2000s Engle has investigated working memory capacity through the lens of his individual differences model, which does not seek to quantify capacity in the same way as Miller or Cowan. Instead, this model describes working memory capacity as the interplay between primary memory (working memory), the control of attention, and secondary memory (long-term memory). This affords the opportunity to focus on individual differences in working memory capacity and extend theorizing beyond storage to the manipulation of complex information. These models and advancements have made significant contributions to our understanding of learning and cognition, informing educational research and practice in particular. Emerging areas of inquiry include investigating the use of gestures to support working memory processing, leveraging working memory measures as a means to target instructional strategies for individual learners, and working memory training. Given that working memory is still debated, and not yet fully understood, researchers continue to investigate its nature, its role in learning and development, and its implications for educational curricula, pedagogy, and practice.
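To make the multistore architecture concrete, here is a minimal sketch of the sensory-register-to-short-term-to-long-term flow described above (our illustration, not any of these authors' code; capacities and transfer probabilities are arbitrary):

```python
from collections import deque
import random

SHORT_TERM_CAPACITY = 7                 # Miller's "seven plus or minus two"
short_term = deque(maxlen=SHORT_TERM_CAPACITY)   # old items are displaced
long_term: set[str] = set()

def perceive(item: str, attended: bool = True) -> None:
    """Sensory register: only attended input is forwarded; the rest decays."""
    if attended:
        short_term.append(item)

def rehearse(transfer_p: float = 0.3) -> None:
    """Rehearsal gives each short-term item a chance to transfer to LTM."""
    for item in short_term:
        if random.random() < transfer_p:
            long_term.add(item)

for word in "cat dog sun map tea oak ink bus pen".split():
    perceive(word)
rehearse()
print("short-term:", list(short_term))   # at most 7 items survive
print("long-term :", sorted(long_term))
```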


Heliyon ◽  
2020 ◽  
Vol 6 (10) ◽  
pp. e05260
Author(s):  
David Bestue ◽  
Luis M. Martínez ◽  
Alex Gomez-Marin ◽  
Miguel A. Gea ◽  
Jordi Camí

2011 ◽  
Vol 23 (6) ◽  
pp. 768-779 ◽  
Author(s):  
Philip A. Allen ◽  
Kevin Kaut ◽  
Elsa Baena ◽  
Mei-Ching Lien ◽  
Eric Ruthruff
