Games and Exercises for Teaching and Research: Exploring How Learning Varies Based on Fidelity and Participant Experience

Author(s):  
Erica Gralla ◽  
Zoe Szajnfarber

It has long been recognized that games are useful in engineering education, and more recently they have also become a common setting for empirical research. Games are useful for both teaching and research because they mimic aspects of reality, requiring participants to reason within a realistic context, and because they allow researchers to empirically study phenomena that are hard to observe in reality. This paper explores what can be learned by students and by researchers, based on the authors’ experience with two sets of games. These games vary in both the experience level of the participants and the “fidelity,” or realism, of the game itself. Our experience suggests that what can be learned by participants and by researchers depends on both these dimensions. For teaching purposes, inexperienced participants may struggle to connect lessons from medium-fidelity games to the real world. On the other hand, experienced participants may learn more from medium-fidelity games that provide the time and support to practice and reflect on new skills. For research purposes, high-fidelity games are best due to their higher ecological validity, even with inexperienced participants, although experienced participants may enable strong validity in medium-fidelity settings. These findings are based on experience with two games, but they suggest promising directions for future research.

Author(s):  
M. P. Manser ◽  
P. A. Hancock ◽  
C. A. Kinney ◽  
J. Diaz

In recent years, technological limitations have forced the use of an unecological “removal” research scenario. However, with the advent of three-dimensional modeling programs and high-fidelity graphics systems, accurately representing real-world situations in computer-generated worlds has become easier, cheaper, and more realistic. A time-to-contact (TTC) experiment is reported in which the manner of removing an approaching vehicle from the environment was manipulated. One scenario, the disappearance condition, featured the traditional, instantaneous removal of a vehicle. The purpose of this research was to determine whether a more ecological research scenario, one in which the approaching vehicle becomes occluded by a naturally occurring object (the occlusion condition), influences a driver’s ability to estimate TTC accurately. The available visual information was essentially equivalent in both scenarios. If the level of ecological validity had no effect on estimates of TTC, estimates would be expected to be similar between the two scenarios. Results, however, showed estimates with 14 percent greater accuracy in the occlusion condition than in the disappearance condition, implying that researchers have been using a research scenario that biases estimates of TTC. Further, the present findings imply that there are processes occurring in real-world settings that have not been accounted for in previous TTC research.


Author(s):  
Nemanja Dobrota ◽  
Aleksandar Stevanovic ◽  
Nikola Mitrovic

Current signal retiming policies are deficient in recognizing the potential of emerging traffic datasets and simulation tools to improve signal timings. Consequently, current practice advocates the use of periodically collected (low-resolution) traffic datasets and deterministic (low-fidelity) simulation tools. When deployed in the field, such signal timings require excessive fine-tuning. The most recent trends promote the use of high-resolution data collected at 10 Hz frequency. While such an approach shows promise, the process heavily relies on specific datasets that are neither widely available nor clearly integrated into existing signal retiming practices and procedures. Interestingly, data collected in an ongoing fashion and aggregated in several-minute bins (referred to here as medium-resolution) have not received much attention in traditional retiming procedures. This study examines traditional signal retiming practices to provide a contextual framework for the other retiming alternatives. The authors define and classify different resolutions of traffic data used in the signal retiming process and propose a signal retiming procedure based on widely available medium-resolution data and high-fidelity simulation modeling. The authors apply the traditional (low-resolution and low-fidelity) and the proposed (medium-resolution and high-fidelity) approach to a 28-intersection corridor in southeastern Florida. Signal timing plans developed with the proposed approach outperformed both the current field plans and the plans developed with the traditional approach, reducing average delay by between 6.5% and 26%. With regard to the number of stops, changes under both the traditional and proposed approaches were much less significant when compared with the field signal timings. The proposed signal timings increased travel speeds by 4.1%–18%, and delay was not transferred onto neighboring streets, as was the case for plans developed with the traditional approach. Development, calibration, and validation of models within the proposed approach are more time-consuming and challenging than the modeling needs of the traditional approach. One direction for future research should address the automation of calibration and validation procedures; another should be the field evaluation of the proposed signal timing plans.


2021 ◽  
Vol 14 (1) ◽  
pp. 150-174
Author(s):  
Emily R. Weiss ◽  
McWelling Todman ◽  
Özge Pazar ◽  
Sophia Mullens ◽  
Kristin Maurer ◽  
...  

An abundance of empirical research has established a robust, positive association between feelings of boredom and the illusion of temporal slowing. Although state and trait forms of boredom are distinct constructs, the way these variables interact to impact time perception is unknown. To further explore the association between boredom and time perception, a modified replication was conducted of a study that examined the impact of discrepancies between expected and perceived time progression on hedonic appraisals. The paradigm was extended through the inclusion of validated measures of trait and recent state boredom. Seventy-two participants (N = 72, aged 18-52, M = 23.06, SD = 5.73) were led to believe that they would perform an intrinsically unengaging task for 5 (Time Drags), 10 (Real Time), or 15 minutes (Time Flies). Consistent with previous findings, participants in the Time Drags condition reported time as progressing significantly slower than participants in the other two conditions. Moreover, participants in the Time Drags condition rated the task as significantly more aversive than did participants in the Time Flies condition. This association remained significant even when controlling for levels of trait and recent state boredom. However, the Real Time and Time Flies conditions did not differ from one another in terms of task ratings or perceived time progression. Implications of these findings and directions for future research are discussed.


Author(s):  
Julia Anne Yesberg ◽  
Ben Bradford

Collective efficacy is a neighbourhood social process that has important benefits for crime prevention. Policing is thought to be one antecedent to collective efficacy, but the mechanisms by which police activity and officer behaviour are thought to foster collective efficacy are not well understood. This article presents findings from a rapid evidence assessment conducted to take stock of the empirical research on policing and collective efficacy. Thirty-nine studies were identified and examined. Overall, trust in police was the aspect of policing most consistently associated with collective efficacy. There was also some evidence that community policing activities, such as visibility and community engagement, predicted collective efficacy. Police legitimacy, on the other hand, was relatively unrelated to collective efficacy: a finding which suggests perceptions of police linked to the ‘action’ of individual officers may be more enabling of collective efficacy than perceptions of the policing institution as a whole. Implications and directions for future research are discussed.


Author(s):  
Winfred Arthur ◽  
Eric Anthony Day

With a focus on its intersection with the expertise literature, a number of conclusions arise from the present review of the skill/knowledge decay and retention literature. First, decay is more a matter of interference than simply the forgetting of information and processes through the passage of time. Second, decay is highly dependent on task and situational factors. Third, decay on complex tasks appears to be smaller than that observed for simple tasks. Fourth, retention is generally stronger with more practice, elaborative rehearsal, and greater mastery (expertise) of the task. Fifth, although related, retention, reacquisition, and transfer are meaningfully distinct. Sixth, there is very limited empirical research that integrates the study of expertise in the context of skill acquisition with the study of decay, adaptable performance, and enhancing retention (or mitigating loss) in complex real-world performance domains. Intersecting these rich yet separate literatures would be of great theoretical and practical value and warrants future research attention.


1997 ◽  
Vol 15 (2) ◽  
pp. 177-207 ◽  
Author(s):  
Vladimir J. Konečni

The golden section (GS) was investigated in three experiments (N = 91, 87, and 73 psychology students, respectively), using both traditional methods (line bisection, production of rectangles) and novel stimuli (contours and cutouts of vases constructed by GS and non-GS principles) and tasks (the placement of “vases” on an imaginary mantelpiece and on a purpose-built laboratory mantelpiece). In five different tasks, which varied considerably in technical details, there was no evidence for the significance of the GS, nor was there a general preference for the GS vases. Instead, the search for balance seemed to motivate the subjects' mantelpiece placement choices, guided by the area (“weight”), rather than the shape, of the vases. In addition, the results cast serious doubt on the generalizability of conclusions based on research on rectangles to real-world aesthetic objects and choices. Other substantive and methodological issues, especially with regard to future research on the GS and to ecological validity, are discussed.


Author(s):  
P. A. Hancock

Objective: To examine the influences of dynamic conspicuity on object recognition and to evaluate the real-world implications of these processes. Background: Conspicuity is the major influence on persons’ abilities to recognize the presence of entities within their environment. Shortfalls in sensory and cognitive conspicuity are implicated in many, if not most, real-world systemic failures. Method: The present observations derive from an overview of relevant empirical research allied to a synthetic integration. From these foundations, I articulate a proposed taxonomy through which to parse the essential dimensions of conspicuity. Results: The taxonomy features three axes related to (a) modality (e.g., visual vs. auditory, etc.), (b) processing directionality (e.g., top-down vs. bottom-up information flow), and finally (c) temporality (i.e., the differences between static vs. dynamic presentations). Conclusion: Existing conspicuity studies have primarily featured static, sensory comparisons. Exploration of the other quadrants of the proposed taxonomy can serve to frame future conspicuity research. This taxonomic description also provides the basis from which to understand failure etiology in a wide spectrum of human–machine systems. Application: Improvements in the understanding of conspicuity can help in all domains of HF/E and can serve to reduce failure in a wide variety of operational contexts.


2020 ◽  
Vol 11 (3) ◽  
pp. 385-404
Author(s):  
Miftahul Hidayah ◽  
Helen Forgasz

This study examined the types of mathematical tasks in two Australian and two Indonesian mathematics textbooks for 7th-grade students. The quantitative data were collected from the coding results of the tasks in the textbooks. The tasks were coded based on six categories: the presentation form, the cognitive requirements, the contextual features, the information provided, the number of steps required, and the number of answers. Both the similarities and differences in the mathematical tasks provided in the selected textbooks were analysed. The coding results reveal that the majority of tasks in both the Australian and Indonesian textbooks were presented in verbal and combined forms. Routine and closed tasks were still dominant in the four textbooks. More than 93% of tasks in the four textbooks had sufficient information for students to solve the problem. One of the Australian textbooks had a higher proportion of tasks with real-world contexts than the other textbooks. One of the Indonesian textbooks showed a high proportion of tasks requiring multiple steps or procedures. These results were used to explore the learning opportunities offered by the textbooks and the possible influence on students’ performances in international assessments. Some recommendations for the refinement of the textbooks and for future research are also outlined at the end of the study.


2014 ◽  
Vol 25 (4) ◽  
pp. 233-238 ◽  
Author(s):  
Martin Peper ◽  
Simone N. Loeffler

Current ambulatory technologies are highly relevant for neuropsychological assessment and treatment as they provide a gateway to real life data. Ambulatory assessment of cognitive complaints, skills and emotional states in natural contexts provides information that has a greater ecological validity than traditional assessment approaches. This issue presents an overview of current technological and methodological innovations, opportunities, problems and limitations of these methods designed for the context-sensitive measurement of cognitive, emotional and behavioral function. The usefulness of selected ambulatory approaches is demonstrated and their relevance for an ecologically valid neuropsychology is highlighted.


2020 ◽  
Author(s):  
Ciara Greene ◽  
Gillian Murphy

Previous research has argued that fake news may have grave consequences for health behaviour, but surprisingly, no empirical data have been provided to support this assumption. This issue takes on new urgency in the context of the coronavirus pandemic. In this large preregistered study (N = 3746) we investigated the effect of exposure to fabricated news stories about COVID-19 on related behavioural intentions. We observed small but measurable effects on some related behavioural intentions but not others – for example, participants who read a story about problems with a forthcoming contact-tracing app reported reduced willingness to download the app. We found no effects of providing a general warning about the dangers of online misinformation on response to the fake stories, regardless of the framing of the warning in positive or negative terms. We conclude with a call for more empirical research on the real-world consequences of fake news.

