Bicycle kicks and camp sites: Towards a phenomenological theory of game feel with special attention towards ‘rhythm’

Author(s):
Bo Kampmann Walther
Lasse Juel Larsen

The goal of the article is to present a theory of game feel inspired by phenomenology. Martin Heidegger’s tool analysis and concept of time (Sorge), as well as Maurice Merleau-Ponty’s Body-Subject and the related phenomena of the intentional arc, maximal grip and flow of coping, are of special interest. The aim is to move beyond the descriptive game-design take on game feel by inserting the body in a first-person perspective, highlighting a sensuous approach with emphasis on rhythm and the controller. We offer a methodological framework for analysing game feel consisting of three levels: ‘Dance’, ‘Learn’ and ‘Inhabit’. Finally, we arrive at an understanding of game feel that explains players’ sensitivity to it and offers clues to why they care so deeply about it.

2018
Author(s):
A.W. de Borst
M.V. Sanchez-Vives
M. Slater
B. de Gelder

Abstract
Peripersonal space is the area directly surrounding the body, which supports object manipulation and social interaction but is also critical for threat detection. In the monkey, ventral premotor and intraparietal cortex support the initiation of defensive behavior. However, the brain network that underlies threat detection in human peripersonal space still awaits investigation. We combined fMRI measurements with preceding virtual reality training from either a first- or third-person perspective to manipulate whether an approaching human threat was perceived as directed at oneself or at another. We found that the first-person perspective increased body ownership and identification with the virtual victim. When the threat was perceived as directed towards oneself, synchronization of brain activity in the human peripersonal brain network was enhanced and connectivity increased from premotor and intraparietal cortex towards the superior parietal lobe. When this threat was nearby, synchronization also occurred in emotion-processing regions. Priming with the third-person perspective reduced synchronization of brain activity in the peripersonal space network and increased top-down modulation of visual areas. In conclusion, our results showed that after first-person perspective training, peripersonal space is remapped onto the virtual victim, causing the fronto-parietal network to predict intrusive actions towards the body and emotion-processing regions to signal nearby threat.


2016
Vol 28 (11)
pp. 1760-1771
Author(s):
Giulia Bucchioni
Carlotta Fossataro
Andrea Cavallo
Harold Mouras
Marco Neppi-Modona
...

Recent studies show that motor responses similar to those present during one’s own pain (the freezing effect) occur when observing pain in others. This finding has been interpreted as the physiological basis of empathy. Alternatively, it may represent the physiological counterpart of an embodiment phenomenon related to the sense of body ownership. We compared the empathy and ownership hypotheses by manipulating the perspective of the observed hand model receiving pain, so that it appeared either in a first-person perspective, the one in which embodiment occurs, or in a third-person perspective, the one in which we usually perceive others. Motor-evoked potentials (MEPs) elicited by TMS over M1 were recorded from the first dorsal interosseous muscle while participants observed video clips showing (a) a needle penetrating or (b) a Q-tip touching a hand model, presented in either first-person or third-person perspective. We found that a pain-specific inhibition of MEP amplitude (a significantly greater MEP reduction in the “pain” than in the “touch” condition) occurred only in the first-person perspective and was related to the strength of self-reported embodiment. We interpret this corticospinal modulation according to an “affective” conception of body ownership, suggesting that the body I feel as my own is the body I care more about.


2021
Vol 11 (4)
pp. 521
Author(s):
Jonathan Erez
Marie-Eve Gagnon
Adrian M. Owen

Investigating human consciousness based on brain activity alone is a key challenge in cognitive neuroscience. One of its central facets, the ability to form autobiographical memories, has been investigated through several fMRI studies that have revealed a pattern of activity across a network of frontal, parietal, and medial temporal lobe regions when participants view personal photographs, as opposed to when they view photographs from someone else’s life. Here, our goal was to attempt to decode when participants were re-experiencing an entire event, captured on video from a first-person perspective, relative to a very similar event experienced by someone else. Participants were asked to sit passively in a wheelchair while a researcher pushed them around a local mall. A small wearable camera was mounted on each participant, in order to capture autobiographical videos of the visit from a first-person perspective. One week later, participants were scanned while they passively viewed different categories of videos; some were autobiographical, while others were not. A machine-learning model was able to successfully classify the video categories above chance, both within and across participants, suggesting that there is a shared mechanism differentiating autobiographical experiences from non-autobiographical ones. Moreover, the classifier brain maps revealed that the fronto-parietal network, mid-temporal regions and extrastriate cortex were critical for differentiating between autobiographical and non-autobiographical memories. We argue that this novel paradigm captures the true nature of autobiographical memories, and is well suited to patients (e.g., with brain injuries) who may be unable to respond reliably to traditional experimental stimuli.
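
The cross-participant classification step lends itself to a short illustration. Below is a minimal sketch (not the authors' code) of a leave-one-participant-out decoding analysis in Python with scikit-learn; the feature matrix, labels and group assignments are simulated placeholders standing in for per-video fMRI activity patterns.

```python
# Minimal sketch of cross-participant decoding (illustrative, simulated data).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_participants, videos_per_participant, n_voxels = 10, 12, 500
n_videos = n_participants * videos_per_participant

X = rng.normal(size=(n_videos, n_voxels))        # placeholder voxel features, one row per video
y = np.tile([0, 1], n_videos // 2)               # 1 = autobiographical video, 0 = someone else's
groups = np.repeat(np.arange(n_participants), videos_per_participant)  # participant ID per video

# Leave-one-participant-out cross-validation: train on all other participants,
# test on the held-out one. Above-chance accuracy here points to a mechanism
# shared across people rather than a participant-specific idiosyncrasy.
scores = cross_val_score(LinearSVC(dual=False), X, y,
                         cv=LeaveOneGroupOut(), groups=groups)
print(f"mean cross-participant accuracy: {scores.mean():.2f} (chance = 0.50)")
```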


2021
Vol 11 (1)
Author(s):
Doerte Kuhrt
Natalie R. St. John
Jacob L. S. Bellmund
Raphael Kaplan
Christian F. Doeller

Abstract
Advances in virtual reality (VR) technology have greatly benefited spatial navigation research. By presenting space in a controlled manner, changing aspects of the environment one at a time or manipulating the gain from different sensory inputs, the mechanisms underlying spatial behaviour can be investigated. In parallel, a growing body of evidence suggests that the processes involved in spatial navigation extend to non-spatial domains. Here, we leverage VR technology advances to test whether participants can navigate abstract knowledge. We designed a two-dimensional quantity space, presented using a head-mounted display, to test if participants can navigate abstract knowledge using a first-person perspective navigation paradigm. To investigate the effect of physical movement, we divided participants into two groups: one walking and rotating on a motion platform, the other group using a gamepad to move through the abstract space. We found that both groups learned to navigate using a first-person perspective and formed accurate representations of the abstract space. Interestingly, navigation in the quantity space resembled behavioural patterns observed in navigation studies using environments with natural visuospatial cues. Notably, both groups demonstrated similar patterns of learning. Taken together, these results imply that both self-movement and remote exploration can be used to learn the relational mapping between abstract stimuli.
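
As a rough illustration of how accuracy in such a task might be scored, the sketch below (simulated data, hypothetical variable names, not the authors' analysis) measures each trial's error as the Euclidean distance between the target and the navigated-to position in the two-dimensional quantity space, then compares the walking and gamepad groups.

```python
# Illustrative scoring of navigation accuracy in a 2-D quantity space (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def placement_errors(n_trials: int, noise: float) -> np.ndarray:
    """Per-trial Euclidean distance between the target and the navigated-to position."""
    targets = rng.uniform(0, 1, size=(n_trials, 2))                    # target coordinates
    responses = targets + rng.normal(scale=noise, size=(n_trials, 2))  # where the participant ended up
    return np.linalg.norm(responses - targets, axis=1)

walking = placement_errors(40, noise=0.05)   # motion-platform group (simulated)
gamepad = placement_errors(40, noise=0.05)   # gamepad group (simulated)

# Comparable mean errors would mirror the reported finding that both self-movement
# and remote (gamepad) exploration support learning the abstract mapping.
t, p = stats.ttest_ind(walking, gamepad)
print(f"walking {walking.mean():.3f} vs gamepad {gamepad.mean():.3f}  t = {t:.2f}, p = {p:.3f}")
```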


Philosophies
2021
Vol 6 (1)
pp. 5
Author(s):
S. J. Blodgett-Ford

The phenomenon and ethics of “voting” will be explored in the context of human enhancements. “Voting” will be examined for enhanced humans with moderate and extreme enhancements. Existing patterns of discrimination in voting around the globe could continue substantially “as is” for those with moderate enhancements. For extreme enhancements, voting rights could be challenged if the very humanity of the enhanced was in doubt. Humans who were not enhanced could also be disenfranchised if certain enhancements become prevalent. Voting will be examined using a theory of engagement articulated by Professor Sophie Loidolt that emphasizes the importance of legitimization and justification by “facing the appeal of the other” to determine what is “right” from a phenomenological first-person perspective. Seeking inspiration from the Universal Declaration of Human Rights (UDHR) of 1948, voting rights and responsibilities will be re-framed from a foundational working hypothesis that all enhanced and non-enhanced humans should have a right to vote directly. Representative voting will be considered as an admittedly imperfect alternative or additional option. The framework in which voting occurs, as well as the processes, temporal cadence, and role of voting, requires participation from as diverse a group of humans as possible. Voting rights delivered by fiat to enhanced or non-enhanced humans who were excluded from participation in the design and ratification of the governance structure are not legitimate. Applying and extending Loidolt’s framework, we must recognize the urgency that demands the impossible, with openness to that universality in progress (or universality to come) that keeps being constituted from the outside.


2021
pp. 174702182110092
Author(s):
Quentin Marre
Nathalie Huet
Elodie Labeye

According to embodied cognition theory, cognitive processes are grounded in sensory, motor and emotional systems. This theory supports the idea that language comprehension and access to memory are based on sensorimotor mental simulations, which does indeed explain experimental results for visual imagery. These results show that word memorization is improved when the individual actively simulates the visual characteristics of the object to be learned. Very few studies, however, have investigated the effectiveness of more embodied mental simulations, that is, simulating both the sensory and motor aspects of the object (i.e., motor imagery) from a first-person perspective. The recall performances of 83 adults were analysed in four different conditions: mental rehearsal, visual imagery, third-person motor imagery, and first-person motor imagery. Results revealed a memory efficiency gradient running from low-embodiment strategies (i.e., involving poor perceptual and/or motor simulation) to high-embodiment strategies (i.e., rich simulation in the sensory and motor systems involved in interactions with the object). However, the benefit of engaging in motor imagery, as opposed to purely visual imagery, was only observed when participants adopted the first-person perspective. Surprisingly, visual and motor imagery vividness seemed to play a negligible role in this effect of the sensorimotor grounding of mental imagery on memory efficiency.

