video stimulus
Recently Published Documents

TOTAL DOCUMENTS: 19 (FIVE YEARS: 9)
H-INDEX: 5 (FIVE YEARS: 1)

2021 · Vol 11 (1)
Author(s): Andrew C. Gallup, Mariska E. Kret, Omar Tonsi Eldakar, Julia Folz, Jorg J. M. Massen

Abstract: Considerable variation exists in the contagiousness of yawning, and numerous studies have been conducted to investigate the proximate mechanisms involved in this response. Yet, findings within the psychological literature are mixed, with many studies conducted on relatively small and homogeneous samples. Here, we aimed to replicate and extend research suggesting a negative relationship between psychopathic traits and yawn contagion in community samples. In the largest study of contagious yawning to date (N = 458), which included both university students and community members from across 50 nationalities, participants completed an online study in which they self-reported their yawn contagion in response to a video stimulus and completed four measures of psychopathy: the primary and secondary psychopathy scales from the Levenson Self-Report Psychopathy Scale (LSRPS), the psychopathy construct from the Dirty Dozen, and the Psychopathic Personality Traits Scale (PPTS). Results support previous findings in that participants who yawned contagiously tended to score lower on the combined and primary measures of psychopathy. That said, tiredness was the strongest predictor across all models. These findings align with functional accounts of spontaneous and contagious yawning and with a generalized impairment in overall patterns of behavioral contagion and biobehavioral synchrony among people high in psychopathic traits.


2021 · Vol 8
Author(s): Masahiro Shiomi, Xiqian Zheng, Takashi Minato, Hiroshi Ishiguro

In this study, we implemented a model with which a robot expresses complex emotions such as heartwarming feelings (e.g., a mix of happiness and sadness) or horror (a mix of fear and surprise) through its touches, and we experimentally investigated the effectiveness of the modeled touch behaviors. Robots that can express emotions through touch increase their interaction capabilities with humans. Although past studies developed ways for a robot to express emotions through touch, they focused on basic emotions such as happiness and sadness and downplayed complex emotions; a model for expressing complex emotions by touch had been proposed but not evaluated. We therefore conducted an experiment to evaluate the model with participants, who rated the emotions and empathy they perceived from the robot's touch while watching a video stimulus together with the robot. Our results showed that, for both the scary and the heartwarming videos, touches delivered before the climax received higher evaluations than touches delivered after it.


Author(s): Mara Stadler, Philipp Doebler, Barbara Mertins, Renate Delucchi Danhier

Abstract: This paper presents a model that allows group comparisons of gaze behavior while watching dynamic video stimuli. The model is based on the approach of Coutrot and Guyader (2017) and forms a master saliency map from linear combinations of feature maps. The feature maps in the model are, for example, the dynamically salient contents of a video stimulus or predetermined areas of interest. The model takes into account temporal aspects of the stimuli, which is a crucial difference from other common models. The multi-group extension of the model introduced here makes it possible to obtain relative importance plots, which visualize the effect of a specific feature of a stimulus on the attention and visual behavior of two or more experimental groups. These plots are interpretable summaries of data with high spatial and temporal resolution. This approach differs from many common methods for comparing gaze behavior between natural groups, which usually only include one-dimensional features such as the duration of fixation on a particular part of the stimulus. The method is illustrated by contrasting a sample of persons with particularly high cognitive abilities (high achievement on IQ tests) with a control group on a psycholinguistic task on the conceptualization of motion events. In this example, we find no substantive differences in relative importance, but more exploratory gaze behavior in the highly gifted group. The code, videos, and eye-tracking data used for this study are available online.
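To make the core idea concrete, a master saliency map is a weighted sum of per-frame feature maps, with one weight vector per experimental group; those weights are the "relative importance" values visualized in the plots. The following is a minimal, illustrative sketch only: the array shapes, the min-max normalization, and the example weights are assumptions, not the authors' implementation.

```python
import numpy as np

def master_saliency_map(feature_maps, weights):
    """Weighted linear combination of feature maps for a single video frame.

    feature_maps: array of shape (n_features, height, width), e.g. the dynamic
                  saliency of the frame and predetermined areas of interest.
    weights:      array of shape (n_features,), one weight per feature map;
                  in a multi-group model, each group gets its own weight vector.
    """
    fmaps = np.asarray(feature_maps, dtype=float)
    # Rescale each feature map to [0, 1] so the weights are comparable (assumed choice).
    lo = fmaps.min(axis=(1, 2), keepdims=True)
    hi = fmaps.max(axis=(1, 2), keepdims=True)
    fmaps = (fmaps - lo) / (hi - lo + 1e-12)
    # Master map: sum_k w_k * F_k(x, y).
    return np.tensordot(np.asarray(weights, dtype=float), fmaps, axes=1)

# Hypothetical example: two groups weight the same three feature maps differently.
frame_features = np.random.rand(3, 120, 160)
map_control = master_saliency_map(frame_features, [0.6, 0.3, 0.1])
map_gifted = master_saliency_map(frame_features, [0.4, 0.3, 0.3])
```

Plotting each group's estimated weights over the course of the stimulus would then yield the kind of relative importance plot described above.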


2021 · Vol 12
Author(s): Patrick C. Trettenbrein, Emiliano Zaccarella

Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have mostly been controlled only for some technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor's movements. Computer vision methods such as OpenPose enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a point-based or markerless motion-tracking system during recording. The OpenPoseR package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using OpenPose, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion (velocity and acceleration) pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used to control for differences in an actor's movements across stimulus video clips or, for example, between different conditions of an experiment. In addition, the package also provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files.
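As an illustration of the kind of quantity involved (velocity and acceleration of the actor's movements computed from frame-by-frame body-pose keypoints), here is a minimal sketch; the keypoint array layout, the summing over keypoints, and the function name are assumptions for illustration and are not OpenPoseR's actual API.

```python
import numpy as np

def motion_from_keypoints(keypoints, fps):
    """Estimate overall motion of an actor from body-pose keypoints over time.

    keypoints: array of shape (n_frames, n_keypoints, 2) holding (x, y)
               positions per frame, e.g. as fitted by OpenPose to a video clip.
    fps:       frame rate of the video clip.
    """
    kp = np.asarray(keypoints, dtype=float)
    # Euclidean displacement of every keypoint between consecutive frames.
    step = np.linalg.norm(np.diff(kp, axis=0), axis=2)   # (n_frames - 1, n_keypoints)
    # Velocity: total keypoint displacement per second, per frame transition.
    velocity = step.sum(axis=1) * fps
    # Acceleration: discrete derivative of the velocity.
    acceleration = np.diff(velocity) * fps
    return velocity, acceleration

# Hypothetical clip: 300 frames at 50 fps with a 25-keypoint body model.
demo_keypoints = np.cumsum(np.random.randn(300, 25, 2), axis=0)
velocity, acceleration = motion_from_keypoints(demo_keypoints, fps=50)
```

Comparing, say, the mean or peak of these series across clips or experimental conditions corresponds to the kind of stimulus check the package is meant to support.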


Sensors · 2020 · Vol 20 (9) · pp. 2502
Author(s): Pilar Marqués-Sánchez, Cristina Liébana-Presa, José Alberto Benítez-Andrades, Raquel Gundín-Gallego, Lorena Álvarez-Barrio, ...

During university nursing studies, it is important to develop emotional skills because of their impact on academic performance and on the quality of patient care. Thermography is a technology that could be applied during nursing training to evaluate emotional skills. The objective was to evaluate, through a case study, the effect of thermography as a tool for monitoring and improving emotional skills in student nurses. The student was subjected to different emotions, using video and music as stimuli. The process consisted of measuring facial temperatures during each emotion and stimulus in three phases: acclimatization, stimulus, and response. Thermographic data acquisition was performed with a FLIR E6 camera, and the analysis was complemented with environmental data (temperature and humidity). With the video stimulus, the start and final forehead temperatures showed different behavior between the positive (joy: 34.5 °C to 34.5 °C) and negative (anger: 36.1 °C to 35.1 °C) emotions during the acclimatization phase, in contrast to the increases observed in the stimulus (joy: 34.7 °C to 35.0 °C; anger: 35.0 °C to 35.0 °C) and response phases (joy: 35.0 °C to 35.0 °C; anger: 34.8 °C to 35.0 °C). With the music stimulus, the emotions showed different patterns in each phase (joy: 34.2 °C, 33.9 °C, 33.4 °C; anger: 33.8 °C, 33.4 °C, 33.8 °C). Whenever the subject is exposed to a stimulus, there is a thermal bodily response, and all facial areas follow a common thermal pattern in response to the stimulus, with the exception of the nose. Thermography is a suitable technique for emotional-skills training practices, given that it is non-invasive, quantifiable, and easy to access.
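A minimal sketch of the phase-wise comparison described above (mean temperature of a facial region in the acclimatization, stimulus, and response phases); the region bounds, frame shapes, and simulated values are hypothetical and do not reflect the authors' processing pipeline.

```python
import numpy as np

def mean_roi_temperature(thermal_frames, roi):
    """Mean temperature (degrees C) of a rectangular facial region across frames.

    thermal_frames: array of shape (n_frames, height, width) of per-pixel temperatures.
    roi:            (top, bottom, left, right) pixel bounds, e.g. the forehead.
    """
    top, bottom, left, right = roi
    return float(np.asarray(thermal_frames)[:, top:bottom, left:right].mean())

# Hypothetical recordings for one emotion/stimulus combination.
forehead = (10, 40, 30, 90)  # assumed ROI bounds in pixels
phases = {
    "acclimatization": np.random.normal(34.5, 0.2, size=(60, 120, 160)),
    "stimulus":        np.random.normal(34.8, 0.2, size=(60, 120, 160)),
    "response":        np.random.normal(35.0, 0.2, size=(60, 120, 160)),
}
means = {name: mean_roi_temperature(frames, forehead) for name, frames in phases.items()}
print(means)
print(f"change from acclimatization to response: "
      f"{means['response'] - means['acclimatization']:+.1f} degrees C")
```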


2020 · Vol 127 (2) · pp. 317-346
Author(s): Martin Kanovský, Martina Baránková, Júlia Halamová, Bronislava Strnádelová, Jana Koróniová

The aim of the study was to describe the spontaneous facial expressions elicited in viewers of a compassionate video in terms of the respondents' muscular activity of single facial action units (AUs). We recruited a convenience sample of 111 undergraduate psychology students, aged 18-25 years (M = 20.53; SD = 1.62), to watch (at home alone) a short video stimulus eliciting compassion, and we recorded the respondents' faces using webcams. We used both a manual analysis, based on the Facial Action Coding System, and an automatic analysis of the holistic recognition of facial expressions as obtained through EmotionID software. Manual facial analysis revealed that, during the compassionate moment of the video stimulus, AUs 1 (inner-brow raiser), 4 (brow lowerer), 7 (lid tightener), 17 (chin raiser), 24 (lip presser), and 55 (head tilt left) occurred more often than other AUs. These same AUs also occurred more often during the compassionate moment than during the baseline recording. Consistent with these findings, automatic facial analysis during the compassionate moment showed that anger occurred more often than other emotions; during the baseline moment, contempt occurred less often than other emotions. Further research is necessary to fully describe the facial expression of compassion.
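To make the frequency comparison concrete, the sketch below counts how often each action unit is present in manually coded frames for a baseline segment versus the compassionate moment; the coding format and example data are assumptions for illustration and do not reflect the authors' FACS workflow or the EmotionID software.

```python
from collections import Counter

def au_rates(coded_frames):
    """Proportion of frames in which each action unit (AU) was coded as present.

    coded_frames: list of sets, one per video frame, each holding the AU
                  numbers observed in that frame, e.g. {1, 4, 7}.
    """
    counts = Counter(au for frame in coded_frames for au in frame)
    n_frames = len(coded_frames)
    return {au: count / n_frames for au, count in sorted(counts.items())}

# Hypothetical codings for one respondent.
baseline = [{1}, set(), {17}, set(), {1, 4}]
compassionate_moment = [{1, 4}, {1, 4, 7}, {17, 24}, {1, 55}, {4, 7, 24}]

print("baseline:            ", au_rates(baseline))
print("compassionate moment:", au_rates(compassionate_moment))
```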


2019 · Vol 19
Author(s): Fung Chiat Loo, Fung Ying Loo, Yan Piaw Chua

Most studies of music and sports relate to the ergogenic effect of synchronization between music and movement in repetitive sports activities. As in dance, music is clearly important for sports routines that involve choreography. This study uses an experiment involving a rhythmic gymnastics routine to investigate whether increasing the congruence between music and movement enhances the quality of sports routines from a musical perspective. In preparing the video stimulus, the original music accompaniment was replaced with a new composition to increase the congruence between music and movement using six musical parameters that parallel dance: tempo, rhythm, phrasing, accent, direction, and dynamics. Fifty-two undergraduate music majors participated in the study and evaluated two videos of the same routine, one with the original music and the other with the new music. The participants completed a three-part questionnaire: the first part evaluates the perceived congruence between music and movement in terms of the six parameters, the second part evaluates acrobatic qualities, and the third part evaluates athletic qualities. The results show that the intended congruence was perceived as significantly improved in the routine with the new accompaniment, and both the acrobatic and athletic qualities were also perceived as significantly improved.

Keywords: perceived congruence, sports routine, music and movement, choreomusical, music and sports


2019 · Vol 85 (874) · pp. 18-00390
Author(s): Takumi KAWAMURA, Toru MIZUYA
