Video Clips as a Data Source for Safety Performance

Author(s):  
Colin Mackenzie ◽  
Yan Xiao ◽  
Peter Hu ◽  
F. Jacob Seagull ◽  
Camille Hammond ◽  
...  

Improved safety is an important goal, but it is difficult to gather data and identify practices that narrow the margin of patient safety in real, dynamic, complex medical workplaces. Video clips are a rich data source for examining safety performance: they allow participants to review their own activities and analysts to extract quantitative data. Focusing video data collection on brief, risky but beneficial tasks, to illustrate patterns of use that occur in trauma centers during patient resuscitation, can simplify problems of participation consent, confidentiality, and data analysis. Such video clip acquisition (5–15 minutes in duration) does not appear to compromise the quality of the content, which can facilitate identification of team performance, communication, ergonomic, and systems factors affecting patient safety. Comparisons of task performance under two levels of task urgency were particularly revealing of areas where patient safety performance can be improved, and allowed identification of preventive strategies to minimize the effects of safety infractions.

Author(s):  
Paul McIlvenny

Consumer versions of the passive 360° and stereoscopic omni-directional camera have recently come to market, generating new possibilities for qualitative video data collection. This paper discusses some of the methodological issues raised by collecting, manipulating and analysing complex video data recorded with 360° cameras and ambisonic microphones. It also reports on the development of a simple, yet powerful prototype to support focused engagement with such 360° recordings of a scene. The paper proposes that we ‘inhabit’ video through a tangible interface in virtual reality (VR) in order to explore complex spatial video and audio recordings of a single scene in which social interaction took place. The prototype is a software package called AVA360VR (‘Annotate, Visualise, Analyse 360° video in VR’). The paper is illustrated through a number of video clips, including a composite video of raw and semi-processed multi-cam recordings, a 360° video with spatial audio, a video comprising a sequence of static 360° screenshots of the AVA360VR interface, and a video comprising several screen capture clips of actual use of the tool. The paper discusses the prototype’s development and its analytical possibilities when inhabiting spatial video and audio footage as a complementary mode of re-presenting, engaging with, sharing and collaborating on interactional video data.


Author(s):  
Lindayana ◽  
Arifuddin ◽  
Halus Mandala

This study aimed to examine: (1) the divergent principles of politeness in students' directive speech acts, and (2) the factors affecting politeness and impoliteness in the verbal and non-verbal directive speech acts produced by Grade X students at Senior High School 1 Mataram during the learning process. The subjects of this study were the teachers of Bahasa Indonesia, English, Economics, History, Math, Religion, Civics, and Science, and all students of Grade X Science 1, Science 3, and Social 2 at Senior High School 1 Mataram. This is a descriptive qualitative study. The data source is the set of utterances produced by students and teachers during the learning process, collected through observation. The study revealed that: (1) there were single and multiple divergent principles of politeness in participants' directive speech acts, arising when speakers intentionally accused addressees, deliberately ignored the context, defended their own arguments, displayed emotional feeling, gave criticism in impolite words, or mocked others; and (2) the factors affecting politeness and impoliteness in students' verbal and non-verbal directive speech acts during the learning process were both linguistic and non-linguistic.


1999 ◽  
Vol 17 (5) ◽  
pp. 309-315 ◽  
Author(s):  
Edwin Sawacha ◽  
Shamil Naoum ◽  
Daniel Fong

2019 ◽  
Vol 63 (4) ◽  
pp. 689-712
Author(s):  
K. Rothermich ◽  
O. Caivano ◽  
L.J. Knoll ◽  
V. Talwar

Interpreting other people's intentions during communication represents a remarkable challenge for children. Although many studies have examined children's understanding of, for example, sarcasm, less is known about how they interpret other speaker intentions. Using realistic audiovisual scenes, we invited 124 children between 8 and 12 years of age to watch video clips of young adults conveying different speaker intentions. After watching each clip, children answered questions about the characters, their beliefs, and the perceived friendliness of the speaker. Children's responses revealed age and gender differences in the ability to interpret speaker belief and social intentions, especially for scenarios conveying teasing and prosocial lies. We found that the ability to infer the speaker belief behind prosocial lies and to interpret social intentions increased with age. Our results suggest that children at age 8 already show adult-like abilities to understand literal statements, whereas the ability to infer specific social intentions, such as teasing and prosocial lies, is still developing between the ages of 8 and 12. Moreover, girls performed better than boys in classifying prosocial lies and sarcasm as insincere. These outcomes expand our understanding of how children interpret speaker intentions and motivate further research into the development of teasing and prosocial-lie interpretation.


2020 ◽  
Vol 10 (2) ◽  
pp. 108-14
Author(s):  
Majed M Moosa ◽  
Leo P. Oriet ◽  
Abdulrahman M Khamaj

Introduction: Research indicates that construction site accidents are a global concern and that rates are rising rapidly. In developing countries such as Saudi Arabia, safety issues are frequently ignored, and little is known about their causes. Objectives: This study aimed to shed light on the factors causing accidents in Saudi Arabian construction companies. Methods: A detailed online survey of accident features, built with Google Forms, was distributed randomly to 35 construction companies in Saudi Arabia; a top administrator or safety officer at each company was asked to respond. The survey was conducted from 1 June to 31 August 2013 and assessed safety practices and perceptions of accident causes. Results: The response rate was 63%. Over half of the surveyed organizations had encountered all of the selected accident types. While 19 (86%) of the responding companies maintained their equipment regularly, 15 (68%) had regular maintenance staff and 13 (59%) inspected equipment before use. Although 18 (82%) supplied workers with personal protective equipment (PPE), only 12 (55%) enforced its use and offered site orientation for new employees. In the last part of the survey, respondents rated 25 factors affecting safety performance at construction sites on a scale of 1 to 5, with 5 being the most important. The three factors contributing most to poor safety performance were the firm's top leaders, a lack of training, and the reckless operation of equipment. Conclusion: Changing attitudes surrounding safety culture has the potential to significantly improve safety outcomes in the Saudi Arabian construction industry. Two Saudi Arabian corporations, Saudi Aramco and Saudi Chevron Petrochemical, provide a positive model for improving construction safety in the country, but there is a paucity of industry-level data, and further scholarly attention is warranted.
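The ranking step described in the abstract (ordering 25 factors by their importance ratings on a 1–5 scale) can be sketched as follows. The factor names and ratings here are illustrative stand-ins, not the study's actual data:

```python
# Minimal sketch: rank safety factors by mean importance rating (1-5 scale).
# Factor names and ratings are hypothetical, for illustration only.
from statistics import mean

# Each factor maps to the list of importance ratings it received.
ratings = {
    "top leadership commitment": [5, 4, 5, 5, 4],
    "lack of training": [4, 5, 4, 5, 4],
    "reckless equipment operation": [4, 4, 5, 4, 4],
    "poor housekeeping": [2, 3, 2, 3, 2],
}

# Sort factors by mean rating, most important first, and take the top three.
ranked = sorted(ratings, key=lambda f: mean(ratings[f]), reverse=True)
top_three = ranked[:3]
print(top_three)
```

With these illustrative numbers, the three highest-rated factors mirror the pattern reported in the abstract: leadership, training, and equipment operation dominate the ranking.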


2021 ◽  
pp. 174702182110480
Author(s):  
Tochukwu Onwuegbusi ◽  
Frouke Hermens ◽  
Todd Hogue

Recent advances in software and hardware have allowed eye tracking to move away from static images to more ecologically relevant video streams. The analysis of eye tracking data for such dynamic stimuli, however, is not without challenges. The frame-by-frame coding of regions of interest (ROIs) is labour-intensive and computer vision techniques to automatically code such ROIs are not yet mainstream, restricting the use of such stimuli. Combined with the more general problem of defining relevant ROIs for video frames, methods are needed that facilitate data analysis. Here, we present a first evaluation of an easy-to-implement data-driven method with the potential to address these issues. To test the new method, we examined the differences in eye movements of self-reported politically left- or right-wing leaning participants to video clips of left- and right-wing politicians. The results show that our method can accurately predict group membership on the basis of eye movement patterns, isolate video clips that best distinguish people on the political left–right spectrum, and reveal the section of each video clip with the largest group differences. Our methodology thereby aids the understanding of group differences in gaze behaviour, and the identification of critical stimuli for follow-up studies or for use in saccade diagnosis.
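The abstract does not specify the classifier behind the data-driven method, so the sketch below is only one plausible instance of gaze-based group prediction: leave-one-out assignment of each participant to the group whose mean gaze trace is nearer in Euclidean distance. All data here are synthetic and the group separation is exaggerated for illustration:

```python
# Hypothetical sketch of data-driven group classification from gaze traces:
# leave-one-out nearest-group-mean assignment on synthetic 1-D gaze data.
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_frames = 10, 50

# Synthetic horizontal gaze position per video frame for two groups that,
# on average, attend to different screen regions.
group_a = rng.normal(0.3, 0.05, size=(n_per_group, n_frames))
group_b = rng.normal(0.7, 0.05, size=(n_per_group, n_frames))
traces = np.vstack([group_a, group_b])
labels = np.array([0] * n_per_group + [1] * n_per_group)

def classify_loo(i):
    """Leave participant i out, then assign them to the group whose mean
    gaze trace (computed without them) is closer in Euclidean distance."""
    mask = np.arange(len(traces)) != i
    mean_a = traces[mask & (labels == 0)].mean(axis=0)
    mean_b = traces[mask & (labels == 1)].mean(axis=0)
    d_a = np.linalg.norm(traces[i] - mean_a)
    d_b = np.linalg.norm(traces[i] - mean_b)
    return 0 if d_a < d_b else 1

predictions = [classify_loo(i) for i in range(len(traces))]
accuracy = np.mean(np.array(predictions) == labels)
print(accuracy)
```

Per-frame group differences in distance could likewise isolate the section of a clip with the largest group separation, in the spirit of the method described above.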

