video database
Recently Published Documents


TOTAL DOCUMENTS: 216 (FIVE YEARS: 27)

H-INDEX: 22 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Laura Israel ◽  
Philipp Paukner ◽  
Lena Schiestel ◽  
Klaus Diepold ◽  
Felix D. Schönbrodt

The Open Library for Affective Videos (OpenLAV) is a new video database for experimental emotion induction. The 188 videos (mean duration: 40 s; range: 12–71 s) are released under a CC-BY license. Ratings of valence, arousal, several appraisals, and emotion labels were collected from 434 US-American participants in an online study (on average 70 ratings per video), along with the raters' personality traits (Big Five personality dimensions and several motive dispositions). The OpenLAV can induce a wide variety of emotions, but the videos differ in how uniformly they induce them. Based on several variability metrics, we recommend videos for the most uniform induction of different emotions. In addition, the predictive power of personality traits for emotion ratings was analyzed with a machine-learning approach; in contrast to previous research, no effects of personality on emotional experience were found.
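To illustrate the idea of ranking videos by how uniformly they induce an emotion, a minimal sketch follows. The file name and column names (`video_id`, `valence`, `arousal`) are assumptions for illustration, not the OpenLAV distribution format, and the per-video standard deviation is only one of many possible variability metrics.

```python
# Minimal sketch (not the authors' code): rank videos by rating uniformity,
# assuming a long-format table with hypothetical columns
# "video_id", "valence", and "arousal" (one row per individual rating).
import pandas as pd

ratings = pd.read_csv("openlav_ratings.csv")  # hypothetical file name

# One simple variability metric: the per-video standard deviation of ratings.
# Lower values suggest more uniform emotion induction across raters.
uniformity = (
    ratings.groupby("video_id")[["valence", "arousal"]]
    .std()
    .rename(columns=lambda c: f"{c}_sd")
    .sort_values("valence_sd")
)

print(uniformity.head(10))  # ten most uniformly rated videos by valence
```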


2021 ◽  
Vol 108 (Supplement_6) ◽  
Author(s):  
A Gendia ◽  
S Korambayil ◽  
A Cota ◽  
I Finlay ◽  
M Clarke ◽  
...  

Abstract Aim This report aims to evaluate the use of an AI video analytics platform in laparoscopic cholecystectomy (LC) based on the achievement of the critical view of safety (CVS) and to assess its ability to correctly comment on CVS. Method The Touch Surgery video platform, an AI-based video analytics tool, was used to screen laparoscopic cholecystectomies performed at our institute between April 2019 and October 2020. Data collected by the AI included identification of the critical view of safety and the time needed to achieve it. A reviewer graded each LC according to the Nasser grading and evaluated the ability of the AI to identify the CVS. Results 66 LCs from our video database were included. CVS was achieved in only 56% (37/66) of the included LC videos. The mean time to achieve CVS from the start of dissection of Calot's triangle was 16.8 (±13.6) minutes. 26 (39.4%) LCs were Nasser grade 2, and 20 (30.3%) each were grade 1 and grade 3. There was no significant difference in the number of CVS achieved across grades. Time to achieve CVS from the start of dissection of Calot's triangle was longest in grade 3 LCs (28.4 ± 17.4 minutes), with a significant difference across the three grades. Finally, the platform correctly commented on CVS in 92.4% of all LC videos. Conclusions AI video analytics can provide a useful tool for assessing laparoscopic cholecystectomies and the critical view of safety. Further studies should explore the use of the platform and integrate its results with clinical outcomes.
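The across-grade comparison of time-to-CVS reported above can be illustrated with a small, hedged sketch using a Kruskal-Wallis test on times grouped by Nasser grade. The abstract does not state which test was used, and the values below are placeholders for illustration only, not data from the study.

```python
# Minimal sketch (not the study's analysis code): compare time-to-CVS across
# Nasser grades with a Kruskal-Wallis test. The grouped times below are
# illustrative placeholders, not data from the report.
from scipy.stats import kruskal

# Hypothetical time-to-CVS (minutes) grouped by Nasser grade.
grade1 = [10.2, 14.5, 9.8, 12.1]
grade2 = [15.0, 18.3, 16.7, 20.1]
grade3 = [25.4, 30.2, 28.9, 33.5]

stat, p = kruskal(grade1, grade2, grade3)
print(f"H = {stat:.2f}, p = {p:.4f}")  # a small p-value suggests times differ by grade
```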


2021 ◽  
Vol 5 (1) ◽  
pp. 54-71
Author(s):  
Wai Cheong Jacky Pow ◽  
Kwok Hung Lai

Microteaching with reflection remains an important technique that pre-service student teachers can use to practice their teaching in a safe environment. However, improvements in teaching are not guaranteed without support and feedback from peers. Previous studies suggest that a learning community supported by information technology promotes better pedagogical decisions. This study examined whether virtual learning communities can facilitate student teachers' reflection on their teaching practice. A video database with both text- and voice-comment functionality was designed to support peer feedback and improve the quality of teaching practice. Student teachers' experiences with the video database were collected through a questionnaire survey and through the feedback recorded within the database. Findings indicated that student teachers demonstrated a better understanding of concepts and theories relevant to teaching the chosen language skill area. While only some student teachers used the voice-comment feature to reflect more effectively on their teaching practice, most evaluated their peers on the principles and techniques used in their microteaching. Although opinions on the comment functionality were divided, student teachers regarded the microteaching videos, together with their own reflections and peer feedback, as good evidence of their learning outcomes. Future research should examine which types of peer feedback in virtual learning communities are most effective in enhancing the quality of reflective teaching practice.


2021 ◽  
Vol 12 ◽  
Author(s):  
Juliana Gioia Negrão ◽  
Ana Alexandra Caldas Osorio ◽  
Rinaldo Focaccia Siciliano ◽  
Vivian Renne Gerber Lederman ◽  
Elisa Harumi Kozasa ◽  
...  

Background: This study developed a photo and video database of 4-to-6-year-olds expressing the seven induced and posed universal emotions and a neutral expression. Children participated in photo and video sessions designed to elicit the emotions, and the resulting images were assessed by independent judges in two rounds. Methods: In the first round, two independent judges (1 and 2), experts in the Facial Action Coding System, analysed 3,668 facial expression stimuli from 132 children. The judges reached 100% agreement on 1,985 stimuli (124 children), which were then selected for a second round of analysis by judges 3 and 4. Results: 1,985 stimuli (51% of the photographs) from 124 participants (55% girls) were retained. A Kappa index of 0.70 and an accuracy of 73% between experts were observed. Accuracy was lower for emotional expressions by 4-year-olds than by 6-year-olds. Happiness, disgust, and contempt had the highest agreement. After a sub-analysis by all four judges, 100% agreement was reached for 1,381 stimuli, which comprise the ChildEFES database with 124 participants (59% girls) and 51% induced photographs. The number of stimuli per emotion was: 87 for neutrality, 363 for happiness, 170 for disgust, 104 for surprise, 152 for fear, 144 for sadness, 157 for anger, and 183 for contempt. Conclusions: The findings show that this photo and video database can facilitate research on the mechanisms involved in the recognition of facial emotions in early childhood, contributing to the understanding of the facial emotion recognition deficits that characterise several neurodevelopmental and psychiatric disorders.
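The inter-rater agreement statistics reported above (Cohen's kappa and percentage accuracy) can be computed in a few lines; the sketch below uses hypothetical label lists, not the study's data.

```python
# Minimal sketch (not the study's code): agreement between two judges'
# emotion labels via Cohen's kappa and simple percentage agreement.
# The label lists are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score, accuracy_score

judge_a = ["happiness", "disgust", "fear", "neutral", "anger", "contempt"]
judge_b = ["happiness", "disgust", "sadness", "neutral", "anger", "contempt"]

kappa = cohen_kappa_score(judge_a, judge_b)
agreement = accuracy_score(judge_a, judge_b)  # proportion of identical labels
print(f"kappa = {kappa:.2f}, agreement = {agreement:.0%}")
```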


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 947
Author(s):  
Dong Seop Kim ◽  
Yu Hwan Kim ◽  
Kang Ryoung Park

Existing studies have shown that effective extraction of multi-scale information is a crucial factor directly related to the increase in performance of semantic segmentation. Accordingly, various methods for extracting multi-scale information have been developed. However, these methods face problems in that they require additional calculations and vast computing resources. To address these problems, this study proposes a grouped dilated convolution module that combines existing grouped convolutions and atrous spatial pyramid pooling techniques. The proposed method can learn multi-scale features more simply and effectively than existing methods. Because each convolution group has different dilations in the proposed model, they have receptive fields of different sizes and can learn features corresponding to these receptive fields. As a result, multi-scale context can be easily extracted. Moreover, optimal hyper-parameters are obtained from an in-depth analysis, and excellent segmentation performance is derived. To evaluate the proposed method, open databases of the Cambridge Driving Labeled Video Database (CamVid) and the Stanford Background Dataset (SBD) are utilized. The experimental results indicate that the proposed method shows a mean intersection over union of 73.15% based on the CamVid dataset and 72.81% based on the SBD, thereby exhibiting excellent performance compared to other state-of-the-art methods.
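A minimal sketch of the core idea (channel groups, each convolved with its own dilation rate so that each group has a different receptive field) is shown below. This is an illustrative PyTorch implementation under stated assumptions, not the authors' code; the kernel size, dilation rates, and even channel split are placeholders.

```python
# Minimal PyTorch sketch of a grouped dilated convolution (illustrative only):
# split the channels into groups and give each group a different dilation,
# ASPP-style multi-scale context within a single module.
import torch
import torch.nn as nn

class GroupedDilatedConv(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        assert channels % len(dilations) == 0, "channels must split evenly into groups"
        group_ch = channels // len(dilations)
        self.group_ch = group_ch
        # One 3x3 branch per channel group, each with its own dilation rate.
        self.branches = nn.ModuleList(
            nn.Conv2d(group_ch, group_ch, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in dilations
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Convolve each channel group with its own dilation, then re-concatenate.
        chunks = torch.split(x, self.group_ch, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.branches, chunks)], dim=1)

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(GroupedDilatedConv(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

Because the padding equals the dilation for a 3x3 kernel, every branch preserves spatial resolution, so the concatenated output matches the input shape while each group learns features at a different scale.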

