Optimizing Video Clips in Educational Materials

Author(s):  
Aleš Oujezdský

Abstract The use of video from digital camcorders has become standard in education in recent years. The curriculum is easily accessible and appeals to a wider audience. Lessons use video clips of various physical processes and chemical experiments. However, there can be problems with this format: video quality is often degraded in the final stage, when the video is prepared for placement in educational materials such as web pages, e-learning courses, or Flash multimedia objects. The final product of editing video from a digital camcorder is a DVD video. If we want to transfer this video to the web or to other educational materials, however, it is necessary to remove non-square pixels and interlacing and to choose appropriate compression. For these operations there are many interpolation algorithms (nearest neighbour, bilinear interpolation, bicubic interpolation), deinterlacing filters (weave, bob, blend), and compression tools. By selecting appropriate settings for these parameters, the video material can be optimized while maintaining the highest possible image quality. The final step before publishing the video is its conversion with one of the commonly used codecs; the codec's settings largely determine the final quality and size of the video clip.
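Two of the operations the abstract names can be sketched in plain Python. This is a minimal illustration, not the author's implementation: "blend" deinterlacing averages the two interlaced fields, and nearest-neighbour resampling (the simplest of the listed interpolation algorithms) rescales row width, e.g. to convert non-square pixels to square ones. Frames are represented here as lists of lists of grayscale values; real tools operate on full video streams.

```python
def blend_deinterlace(frame):
    """Blend filter: average each line with the following line,
    merging the odd and even fields into one progressive frame."""
    h = len(frame)
    out = []
    for y in range(h):
        nxt = frame[min(y + 1, h - 1)]  # last line pairs with itself
        out.append([(a + b) / 2 for a, b in zip(frame[y], nxt)])
    return out

def nearest_neighbour_resize(frame, new_w):
    """Rescale each row to new_w samples by picking the nearest
    source pixel (e.g. 720 -> 768 to square up PAL DV pixels)."""
    old_w = len(frame[0])
    return [[row[int(x * old_w / new_w)] for x in range(new_w)]
            for row in frame]

# Tiny 4x4 test frame: even field all 0, odd field all 100.
frame = [[0] * 4, [100] * 4, [0] * 4, [100] * 4]
blended = blend_deinterlace(frame)           # interlace combing averaged out
wide = nearest_neighbour_resize(frame, 6)    # each row widened to 6 pixels
```

Bilinear and bicubic interpolation replace the nearest-pixel lookup with a weighted average of 2 or 4 neighbouring samples per axis, trading speed for smoother edges, which is why the abstract treats the choice of algorithm as a quality parameter.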

2019 ◽  
Vol 63 (4) ◽  
pp. 689-712
Author(s):  
K. Rothermich ◽  
O. Caivano ◽  
L.J. Knoll ◽  
V. Talwar

Interpreting other people’s intentions during communication represents a remarkable challenge for children. Although many studies have examined children’s understanding of, for example, sarcasm, less is known about how they interpret other speaker intentions. Using realistic audiovisual scenes, we invited 124 children between 8 and 12 years old to watch video clips of young adults expressing different speaker intentions. After watching each video clip, children answered questions about the characters, their beliefs, and the perceived friendliness of the speaker. Children’s responses reveal age and gender differences in the ability to interpret speaker belief and social intentions, especially for scenarios conveying teasing and prosocial lies. We found that the ability to infer the speaker’s belief behind prosocial lies and to interpret social intentions increases with age. Our results suggest that children at the age of 8 already show adult-like abilities to understand literal statements, whereas the ability to infer specific social intentions, such as teasing and prosocial lies, is still developing between the ages of 8 and 12. Moreover, girls performed better than boys at classifying prosocial lies and sarcasm as insincere. The outcomes expand our understanding of how children interpret speaker intentions and suggest further research into the development of teasing and prosocial lie interpretation.


2021 ◽  
pp. 174702182110480
Author(s):  
Tochukwu Onwuegbusi ◽  
Frouke Hermens ◽  
Todd Hogue

Recent advances in software and hardware have allowed eye tracking to move away from static images to more ecologically relevant video streams. The analysis of eye tracking data for such dynamic stimuli, however, is not without challenges. The frame-by-frame coding of regions of interest (ROIs) is labour-intensive and computer vision techniques to automatically code such ROIs are not yet mainstream, restricting the use of such stimuli. Combined with the more general problem of defining relevant ROIs for video frames, methods are needed that facilitate data analysis. Here, we present a first evaluation of an easy-to-implement data-driven method with the potential to address these issues. To test the new method, we examined the differences in eye movements of self-reported politically left- or right-wing leaning participants to video clips of left- and right-wing politicians. The results show that our method can accurately predict group membership on the basis of eye movement patterns, isolate video clips that best distinguish people on the political left–right spectrum, and reveal the section of each video clip with the largest group differences. Our methodology thereby aids the understanding of group differences in gaze behaviour, and the identification of critical stimuli for follow-up studies or for use in saccade diagnosis.


2015 ◽  
Vol 52 ◽  
pp. 601-713 ◽  
Author(s):  
Haonan Yu ◽  
N. Siddharth ◽  
Andrei Barbu ◽  
Jeffrey Mark Siskind

We present an approach to simultaneously reasoning about a video clip and an entire natural-language sentence. The compositional nature of language is exploited to construct models which represent the meanings of entire sentences composed out of the meanings of the words in those sentences mediated by a grammar that encodes the predicate-argument relations. We demonstrate that these models faithfully represent the meanings of sentences and are sensitive to how the roles played by participants (nouns), their characteristics (adjectives), the actions performed (verbs), the manner of such actions (adverbs), and changing spatial relations between participants (prepositions) affect the meaning of a sentence and how it is grounded in video. We exploit this methodology in three ways. In the first, a video clip along with a sentence are taken as input and the participants in the event described by the sentence are highlighted, even when the clip depicts multiple similar simultaneous events. In the second, a video clip is taken as input without a sentence and a sentence is generated that describes an event in that clip. In the third, a corpus of video clips is paired with sentences which describe some of the events in those clips and the meanings of the words in those sentences are learned. We learn these meanings without needing to specify which attribute of the video clips each word in a given sentence refers to. The learned meaning representations are shown to be intelligible to humans.


2021 ◽  
Author(s):  
Norma-Jane E. Thompson

Currently, the World Wide Web allows web pages to be produced in most written languages. Many deaf people, however, use a visual-spatial language with no written equivalent (e.g. American Sign Language). SignLink Studio, a software tool for designing sign language web pages, allows for hyperlinking within video clips so that sign-language-only web pages can be created. However, this tool does not allow for other interactive elements such as online forms. In this thesis, a model for an online sign language form is proposed and evaluated. A study with 22 participants was conducted to examine whether there were differences in performance or preferences between sign language forms and text forms, and between two presentation styles (all-at-once versus one-at-a-time). The results showed no clear performance advantage between sign language and text; however, participants were interested in having online questions presented in sign language. Also, there were no advantages in performance or preferences between presentation styles.


REPRESENTAMEN ◽  
2018 ◽  
Vol 3 (01) ◽  
Author(s):  
Rian Rian ◽  
Edy Sudaryanto ◽  
Judhi Hari Wibowo

This research is motivated by the spread of symbols of satanism in the modern era across various mass media, especially video clips, which are used to deliver messages. One top band, Dewa 19, uses imagery associated with satanism and spreads these symbols through the video clips of its songs. The focus of this study is the meaning of the symbols scattered through each video clip by the band Dewa 19. The theory used is Charles Sanders Peirce’s semiotics, with its triadic model and trichotomy of representamen, interpretant, and object. The research method is qualitative with a descriptive design. The study finds that the symbols scattered through the band’s video clips are symbolic representations of satanism, among others: the Eye of Horus, the truncated pyramid, the chessboard, a photograph of the founder of the Church of Satan, the god Ra, and the Ankh symbol. Keywords: semiotics, satanism, Dewa 19, symbols.


2021 ◽  
Vol 23 (5) ◽  
pp. 652-669
Author(s):  
Oskar Lindwall ◽  
Michael Lynch

This paper is an analysis of a video clip of an interview between a reporter and an ice hockey player following a game in which the player was involved in a hard collision with a member of the opposing team. The paper explores blame attribution and how participants claim and disclaim expertise in a way that supports or undermines assertions of having correctly seen and assessed the actions shown on tape. Our analysis focuses on the video of the interview, and it also examines relevant video clips of the collision and various commentaries about the identities of the characters and their actions shown on the videos. In brief, the study is a third-order investigation of recorded-actions-under-analysis. It uses the videos and commentaries as “perspicuous phenomena” that illuminate and complicate how the members’ own analysis of action categories is bound up with issues of expertise, evidence, and blame.


Author(s):  
Jia Chen ◽  
Cui-xia Ma ◽  
Hong-an Wang ◽  
Hai-yan Yang ◽  
Dong-xing Teng

As the use of instructional video becomes a key component of e-learning, there is an increasing need for a distributed system that supports collaborative video annotation and organization. In this paper, the authors construct a distributed environment on top of NaradaBrokering to support collaborative operations on video material when users are located in different places. The concept of video annotation is enriched, making it a powerful medium for improving how instructional video is organized and viewed. With panorama-based and interpolation-based methods, all related users can annotate or organize videos simultaneously. From these annotations, a video organization structure is then built by linking them with other video clips or annotations. Finally, an informal user study was conducted; the results show that the system improves the efficiency of video organizing and viewing and enhances users’ participation in the design process, with a good user experience.


Author(s):  
Neil C. Rowe

Captions are text that describes some other information; they are especially useful for describing nontext media objects (images, audio, video, and software). Captions are valuable metadata for managing multimedia, since they help users better understand and remember (McAninch, Austin, & Derks, 1992-1993) and permit better indexing of media. Captions are essential for effective data mining of multimedia data, since only a small amount of text in typical documents with multimedia—1.2% in a survey of random World Wide Web pages (Rowe, 2002)—describes the media objects. Thus standard Web browsers do poorly at finding media without knowledge of captions. Multimedia information is increasingly common in documents as computer technology improves in speed and ability to handle it, and people need multimedia for a variety of purposes like illustrating educational materials and preparing news stories. Captions are also valuable because nontext media rarely specify internally the creator, date, or spatial and temporal context, and cannot convey linguistic features like negation, tense, and indirect reference. Furthermore, experiments with users of multimedia-retrieval systems show a wide range of needs (Sutcliffe, Hare, Doubleday, & Ryan, 1997), but a focus on media meaning rather than appearance (Armitage & Enser, 1997). This suggests that content analysis of media is unnecessary for many retrieval situations, which is fortunate, because it is often considerably slower and less reliable than caption analysis. But using captions requires finding them and understanding them. Many captions are not clearly identified, and the mapping from captions to media objects is rarely easy. Nonetheless, the restricted semantics of media and captions can be exploited.


Transfers ◽  
2012 ◽  
Vol 2 (1) ◽  
pp. 1-4
Author(s):  
Gijs Mom ◽  
Georgine Clarsen ◽  
Cotten Seiler

At Eindhoven University of Technology, which has a modest reputation for collecting contemporary art, an exhibition of large machines and poetic video clips by father and son Van Bakel invites passersby to reflect on mobility. Gerrit van Bakel, who died more than a quarter century ago, became known for his Tarim Machine, a vehicle that moves at such a low speed that it almost does not matter whether it moves or not. The propulsion principle—for those who love technology—rests on the dilatation energy of oil in tubes propelling (if propelling is the right word …) the contraption a couple of centimeters over a hundred years or so, as long as there is a change in temperature to trigger the dilatation. Emphasizing his father’s insights, Michiel van Bakel exhibits a video clip of a horse and rider galloping over a square in Rotterdam, in which the camera position and camera work are arranged so that the horse seems to turn around its axis while the environment rotates at a different tempo. Mobility, these Dutch artists convey, is often not what it seems to be.


Author(s):  
Beth Archibald Tang

At least 15% of the American population has a disability (Kaye, 1998); some estimate it is as high as one in five. For research studies, the United States government usually defines the term disability as a limitation in a person’s major life activities during daily living, working, and attending school (Job Accommodation Network, 1992). Assistive technologies—the tools that help individuals complete their daily tasks—serve as adjuncts that help to bridge the gap between dependence and self-reliance. Webmasters have their tools, too. They use software that enhances their sites and makes them interesting. While Web usability specialists place emphasis on completing tasks, the purpose of some Web sites may be more about evoking a “wow” response, and less about imparting information that visitors can use. On occasion, being able to access these Web pages requires that users go to a third-party Web site and download plug-ins to listen to an audio file, watch a video clip, or read downloaded documents. For people with disabilities, however, many of these Web sites inadvertently establish barriers that could be prevented.

