Audience reaction movie trailers and the Paranormal Activity franchise

2014, Vol 18
Author(s): Alexander Swanson

This article addresses the concept and growing practice of audience reaction movie trailers, specifically for films in the horror genre. Popularized by the Paranormal Activity series of films, these trailers primarily utilize green night-vision footage of a movie theater audience reacting to the film being advertised, but they also incorporate webcam recordings of screaming fans, documentary-style B-roll of audiences filing into preview screenings with high levels of anticipation, and close-up shots of spectator facial expressions, often with no footage whatsoever from the film itself. In analyzing these audience-centric promotional paratexts, my aim is to show how they attempt to sell and legitimize the experiential, communal, and social qualities of the theatrical movie viewing experience while at the same time calling for increased fan investment in both physical and online spaces. Through the analysis of audience reaction trailers, this article hopes to both join and engender conversations about horror fan participation, the manipulative nature of anticipatory texts, and the current state of horror gimmickry in the form of the promotional paratext.

Author(s): Omar Shaikh, Stefano Bonino

The Colourful Heritage Project (CHP) is the first community-heritage-focused charitable initiative in Scotland aiming to preserve and celebrate the contributions of early South Asian and Muslim migrants to Scotland. It has successfully collated a considerable number of oral stories into an online video archive, providing first-hand accounts of the personal journeys and emotions of the earliest generation of these migrants on their arrival in Scotland and highlighting the inspiring lessons that can be learnt from them. The CHP’s aims are first to capture these stories, second to celebrate the community’s achievements, and third to inspire present and future South Asian, Muslim and Scottish generations. It is a community-led charitable project that has been actively documenting a collection of inspirational stories and personal accounts, told at first hand by the protagonists themselves, ranging all the way from the time of partition itself to resettling in Pakistan and finally arriving in Scotland. The video footage enables the public to see their facial expressions, feel their emotions and hear their voices, creating poignant memories of these great men and women and helping to build a better understanding of the South Asian and Muslim community’s earliest days in Scotland.


Author(s): Kamal Naina Soni

Abstract: Facial expressions play an important role in inferring an individual's emotional state. They help determine a person's current state and mood, as emotion can be read from facial features such as the eyes, cheeks and forehead, or even the curve of a smile. A survey confirmed that people use music as a form of expression and often relate to a particular piece of music according to their emotions. Considering how music affects the human brain and body, our project extracts the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suited to that mood is presented to the user. This can help lift or calm the user's mood, and it also retrieves suitable songs more quickly, saving the time otherwise spent searching for them, while providing software that can be used anywhere to play music according to the detected emotion.
Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
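The abstract describes the pipeline only at a high level, so the following Python sketch is purely illustrative: it assumes a pretrained facial-emotion classifier behind a hypothetical predict_emotion() helper and an invented mood-to-playlist mapping, neither of which comes from the paper.

```python
import cv2

# Hypothetical helper: any pretrained facial-emotion classifier could sit here
# (e.g. a CNN mapping a cropped face image to a label such as "happy" or "sad").
def predict_emotion(face_img) -> str:
    raise NotImplementedError("plug in a trained emotion-recognition model")

# Illustrative mood-to-playlist mapping (not from the paper)
PLAYLISTS = {
    "happy": ["upbeat_pop.mp3", "dance_mix.mp3"],
    "sad": ["soft_piano.mp3", "acoustic_calm.mp3"],
    "angry": ["slow_ambient.mp3"],
    "neutral": ["lofi_focus.mp3"],
}

def recommend_from_webcam() -> list[str]:
    """Capture one frame, detect the largest face, classify its emotion,
    and return a playlist matching that mood."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # default webcam
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return PLAYLISTS["neutral"]

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return PLAYLISTS["neutral"]

    # Use the largest detected face for the mood estimate
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    emotion = predict_emotion(gray[y:y + h, x:x + w])
    return PLAYLISTS.get(emotion, PLAYLISTS["neutral"])
```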


Author(s): Tapiwanashe Miranda Sanyanga, Munyaradzi Sydney Chinzvende, Tatenda Duncan Kavu, John Batani

Due to the increase in video content being generated from surveillance cameras and filming, video analysis becomes imperative. It can be tedious to watch hours of footage captured by a surveillance camera just to find the desired segment. Current state-of-the-art video analysis methods do not address the problem of searching for and localizing a particular object in a video using the name of the object as a query and returning only the segment of the video clip showing instances of that object. In this research the authors combine implementations from existing work and apply the dropping frames algorithm to produce a shorter, trimmed video clip showing the target object specified by the search tag. The resulting video is short and specific to the object of interest.
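The paper does not publish its implementation, but a minimal sketch of the dropping-frames idea, under the assumption that some object detector can report which class labels appear in a frame (the detect_labels() helper below is hypothetical), could look like this:

```python
import cv2

# Hypothetical helper: any object detector (e.g. a YOLO-style model) that returns
# the set of class labels present in a frame could be plugged in here.
def detect_labels(frame) -> set[str]:
    raise NotImplementedError("plug in a trained object detector")

def trim_video_by_tag(src_path: str, dst_path: str, tag: str) -> None:
    """Keep only the frames in which the queried object appears; drop the rest."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if tag in detect_labels(frame):  # keep frame only if the tag is detected
            out.write(frame)             # frames without the object are dropped
    cap.release()
    out.release()
```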


2018, Vol 15 (03), pp. 1850006
Author(s): Sandra Costa, Alberto Brunete, Byung-Chull Bae, Nikolaos Mavridis

In order to create effective storytelling agents, three fundamental questions must be answered: first, is a physically embodied agent preferable to a virtual agent or a voice-only narration? Second, does a human voice have an advantage over a synthesized voice? Third, how should the emotional trajectory of the different characters in a story be related to a storyteller’s facial expressions during storytelling time, and how does this correlate with the apparent emotions on the faces of the listeners? The results of two specially designed studies indicate that the physically embodied robot elicits greater narrative attention from the listener than a virtual embodiment, that a human voice is preferable over the current state of the art in text-to-speech, and that there is a complex yet interesting relation between the emotion lines of the story, the facial expressions of the narrating agent, and the emotions of the listener, with the listener’s empathizing evident through their facial expressions. This work constitutes an important step towards emotional storytelling robots that can observe their listeners and adapt their style in order to maximize their effectiveness.


2021, Vol 3 (2), pp. 414-434
Author(s): Liangfei Zhang, Ognjen Arandjelović

Facial expressions provide important information concerning one’s emotional state. Unlike regular facial expressions, microexpressions are particular kinds of small, quick facial movements, generally lasting only 0.05 to 0.2 s. They reflect individuals’ subjective emotions and real psychological states more accurately than regular expressions, which can be acted. However, the small range and short duration of the facial movements involved make microexpressions challenging for humans and machines alike to recognize. In the past decade, automatic microexpression recognition has attracted the attention of researchers in psychology, computer science, and security, amongst others, and a number of specialized microexpression databases have been collected and made publicly available. The purpose of this article is to provide a comprehensive overview of the current state of the art in automatic facial microexpression recognition. Specifically, the features and learning methods used in automatic microexpression recognition, the existing microexpression data sets, the major outstanding challenges, and possible future development directions are all discussed.
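As a concrete illustration of one family of approaches such surveys cover (hand-crafted motion features plus a conventional classifier), the sketch below computes a dense optical-flow orientation histogram between onset and apex frames and trains an SVM; the specific feature and classifier choices are assumptions for illustration, not the article's recommendation.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def flow_features(onset_gray: np.ndarray, apex_gray: np.ndarray, bins: int = 8) -> np.ndarray:
    """Histogram of optical-flow orientations between onset and apex frames,
    a simple stand-in for the hand-crafted features used in early work."""
    flow = cv2.calcOpticalFlowFarneback(onset_gray, apex_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def train_classifier(pairs, labels):
    """pairs: list of (onset, apex) grayscale frames; labels: emotion classes."""
    X = np.stack([flow_features(onset, apex) for onset, apex in pairs])
    clf = SVC(kernel="linear")
    clf.fit(X, labels)
    return clf
```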


Author(s): R. Oliynik, S. Tsilyna, O. Yermolenko

Based on the experience of hostilities, including during the JFO (ATO), military experts today pay much attention to the development and improvement of optical and optoelectronic devices. This is due to the continuous improvement and development of new-generation weapons systems with enhanced tactical and technical characteristics, which reduce the time objects spend in the detection and engagement area, reduce the visibility of objects, increase their protection against interference and countermeasures, and change their tactics. The main advantages of optoelectronic devices are the covertness of their use (unlike radar and radio equipment, they do not require additional protection against jamming), the relative simplicity of their design and operation, their small dimensions, low energy consumption, and environmental cleanliness. The current state of threats to Ukraine's sovereignty and territorial integrity, above all the ongoing aggression of the Russian Federation, requires the introduction of the necessary countermeasures and improved approaches to the formation of military-technical policy, taking into account the urgent need to update existing weapons and military (special) equipment. There is a need to create electron-optical transducers or matrix devices of other types operating in the visible and infrared ranges for night-vision devices, as well as optoelectronic systems for detecting (registering) the laser radiation of rangefinders and of the control systems of homing projectiles and missiles in the optical range of the spectrum. The paper describes areas for improving optoelectronic means of surveillance, detection and aiming in order to increase the effectiveness of the combat employment of armored weapons. The relevance of the study lies in the need to introduce fundamentally new concepts for the integration of optical and optoelectronic devices.


2017, Vol 4 (8), pp. 1-9
Author(s): Satoshi Hirata, Kohki Fuwa, Masako Myowa

Unlike mirror self-recognition, recognizing one's own image in delayed video footage may indicate the presence of a concept of self that extends across time and space. While humans typically show this ability around 4 years of age, it is unknown whether this capacity is found in non-human animals. In this study, chimpanzees performed a modified version of the mark test to investigate whether they could remove stickers placed on the face and head while watching live and delayed video images. The results showed that three of five chimpanzees consistently removed the mark in delayed-viewing conditions, whereas they removed the stickers much less frequently in control video conditions that lacked a link to their current state. These findings suggest that chimpanzees, like human children aged 4 years and older, can comprehend temporal dissociation in their concept of self.


PeerJ, 2019, Vol 7, pp. e7623
Author(s): Linda S. Oña, Wendy Sandler, Katja Liebal

Compositionality refers to a structural property of human language, according to which the meaning of a complex expression is a function of the meaning of its parts and the way they are combined. Compositionality is a defining characteristic of all human language, spoken and signed. Comparative research into the emergence of human language aims at identifying precursors to such key features of human language in the communication of other primates. While it is known that chimpanzees, our closest relatives, produce a variety of gestures, facial expressions and vocalizations in interactions with their group members, little is known about how these signals combine simultaneously. Therefore, the aim of the current study is to investigate whether there is evidence for compositional structures in the communication of chimpanzees. We investigated two semi-wild groups of chimpanzees, with a focus on their manual gestures and their combinations with facial expressions across different social contexts. If there are compositional structures in chimpanzee communication, adding a facial expression to a gesture should convey a different message than the gesture alone, a difference that we expect to be measurable by the recipient’s response. Furthermore, we expect context-dependent usage of these combinations. Based on a form-based coding procedure of the collected video footage, we identified two frequently used manual gestures (stretched arm gesture and bent arm gesture) and two facial expressions (bared teeth face and funneled lip face). We analyzed whether the recipients’ response varied depending on the signaler’s usage of a given gesture + face combination and the context in which these were used. Overall, our results suggest that, in positive contexts, such as play or grooming, specific combinations had an impact on the likelihood of the occurrence of particular responses. Specifically, adding a bared teeth face to a gesture either increased the likelihood of affiliative behavior (for the stretched arm gesture) or eliminated the bias toward an affiliative response (for the bent arm gesture). We show for the first time that the components under study are recombinable, and that different combinations elicit different responses, a property that we refer to as componentiality. Yet our data do not suggest that the components have consistent meanings in each combination, a defining property of compositionality. We propose that the componentiality exhibited in this study represents a necessary stepping stone toward a fully evolved compositional system.


Author(s): Vali Engalychev, Elena Leonova, Aleksey Havylo

The objective of this study is to develop and experimentally verify an expert method of non-contact psychological and physiological diagnostics of a person’s emotional and mental state by measuring complex patterns of facial micro-movements beyond the person’s conscious control through the use of special software, as well as to determine the opportunities and limitations of using this method in forensic psychological examinations. The authors substantiate the prospects of unbiased methods for assessing a person’s emotional and mental state by measuring behavioural patterns with modern digital technologies. The paper identifies the key factors that determine the effectiveness of forensic psychological examination methods. The results of the study revealed both general regularities in distinguishing patterns and their individual characteristics. However, there were no consistently repeated facial patterns that would allow the initial stimuli of emotional reactions to be differentiated. It was established that the essential factor that can be evaluated through the analysis of facial expressions is the level of cognitive load, representing the weight of the situation for a given person. By applying machine-learning methods, the authors developed a technology for the binary classification of questions according to the degree of their subjective cognitive complexity, based on facial micro-movements as a person answers the interview questions. The predictive model served as the basis for the development of a pilot version of an expert study method to assess the subjective cognitive complexity of interview questions. The paper provides minimum technical requirements for the “Systems of Psychological and Physiological Studies” software for the analysis of video footage in forensic psychological examination, including standardized requirements for recording an examined person’s facial expressions during the interview. The authors also describe the sequence of the expert’s actions and provide the legal rationale for implementing the new expert study method.
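The paper does not disclose its exact pipeline, so the sketch below only illustrates the general shape of such a binary classifier: one assumed feature vector per interview question summarizing facial micro-movements, a standard classifier, and a cross-validated accuracy estimate; all names and model choices are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical input: one feature vector per interview question, summarizing the
# respondent's facial micro-movements while answering (e.g. per-region motion
# statistics extracted from the video). Labels: 1 = subjectively complex question,
# 0 = simple question. Feature design and model are illustrative assumptions.
def train_complexity_classifier(features: np.ndarray, labels: np.ndarray):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, features, labels, cv=5)  # rough accuracy estimate
    clf.fit(features, labels)
    return clf, scores.mean()
```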


Author(s): G.D. Danilatos

Over recent years a new type of electron microscope - the environmental scanning electron microscope (ESEM) - has been developed for the examination of specimen surfaces in the presence of gases. A detailed series of reports on the system has appeared elsewhere. A review summary of the current state and potential of the system is presented here. The gas composition, temperature and pressure can be varied in the specimen chamber of the ESEM. With air, the pressure can be up to one atmosphere (about 1000 mbar). Environments of only fully saturated water vapor at room temperature (20-30 mbar) can be easily maintained, whilst liquid water or other solutions, together with uncoated specimens, can be imaged routinely in various applications.

