Automatic Summarization Method for First-Person-View Video Based on Object Gaze Time

Author(s):  
Keita Hamaoka ◽  
Yasuyuki Kono


Author(s):  
Vladimir S. Simankov ◽  
Demid M. Tolkachev

The article addresses the problem of automatically summarizing arbitrary text. Several works in the field are analyzed using comparison and classification methods, with particular attention to producing a summary as the answer to an arbitrary question. Semantic relations between sentences are identified with a set of rules based on the syntax and semantics of the language; these rules are represented as regular expressions, i.e., patterns of characters and metacharacters that define search rules. Taking semantic-coherence features into account, an improved sentence-similarity method is developed that measures the degree to which one sentence is included in another. This measure makes it possible to place logical stress on words more precisely during summarization and to detect contradictions. A modified automatic summarization method oriented at a specific problem is proposed, and it is concluded to be quite effective for the automatic search for answers to questions on the Internet.
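The abstract does not give the authors' formulas, so the following is a minimal sketch of the two ideas it names: an asymmetric word-overlap measure of how far one sentence is included in another, and a regular-expression rule built from characters and metacharacters. The names `tokenize`, `inclusion`, and `CONTRAST_RULE` are illustrative assumptions, not the article's actual rule set.

```python
import re

def tokenize(sentence: str) -> set[str]:
    """Lowercase a sentence and split it into a set of word tokens."""
    return set(re.findall(r"\w+", sentence.lower()))

def inclusion(a: str, b: str) -> float:
    """Asymmetric inclusion measure: the fraction of sentence `a`'s
    words that also occur in sentence `b`. Returns a value in [0, 1];
    1.0 means every word of `a` is contained in `b`."""
    words_a, words_b = tokenize(a), tokenize(b)
    if not words_a:
        return 0.0
    return len(words_a & words_b) / len(words_a)

# A toy regex "pattern of characters and metacharacters" of the kind
# the article describes: a hypothetical rule flagging an adversative
# link between clauses, a stand-in for the authors' rule set.
CONTRAST_RULE = re.compile(r"\b(but|however|although)\b", re.IGNORECASE)

if __name__ == "__main__":
    s1 = "The method detects contradictions between sentences."
    s2 = "The improved method detects contradictions between sentences automatically."
    print(f"inclusion(s1, s2) = {inclusion(s1, s2):.2f}")  # 1.00: s1 fully included in s2
    print(f"inclusion(s2, s1) = {inclusion(s2, s1):.2f}")  # 0.75: s2 adds words
    print(bool(CONTRAST_RULE.search("The results are good, but incomplete.")))
```

Because the measure is asymmetric, comparing both directions shows which sentence subsumes the other, which is the property that supports detecting contradictions and shifted emphasis.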


2011 ◽  
Vol 271-273 ◽  
pp. 154-157
Author(s):  
Xin Lai Tang ◽  
Xiao Rong Wang

This paper proposes a Chinese automatic summarization method based on concept acquisition and an improved K-means algorithm. The idea of the approach is to obtain the concepts of words from HowNet and to use concepts, rather than words, as features. A conceptual vector space model and the improved K-means algorithm are then used to form the summary. Experimental results indicate a clear superiority of the proposed method over the traditional word-based method under the proposed evaluation scheme.
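The abstract specifies neither the HowNet lookup nor the "improved" K-means variant, so the sketch below substitutes a toy word-to-concept table and scikit-learn's standard KMeans to show the pipeline: words are mapped to concepts, sentences become concept-frequency vectors, and the sentence nearest each cluster centroid is kept as the summary. `CONCEPT_OF`, `concept_vector`, and `summarize` are hypothetical names.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

# Toy stand-in for a HowNet word-to-concept lookup; the real system
# maps each Chinese word to its HowNet concept definition.
CONCEPT_OF = {
    "car": "vehicle", "truck": "vehicle", "bus": "vehicle",
    "apple": "fruit", "pear": "fruit",
    "run": "move", "drive": "move",
}

def concept_vector(sentence: str, concepts: list[str]) -> np.ndarray:
    """Represent a sentence as concept frequencies instead of word frequencies."""
    vec = np.zeros(len(concepts))
    for word in sentence.lower().split():
        concept = CONCEPT_OF.get(word)
        if concept is not None:
            vec[concepts.index(concept)] += 1
    return vec

def summarize(sentences: list[str], k: int = 2) -> list[str]:
    """Cluster sentences in concept space with k-means and keep the
    sentence closest to each centroid as the extractive summary."""
    concepts = sorted(set(CONCEPT_OF.values()))
    matrix = np.array([concept_vector(s, concepts) for s in sentences])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(matrix)
    nearest, _ = pairwise_distances_argmin_min(km.cluster_centers_, matrix)
    return [sentences[i] for i in sorted(set(nearest))]

if __name__ == "__main__":
    doc = [
        "the car and the truck drive fast",
        "a bus can drive in the city",
        "an apple and a pear are sweet",
    ]
    print(summarize(doc, k=2))
```

Mapping synonyms such as "car" and "truck" to the shared concept "vehicle" is what lets the clustering group thematically related sentences even when they share no surface words, which is the stated advantage of concept features over word features.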


ASHA Leader ◽  
2014 ◽  
Vol 19 (1) ◽  
pp. 72-72
Author(s):  
Kelli Jeffries Owens

2018 ◽  
Vol 23 (3) ◽  
pp. 189-205 ◽  
Author(s):  
Renatus Ziegler ◽  
Ulrich Weger

Abstract. In psychology, thinking is typically studied in terms of a range of behavioral or physiological parameters, focusing, for instance, on the mental contents or the neuronal correlates of the thinking process proper. In the current article, by contrast, we seek to complement this approach with an exploration into the experiential or inner dimensions of thinking. These are subtle and elusive and hence easily escape a mode of inquiry that focuses on externally measurable outcomes. We illustrate how a sufficiently trained introspective approach can become a radar for facets of thinking that have found hardly any recognition in the literature so far. We consider this an important complement to third-person research because these introspective observations not only allow for new insights into the nature of thinking proper but also cast other psychological phenomena in a new light, for instance, attention and the self. We outline and discuss our findings and also present a roadmap for the reader interested in studying these phenomena in detail.


Author(s):  
Matthias Hofer

Abstract. This study examined the perceived enjoyment of different movie genres. In an online experiment, 176 students were randomly divided into two groups (n = 88 each) and asked to estimate how much they, their closest friends, and young people in general enjoyed either serious or light-hearted movies. These self–other differences in perceived enjoyment were also assessed as a function of the individual motivations underlying entertainment media consumption. The results showed a clear third-person effect for light-hearted movies and a first-person effect for serious movies. The third-person effect for light-hearted movies was moderated by the level of hedonic motivation: participants with high hedonic motivation did not perceive their own and others’ enjoyment of light-hearted films differently. Eudaimonic motivations, however, did not moderate first-person perceptions in the case of serious films.

