Cross-Domain Recommendation Method Based On Multi-Layer Graph Analysis With Visual Information

Author(s):  
Taisei Hirakawa ◽  
Keisuke Maeda ◽  
Takahiro Ogawa ◽  
Satoshi Asamizu ◽  
Miki Haseyama
2003 ◽  
Vol 15 (1) ◽  
pp. 136-151 ◽  
Author(s):  
Ela I. Olivares ◽  
Jaime Iglesias ◽  
Socorro Rodríguez-Holguín

The N400 brain event-related potential (ERP) is a mismatch negativity originally found in response to semantic incongruences of a linguistic nature and is used paradigmatically to investigate memory organization in various domains of information, including that of faces. In the present study, we analyzed the different mismatch negativities evoked in N400-like paradigms related to recognition of newly learned faces with or without associated verbal information. ERPs were compared in the following conditions: (1) mismatching features (eyes-eyebrows) using a facial context corresponding to faces learned without associated verbal information (“pure” intradomain facial processing); (2) mismatching features using a facial context corresponding to faces learned with associated occupations and proper names (“nonpure” intradomain facial processing); (3) mismatching occupations using a facial context (cross-domain processing); and (4) mismatching names using an occupation context (intradomain verbal processing). Results revealed that mismatching stimuli in all four conditions elicited a mismatch negativity analogous to the N400 but with different timing and topographical patterns. The onset of the mismatch negativity occurred earliest in Conditions 1 and 2, followed by Condition 4, and latest in Condition 3. The negativity had the shortest duration in Condition 1 and the longest duration in Condition 3. Bilateral parietal activity was confirmed in all conditions, in addition to a predominant right posterior temporal localization in Condition 1, a predominant right frontal localization in Condition 2, an occipital localization in Condition 3, and a more widely distributed (although posteriorly predominant) localization in Condition 4. These results support the existence of multiple N400s, and particularly of a nonlinguistic N400 related to purely visual information, which can be evoked by facial structure processing in the absence of verbal-semantic information.
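
In analyses of this kind, each mismatch effect is typically quantified as a difference wave (mismatch-minus-match ERP) whose onset latency and duration can then be compared across conditions. The following is a minimal sketch of that computation on synthetic single-trial data, not the study's actual pipeline; the sampling rate, trial count, −2 µV onset criterion, and the Gaussian-shaped negativity near 450 ms are all illustrative assumptions.

```python
import numpy as np

def grand_average(epochs):
    """Average single-trial epochs (trials x samples) into an ERP."""
    return epochs.mean(axis=0)

def difference_wave(mismatch_erp, match_erp):
    """N400-style mismatch negativity: mismatch minus match ERP."""
    return mismatch_erp - match_erp

def onset_latency(diff, times, threshold_uv=-2.0, min_samples=10):
    """First time at which the difference wave stays below a negativity
    threshold for at least `min_samples` consecutive samples."""
    run = 0
    for i, below in enumerate(diff < threshold_uv):
        run = run + 1 if below else 0
        if run >= min_samples:
            return times[i - min_samples + 1]
    return None

# Synthetic example: 30 trials of 1-s epochs sampled at 500 Hz (assumed values).
rng = np.random.default_rng(0)
fs, n_trials = 500, 30
times = np.arange(0.0, 1.0, 1.0 / fs)  # seconds after stimulus onset
n400 = -4.0 * np.exp(-((times - 0.45) ** 2) / (2 * 0.05**2))  # negativity at ~450 ms
match_epochs = rng.normal(0.0, 1.5, (n_trials, times.size))
mismatch_epochs = rng.normal(0.0, 1.5, (n_trials, times.size)) + n400

diff = difference_wave(grand_average(mismatch_epochs), grand_average(match_epochs))
onset = onset_latency(diff, times)
print("estimated onset (s):", None if onset is None else round(float(onset), 3))
```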



2012 ◽  
Vol 616-618 ◽  
pp. 2171-2174
Author(s):  
Yan Qiu Hua

Visual concept detection is a practical way to detect, manage, and classify visual information. Whether in image retrieval or video retrieval, a system can learn image descriptions on its own to distinguish object and scene concepts. However, changes in viewing conditions or viewpoint alter the image descriptions the system collects, so an effective visual concept classification method should be invariant to accidental differences in recording circumstances. This paper analyzes and summarizes the current research status and detection methods in visual concept detection, including salient point detection; the use of socially tagged images as a training resource for automated concept detection; the extraction of distinctive invariant features from images; and cross-domain kernel learning for visual concept detection. Finally, the paper assesses the effectiveness of these methods on the MIR Flickr collection. Through this comparison, we aim to identify the advantages and disadvantages of these typical methods and to provide a valuable reference for researchers interested in visual concept detection and classification.
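
One of the surveyed directions, extracting distinctive invariant features, is commonly realized as a SIFT bag-of-visual-words pipeline feeding a per-concept classifier. The sketch below illustrates that generic pipeline rather than any specific method reviewed here; the vocabulary size, the linear SVM, and the commented-out load_mirflickr_subset helper are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def sift_descriptors(image_paths):
    """Extract scale/viewpoint-invariant SIFT descriptors per image."""
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None else np.empty((0, 128)))
    return per_image

def bow_histograms(per_image_desc, vocab_size=200):
    """Quantize descriptors against a learned visual vocabulary and
    build one normalized bag-of-visual-words histogram per image."""
    all_desc = np.vstack([d for d in per_image_desc if len(d)])
    vocab = KMeans(n_clusters=vocab_size, n_init=4).fit(all_desc)
    hists = np.zeros((len(per_image_desc), vocab_size))
    for i, desc in enumerate(per_image_desc):
        if len(desc):
            words, counts = np.unique(vocab.predict(desc), return_counts=True)
            hists[i, words] = counts / counts.sum()
    return hists

# Hypothetical usage with binary concept-present/absent labels:
# paths, labels = load_mirflickr_subset()      # assumed data-loading helper
# X = bow_histograms(sift_descriptors(paths))
# detector = LinearSVC().fit(X, labels)        # one detector per concept
```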


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found. The multimodal condition elicited the smallest P200 and N300 amplitudes, followed by larger amplitudes in each component for the bimodal condition; the largest amplitudes were observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception as reflected in the P200 and N300 components may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.
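
Component effects like these are typically quantified as mean amplitudes within fixed post-stimulus windows, compared across the three channel conditions. A minimal sketch follows; the window boundaries (150–250 ms for the P200, 250–350 ms for the N300), the sampling rate, and the placeholder waveforms are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def mean_amplitude(erp, times, window):
    """Mean voltage (µV) of an averaged ERP within a (start, end) window, in seconds."""
    start, end = window
    mask = (times >= start) & (times < end)
    return erp[mask].mean()

fs = 500
times = np.arange(-0.2, 0.8, 1.0 / fs)  # epoch time axis, seconds from stimulus onset

# Placeholder waveforms standing in for real condition averages at one electrode.
rng = np.random.default_rng(1)
erps = {cond: rng.normal(0.0, 1.0, times.size)
        for cond in ("unimodal", "bimodal", "multimodal")}

for cond, erp in erps.items():
    p200 = mean_amplitude(erp, times, (0.150, 0.250))  # assumed P200 window
    n300 = mean_amplitude(erp, times, (0.250, 0.350))  # assumed N300 window
    print(f"{cond:>10}: P200 {p200:+.2f} µV, N300 {n300:+.2f} µV")
```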


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.
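
The design maps onto a standard 2 x 3 between-subjects factorial analysis, with the inverted-U claim corresponding to a negative quadratic trend of content level within the multitasking group. The sketch below runs that analysis on simulated scores; the column names, group means, and sample size per cell are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated stand-in: recognition scores for 2 (multitasking) x 3 (content) cells.
rng = np.random.default_rng(2)
rows = []
for multitask in (0, 1):
    for level, name in enumerate(("low", "medium", "high")):
        # Build in an inverted U only while multitasking: medium > low, high.
        mu = 5.0 - 1.5 * multitask + (1.0 if (multitask and name == "medium") else 0.0)
        for score in rng.normal(mu, 1.0, 20):
            rows.append({"multitask": multitask, "content": name,
                         "level": level, "recognition": score})
df = pd.DataFrame(rows)

# Factorial ANOVA: main effects plus the multitasking x content interaction.
model = smf.ols("recognition ~ C(multitask) * C(content)", data=df).fit()
print(anova_lm(model, typ=2))

# Quadratic trend within the multitasking group; an inverted U shows up
# as a negative coefficient on the squared content level.
mt = df[df.multitask == 1].copy()
mt["level_sq"] = mt["level"] ** 2
trend = smf.ols("recognition ~ level + level_sq", data=mt).fit()
print(trend.params["level_sq"], trend.pvalues["level_sq"])
```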

