Putting names to faces

2000 ◽  
Vol 8 (1) ◽  
pp. 9-62 ◽  
Author(s):  
Derek R. Carson ◽  
A. Mike Burton ◽  
Vicki Bruce

It is well established that the retrieval of names is harder than the retrieval of other identity-specific information. This paper reviews the more influential accounts put forward to explain why names are so difficult to retrieve, and a series of five experiments tests a number of these accounts. Experiments One to Three examine the claims that names are hard to recall because they are typically meaningless (Cohen, 1990) or unique (Burton & Bruce, 1992; Brédart, Valentine, Calder, & Gassi, 1995). Participants are shown photographs of unfamiliar people (Experiments One and Two) or familiar people (Experiment Three) and given three pieces of information about each: a name, a unique piece of information, and a shared piece of information. Learning follows an incidental procedure, and participants are given a surprise recall test. In each experiment shared information is recalled most often, followed by unique information, followed by names. Experiment Four tests both the ‘uniqueness’ account and an account based on the specificity of the naming response (Brédart, 1993). Participants are presented with famous faces and asked to categorise them by semantic group (occupation). Results indicate that less time is needed to perform this task when the group is a subset of a larger semantic category. A final experiment examines the claim that names might take longer to access because they are retrieved less often than other classes of information. Latencies show that participants remain more efficient at categorising faces by occupation than by name even after extra practice at naming the faces. We conclude that the explanation best able to account for the data is that names are stored separately from other semantic information and can only be accessed after other identity-specific information has been retrieved. However, we also argue that the demands we make of these explanations make it likely that no single theory will be able to account for all existing data.

2018 ◽  
Vol 9 (4) ◽  
pp. 675
Author(s):  
Li Min Chen

The study investigates whether semantically related word-root sets, such as -graph- & -scrib-, meaning ‘to write’, assist the learning and analysis of morphologically complex academic words in the EFL middle-high setting. Two intact classes totalling 88 EFL learners (L1: Mandarin) were taught with two different word lists, one grouped under semantically related word-root sets and the other alphabetically ordered. Learning gains were measured at two levels of sensitivity: two form recognition tests (target words and new words) and one form recall test. Although the effect of semantically related word-root sets appears negative on the form recall test, such sets may assist learners with the form recognition of new words. The study provides specific information to researchers, education practitioners, and publishers interested in form-focused morphological awareness vocabulary instruction.
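As a toy illustration of the two list conditions the study compares (the words and root glosses below are chosen by us for illustration, not taken from the study's materials), the same vocabulary can be grouped by shared word root or simply alphabetized:

```python
# Hypothetical vocabulary illustrating the two grouping conditions:
# semantically related word-root sets vs. alphabetical ordering.
root_sets = {
    "-graph-/-scrib- (to write)": ["autograph", "paragraph", "inscribe", "transcribe"],
    "-dict- (to say)": ["dictate", "predict", "contradict"],
}

# The alphabetical condition presents the same words with no root grouping.
alphabetical = sorted(w for words in root_sets.values() for w in words)
print(alphabetical[0])  # 'autograph'
```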


Author(s):  
Sunny Verma ◽  
Chen Wang ◽  
Liming Zhu ◽  
Wei Liu

Multimodal sentiment analysis combines information from visual, textual, and acoustic representations for sentiment prediction. Recent multimodal fusion schemes combine multiple modalities as a tensor and obtain either the common information, by utilizing neural networks, or the unique information, by modeling a low-rank representation of the tensor. However, both kinds of information are essential, as they capture the inter-modal and intra-modal relationships of the data. In this research, we first propose a novel deep architecture to extract the common information from the multi-mode representations. Furthermore, we propose unique networks to obtain the modality-specific information that enhances the generalization performance of our multimodal system. Finally, we integrate these two aspects of information via a fusion layer and propose a novel multimodal data fusion architecture, which we call DeepCU (Deep network with both Common and Unique latent information). The proposed DeepCU consolidates the two networks for joint utilization and discovery of all-important latent information. Comprehensive experiments on multiple real-world datasets demonstrate the effectiveness of utilizing both the common and the unique information discovered by DeepCU. The source code of DeepCU is available at https://github.com/sverma88/DeepCU-IJCAI19.
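A minimal numeric sketch of the DeepCU idea, not the authors' implementation (all weights are random stand-ins and the dimensions are arbitrary): a common branch fuses the three modalities as an outer-product tensor, unique branches process each modality separately, and a fusion layer joins both aspects:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

# Toy per-modality features (visual, textual, acoustic), batch of 4.
d = 8
feats = {m: rng.normal(size=(4, d)) for m in ("visual", "text", "audio")}

# Common branch: fuse all modalities as an outer-product tensor,
# then project it with a small dense layer.
W_common = rng.normal(size=(d * d * d, 16)) * 0.01
def common_branch(v, t, a):
    tensor = np.einsum("bi,bj,bk->bijk", v, t, a).reshape(len(v), -1)
    return relu(tensor @ W_common)

# Unique branch: one small network per modality.
W_unique = {m: rng.normal(size=(d, 16)) * 0.1 for m in feats}
def unique_branch(feats):
    return np.concatenate([relu(feats[m] @ W_unique[m]) for m in feats], axis=1)

# Fusion layer joins both aspects and emits one sentiment score per item.
common = common_branch(feats["visual"], feats["text"], feats["audio"])
unique = unique_branch(feats)
joint = np.concatenate([common, unique], axis=1)
W_out = rng.normal(size=(joint.shape[1], 1)) * 0.1
score = np.tanh(joint @ W_out)  # bounded sentiment score per batch item
print(score.shape)  # (4, 1)
```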


Author(s):  
Lin Lan ◽  
Zhenguo Li ◽  
Xiaohong Guan ◽  
Pinghui Wang

Despite significant progress, deep reinforcement learning (RL) suffers from data-inefficiency and limited generalization. Recent efforts apply meta-learning to learn a meta-learner from a set of RL tasks such that a novel but related task could be solved quickly. Though specific in some ways, different tasks in meta-RL are generally similar at a high level. However, most meta-RL methods do not explicitly and adequately model the specific and shared information among different tasks, which limits their ability to learn training tasks and to generalize to novel tasks. In this paper, we propose to capture the shared information on the one hand and meta-learn how to quickly abstract the specific information about a task on the other hand. Methodologically, we train an SGD meta-learner to quickly optimize a task encoder for each task, which generates a task embedding based on past experience. Meanwhile, we learn a policy which is shared across all tasks and conditioned on task embeddings. Empirical results on four simulated tasks demonstrate that our method has better learning capacity on both training and novel tasks and attains up to 3 to 4 times higher returns compared to baselines.
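A rough sketch of the split the paper describes, with random stand-in weights rather than meta-learned parameters: a task encoder pools past transitions into a task embedding (the task-specific information), and a single shared policy is conditioned on that embedding:

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0)

obs_dim, act_dim, emb_dim = 6, 2, 4

# Task encoder: turns a batch of (state, action, reward) transitions
# into one task embedding by average-pooling over time.
W_enc = rng.normal(size=(obs_dim + act_dim + 1, emb_dim)) * 0.1
def task_encoder(transitions):
    return relu(transitions @ W_enc).mean(axis=0)

# Shared policy: one set of weights for all tasks, conditioned on the
# task embedding by concatenating it with the current state.
W_pi = rng.normal(size=(obs_dim + emb_dim, act_dim)) * 0.1
def shared_policy(state, task_emb):
    return np.tanh(np.concatenate([state, task_emb]) @ W_pi)

experience = rng.normal(size=(20, obs_dim + act_dim + 1))  # toy past experience
z = task_encoder(experience)                               # task-specific information
action = shared_policy(rng.normal(size=obs_dim), z)        # shared across tasks
print(z.shape, action.shape)  # (4,) (2,)
```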


Author(s):  
Xuan Wu ◽  
Qing-Guo Chen ◽  
Yao Hu ◽  
Dengbao Wang ◽  
Xiaodong Chang ◽  
...  

Multi-view multi-label learning serves as an important framework for learning from objects with diverse representations and rich semantics. Existing multi-view multi-label learning techniques focus on exploiting a shared subspace for fusing multi-view representations, while helpful view-specific information for discriminative modeling is usually ignored. In this paper, a novel multi-view multi-label learning approach named SIMM is proposed which leverages both shared-subspace exploitation and view-specific information extraction. For shared-subspace exploitation, SIMM jointly minimizes a confusion adversarial loss and a multi-label loss to utilize the shared information from all views. For view-specific information extraction, SIMM enforces an orthogonality constraint w.r.t. the shared subspace to utilize view-specific discriminative information. Extensive experiments on real-world data sets clearly show the favorable performance of SIMM against other state-of-the-art multi-view multi-label learning approaches.
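A small numeric sketch of the orthogonality idea, assuming (as is common for such constraints, though the paper's exact loss may differ) a Frobenius-norm penalty on the product of the shared and view-specific representations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-view data: the view embeds into a shared subspace (S) and a
# view-specific subspace (V); the penalty pushes the two apart so they
# carry complementary information. Weights are random stand-ins.
X = rng.normal(size=(5, 10))
S = rng.normal(size=(10, 4))   # shared projection
V = rng.normal(size=(10, 4))   # view-specific projection

def orthogonality_penalty(z_shared, z_specific):
    # Squared Frobenius norm of Z_s^T Z_v; zero when the representations
    # occupy mutually orthogonal subspaces.
    return np.linalg.norm(z_shared.T @ z_specific, "fro") ** 2

pen_before = orthogonality_penalty(X @ S, X @ V)

# Make the view-specific representation exactly orthogonal to the shared
# one by projecting out the shared component (QR-based projector).
Q, _ = np.linalg.qr(X @ S)
Z_v_orth = (X @ V) - Q @ (Q.T @ (X @ V))
pen_after = orthogonality_penalty(X @ S, Z_v_orth)
print(pen_after < pen_before)  # True: the penalty collapses to ~0
```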


Author(s):  
Muhammad Asif Ali ◽  
Yifang Sun ◽  
Xiaoling Zhou ◽  
Wei Wang ◽  
Xiang Zhao

Distinguishing antonyms from synonyms is a key challenge for many NLP applications focused on lexical-semantic relation extraction. Existing solutions relying on large-scale corpora yield low performance because of the huge contextual overlap between antonym and synonym pairs. We propose a novel approach based entirely on pre-trained embeddings. We hypothesize that pre-trained embeddings encode a blend of lexical-semantic information from which the task-specific information can be distilled using Distiller, a model proposed in this paper. A classifier is then trained on features constructed from the distilled sub-spaces, along with some word-level features, to distinguish antonyms from synonyms. Experimental results show that the proposed model outperforms existing work on antonym–synonym distinction in both speed and performance.
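A toy sketch of the feature-construction step, with a random linear projection standing in for the paper's Distiller and illustrative word vectors (everything here is an assumption about the general shape of the pipeline, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "pre-trained" embeddings; words and vectors are illustrative only.
emb = {w: rng.normal(size=16) for w in ["hot", "cold", "warm", "chilly"]}

# Stand-in for Distiller: a linear projection from the general-purpose
# embedding space into a low-dimensional task-specific sub-space.
P = rng.normal(size=(16, 4)) * 0.3
def distill(vec):
    return vec @ P

def pair_features(w1, w2):
    a, b = distill(emb[w1]), distill(emb[w2])
    # Difference and element-wise product are common word-pair features;
    # a classifier would be trained on these to label the pair as
    # antonym vs. synonym.
    return np.concatenate([a - b, a * b])

feats = pair_features("hot", "cold")
print(feats.shape)  # (8,): feature vector fed to the classifier
```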


2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Anna C. Schapiro ◽  
Elizabeth A. McDevitt ◽  
Lang Chen ◽  
Kenneth A. Norman ◽  
Sara C. Mednick ◽  
...  

2013 ◽  
Vol 56 (5) ◽  
pp. 1567-1578 ◽  
Author(s):  
Alison Behrman ◽  
Ali Akhund

Purpose In this article, the authors examine (a) the effect of semantic context on accentedness, comprehensibility, and intelligibility of Spanish-accented American English (AE) as judged by monolingual AE listeners and (b) the interaction of semantic context and accentedness on comprehensibility and intelligibility. Method Twenty adult native (L1) Spanish speakers proficient in AE and 4 L1 AE speakers (controls) read 48 statements consisting of true–false, semantically meaningful, and semantically anomalous sentences. Eighty monolingual AE listeners assessed accentedness, comprehensibility, and intelligibility of the statements. Results A significant main effect was found for semantic category on all 3 dependent variables. Accents were perceived to be stronger, and both comprehensibility and intelligibility were worse, in semantically anomalous contexts. Speaker data were grouped into strong, mid-level, and mild accents. The interaction between semantic category and accent was significant for both comprehensibility and intelligibility. The effect of semantic context was strongest for strong accents. Intelligibility was excellent for speakers with mid-level accents in true–false and semantically meaningful contexts, and it was excellent for mild accents in all contexts. Conclusions Listeners access semantic information, in addition to phonetic and phonotactic features, in the perception of nonnative speech. Both accent level and semantic context are important in research on foreign-accented speech.


Author(s):  
Catharine DeLong ◽  
Christina Nessler ◽  
Sandra Wright ◽  
Julie Wambaugh

Purpose The purpose of this investigation was to systematically examine outcomes associated with Semantic feature analysis, which is an established treatment for word-retrieval deficits in aphasia. Attributes of the experimental design and stimuli were manipulated to evaluate generalized naming of semantically related and unrelated items. In addition, the study was designed to examine changes in production of semantic information. Method Semantic feature analysis was applied in the context of multiple-baseline designs with 5 persons with chronic aphasia. Experimental items were controlled for semantic category membership, number of naming attempts, and provision of item names. Acquisition, generalization, and maintenance effects were measured in probes of naming performance. Production of semantic information was also measured in response to experimental items and in discourse tasks. Results Treatment was associated with systematic increases in naming of trained items for 4 of the 5 participants. Positive generalization to untrained exemplars of trained categories was found for repeatedly exposed items but not for limited-exposure items. Slight increases in production of semantic content were observed. Conclusion Repeated attempts to name untreated items appeared to play a role in generalization. Provision of the names of untrained items may have enhanced generalized responding for 2 participants.


2010 ◽  
Vol 148-149 ◽  
pp. 558-562
Author(s):  
Xiao Wei Wang ◽  
Jian Feng Li ◽  
Jian Zhi Li ◽  
Fang Yi Li

Because most LCA methods give little consideration to scenario characteristics, this paper presents a new approach that takes scenario-specific information into account when assessing the environmental impact of a product. By analyzing the relation between scenario and environmental impact, the attributes of space, time, and person are extracted as the most basic scenario characteristics. To avoid the difficulty of scientifically modeling the relationship between scenario attributes and environmental impact processes, and for ease of use, the concept of a scenario characteristic coefficient is proposed, and the three types of coefficients are specified in detail using existing data and statistics. A method of LCIA that considers scenario characteristics is then presented by integrating the characteristic coefficients into the LCI processes, and the method is applied to study the scenario characteristics in an LCA of an electric motor.
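A minimal numeric sketch of the proposed adjustment, with all numbers invented for illustration (the paper's actual coefficients and characterisation factors differ): each characterised inventory flow is scaled by scenario characteristic coefficients for space, time, and person:

```python
# Toy life-cycle inventory and characterisation factors (illustrative).
inventory = {"CO2": 120.0, "SO2": 0.8}        # kg emitted
char_factor = {"CO2": 1.0, "SO2": 1.2}        # impact per kg (toy values)

# Scenario characteristic coefficients for the three basic attributes.
coeff = {"space": 1.1, "time": 0.9, "person": 1.0}

# Characterised impact, scaled by the combined scenario coefficient.
scenario_scale = coeff["space"] * coeff["time"] * coeff["person"]
impact = sum(inventory[s] * char_factor[s] for s in inventory) * scenario_scale
print(round(impact, 4))  # 119.7504
```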


2020 ◽  
Author(s):  
Anna K. Kuhlen ◽  
Rasha Abdel Rahman

Abstract: This study investigates, in a joint action setting, a well-established effect in speech production: cumulative semantic interference, an increase in naming latencies when naming a series of semantically related pictures. In a joint action setting, two task partners take turns naming pictures. Previous work in this setting demonstrated that naming latencies increase not only with each semantically related picture speakers named themselves, but also with each picture named by the partner (Hoedemaker, Ernst, Meyer, & Belke, 2017; Kuhlen & Abdel Rahman, 2017). This suggests that speakers pursue lexical access on behalf of their partner. In two electrophysiological experiments (N = 30 each) we investigated the neuro-cognitive signatures of such simulated lexical access. As expected, in both experiments speakers’ naming latency increased with successive naming instances within a given semantic category. Correspondingly, speakers’ EEG showed an increasing posterior positivity between 250 and 400 ms, an ERP modulation typically associated with lexical access. However, unlike in previous experiments, speakers were not influenced by their partner’s picture naming. Accordingly, we found no electrophysiological evidence of lexical access on behalf of the partner. We conclude that speakers do not always represent their partner’s naming response, and we discuss possible factors that may have limited the participants’ evaluation of the task as a joint action.

