visual concept
Recently Published Documents


TOTAL DOCUMENTS: 164 (FIVE YEARS: 24)
H-INDEX: 19 (FIVE YEARS: 1)

2022
Author(s): Laurent Caplette, Nicholas Turk-Browne

Revealing the contents of mental representations is a longstanding goal of cognitive science. However, there is currently no general framework for providing direct access to representations of high-level visual concepts. We asked participants to indicate what they perceived in images synthesized from random visual features in a deep neural network. We then inferred a mapping between the semantic features of their responses and the visual features of the images. This allowed us to reconstruct the mental representation of virtually any common visual concept, both those reported and others extrapolated from the same semantic space. We successfully validated 270 of these reconstructions as containing the target concept in a separate group of participants. The visual-semantic mapping uncovered with our method further generalized to new stimuli, participants, and tasks. Finally, it allowed us to reveal how the representations of individual observers differ from each other and from those of neural networks.
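As a rough sketch of how such a visual-semantic mapping could be inferred, one can regress the deep-network features of the synthesized images onto semantic embeddings of the participants' responses, then reconstruct a concept by pushing its semantic embedding through the fitted mapping. Everything below (the ridge model, the dimensionalities, the placeholder data) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder data, one row per trial:
# X: semantic embeddings of participants' free responses
#    (e.g., from a word-embedding model), shape (n_trials, n_semantic_dims)
# Y: deep-network feature vectors of the random images shown on those
#    trials, shape (n_trials, n_visual_dims)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))
Y = rng.normal(size=(500, 1024))

# Fit a linear visual-semantic mapping (multi-output ridge regression).
mapping = Ridge(alpha=1.0).fit(X, Y)

# Reconstruct any concept: predict its visual features from its semantic
# embedding; a network-inversion step (not shown) would then synthesize
# the reconstructed image from these features.
concept_embedding = rng.normal(size=(1, 300))  # e.g., embedding of "dog"
visual_features = mapping.predict(concept_embedding)  # shape (1, 1024)
```

Because the mapping is defined over the whole semantic space, it can also be evaluated at embeddings of concepts no participant ever reported, which is what makes the extrapolated reconstructions possible.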


2022
Vol 2022, pp. 1-9
Author(s): Junlong Feng, Jianping Zhao

Recent image captioning models based on the encoder-decoder framework have achieved remarkable success in generating humanlike sentences. However, the explicit separation between encoder and decoder creates a disconnection between the image and the sentence. This often yields a rough description: the generated caption covers the main instances but unexpectedly neglects additional objects and scenes, which reduces the consistency between the caption and the image. To address this issue, we propose an image captioning system with context-fused guidance. It incorporates regional and global image representations as compositional visual features to learn the objects and attributes in images. To integrate image-level semantic information, visual concepts are employed. To avoid misleading the decoding, a context fusion gate is introduced that computes the textual context by selectively aggregating information from the visual concepts and the word embeddings. The context-fused image guidance is then formulated from the compositional visual features and the textual context, providing the decoder with informative semantic knowledge. Finally, a captioner with a two-layer LSTM architecture generates the captions. Moreover, to overcome exposure bias, we train the proposed model through sequential decision-making. Experiments on the MS COCO dataset show the outstanding performance of our work, and linguistic analysis demonstrates that our model improves the consistency between the captions and the image.
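One plausible form of the context fusion gate described above is a learned sigmoid gate that interpolates between projected visual-concept features and the current word embedding. The module below is a hedged PyTorch sketch, with all layer names and dimensions assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

class ContextFusionGate(nn.Module):
    """Sketch: selectively aggregate visual-concept features and word
    embeddings into a textual context vector (all shapes are assumptions)."""
    def __init__(self, concept_dim: int, embed_dim: int, ctx_dim: int):
        super().__init__()
        self.proj_concept = nn.Linear(concept_dim, ctx_dim)
        self.proj_word = nn.Linear(embed_dim, ctx_dim)
        self.gate = nn.Linear(concept_dim + embed_dim, ctx_dim)

    def forward(self, concept: torch.Tensor, word_emb: torch.Tensor):
        # Gate in [0, 1] decides, per dimension, how much the visual
        # concept vs. the word embedding contributes to the context.
        g = torch.sigmoid(self.gate(torch.cat([concept, word_emb], dim=-1)))
        return g * self.proj_concept(concept) + (1 - g) * self.proj_word(word_emb)

fusion = ContextFusionGate(concept_dim=512, embed_dim=300, ctx_dim=512)
context = fusion(torch.randn(8, 512), torch.randn(8, 300))  # (8, 512)
```

The resulting context vector would then be combined with the compositional visual features to form the guidance fed to the two-layer LSTM decoder.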


Author(s): Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, ...

We propose a novel training methodology, Concept Group Learning (CGL), that encourages the training of interpretable CNN filters by partitioning the filters in each layer into concept groups, each of which is trained to learn a single visual concept. We achieve this through a novel regularization strategy that forces filters in the same group to be active in similar image regions within a given layer. We additionally use a regularizer that encourages a sparse weighting of the concept groups in each layer, so that a few concept groups can carry greater importance than others. We quantitatively evaluate CGL's model interpretability using standard interpretability evaluation techniques and find that our method increases interpretability scores in most cases. Qualitatively, we compare the image regions that are most active under filters learned with CGL versus filters learned without it, and find that CGL activation regions concentrate more strongly around semantically relevant features.
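To make the two regularizers concrete, a sketch of how they might be computed on one convolutional layer is given below: a within-group penalty that pulls each filter's spatial attention map toward its group's mean map, and an L1 penalty on learnable per-group importance weights. This illustrates the general idea under assumed shapes, not the authors' exact formulation.

```python
import torch

def concept_group_penalties(activations, group_ids, group_weights):
    """activations:   (batch, n_filters, H, W) feature maps of one layer
    group_ids:     (n_filters,) concept-group index of each filter
    group_weights: (n_groups,) learnable per-group importance weights"""
    # Per-filter spatial attention, normalized over locations.
    attn = activations.abs().flatten(2)                    # (B, F, H*W)
    attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)

    # Within-group similarity: filters in the same group should be
    # active in similar regions, so penalize deviation from the
    # group's mean attention map.
    sim_loss = activations.new_zeros(())
    for g in group_ids.unique():
        group_attn = attn[:, group_ids == g]               # (B, |g|, H*W)
        mean_attn = group_attn.mean(dim=1, keepdim=True)
        sim_loss = sim_loss + ((group_attn - mean_attn) ** 2).mean()

    # Sparse weighting of concept groups: only a few groups should
    # carry large importance in each layer.
    sparsity_loss = group_weights.abs().sum()
    return sim_loss, sparsity_loss
```

Both penalties would be added, each with its own coefficient, to the standard task loss during training.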


2021
Vol 2 (1), pp. 40-45
Author(s): Adrian Pratama, Alfiansyah Zulkarnain

This paper documents the process of designing a visual concept that assimilates traditional clothing designs with science fiction for an adaptation of Tere Liye's novel Bumi, following Armand Serrano's sequence of processes for making concept art. The design process aims to create a visual concept that accords with the content and context of the story and that can serve as a reference for a visual appearance coherent with the source of the adaptation. The study began by analyzing the content of the collected literature using McCloud's Backstory method, continued with context analysis based on literature studies and Charles Sanders Peirce's semiotics, and concluded by searching for and developing keywords using the Words Association Network method.


2021
Author(s): Bria Long, Judith Fan, Renata Chai, Michael C. Frank

To what extent do visual concepts of dogs, cars, and clocks change across childhood? We hypothesized that as children progressively learn which features best distinguish visual concepts from one another, they also improve their ability to connect this knowledge with external representations. To examine this possibility, we investigated developmental changes in children's ability to produce and recognize drawings of common object categories. First, we recruited children aged 2-10 years to produce drawings of 48 categories via a free-standing kiosk in a children's museum, and we measured how recognizable these >37K drawings were using a deep convolutional neural network model of object recognition. Second, we recruited other children across the same age range to identify the drawn category in a subset of these drawings via "guessing games" at the same kiosk. We found consistent developmental gains both in children's ability to include diagnostic visual features in their drawings and in their ability to use these features when recognizing other children's drawings. Our results suggest that children's ability to connect internal and external representations of visual concepts improves gradually across childhood, and they imply that the developmental trajectory of visual concept learning may be more protracted than previously thought.
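A minimal sketch of the recognizability measurement: feed each drawing to a pretrained object-recognition CNN and read off the probability it assigns to the drawn category. The choice of VGG-19 with ImageNet weights and the preprocessing below are assumptions for illustration, not the study's exact pipeline.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Illustrative model choice; the paper's exact network may differ.
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def category_probability(drawing_path: str, class_index: int) -> float:
    """Probability the CNN assigns to the target category for a drawing."""
    img = Image.open(drawing_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return torch.softmax(logits, dim=-1)[0, class_index].item()
```

Averaging such scores per age group would give one way to quantify developmental gains in how recognizable children's drawings are to a fixed observer.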

