Perceiving ensemble statistics of novel image sets

Author(s):  
Noam Khayat ◽  
Stefano Fusi ◽  
Shaul Hochstein

Perception, representation, and memory of ensemble statistics have attracted growing interest. Studies have found that, at different abstraction levels, the brain represents similar items as unified percepts. We found that global ensemble perception is automatic and unconscious, affecting later perceptual judgments regarding individual member items. Implicit effects of set mean and range for low-level feature ensembles (size, orientation, brightness) were replicated for high-level category objects. This similarity suggests that analogous mechanisms underlie these extreme levels of abstraction. Here, we bridge the span between visual features and semantic object categories using the identical implicit-perception experimental paradigm for intermediate novel visual-shape categories, constructing ensemble exemplars by introducing systematic variations of a central category base or ancestor. In five experiments, with different item variability, we test automatic representation of ensemble category characteristics and its effect on a subsequent memory task. Results show that observer representation of ensembles includes the group’s central shape, category ancestor (progenitor), or group mean. Observers also easily reject memory of shapes belonging to different categories, i.e., originating from different ancestors. We conclude that complex categories, like simple visual form ensembles, are represented in terms of statistics including a central object, as well as category boundaries. We refer to the model proposed by Benna and Fusi (bioRxiv 624239, 2019), in which memory representation is compressed when related elements are represented by identifying their ancestor and each element’s difference from it. We suggest that ensemble mean perception, like category prototype extraction, might reflect the employment, at different representation levels, of an essential, general representation mechanism.
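For a one-dimensional feature such as size, the ensemble statistics this abstract refers to (the set mean, plus the range that bounds category membership) can be sketched as below. This toy summary and the membership check are our illustration of the concepts, not the authors' model:

```python
def ensemble_stats(sizes):
    """Summary statistics an observer might implicitly extract from a
    set of item sizes: the mean, and the range that bounds the set."""
    return {
        "mean": sum(sizes) / len(sizes),
        "range": (min(sizes), max(sizes)),
    }

def plausible_member(size, stats):
    """A new item is a plausible set member only if it falls inside the
    perceived range (the 'category boundaries' in the abstract)."""
    lo, hi = stats["range"]
    return lo <= size <= hi

stats = ensemble_stats([3.0, 4.0, 5.0, 8.0])  # mean 5.0, range (3.0, 8.0)
```

An item of size 6.0 would then be accepted as a plausible member, while an item of size 10.0, falling outside the range, would be easily rejected.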

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5136
Author(s):  
Bassem Ouni ◽  
Christophe Aussagues ◽  
Saadia Dhouib ◽  
Chokri Mraidha

Sensor-based digital systems for Instrumentation and Control (I&C) of nuclear reactors are quite complex in terms of architecture and functionality. A high-level framework is needed to pre-evaluate the system’s performance, check consistency between different levels of abstraction, and address the concerns of various stakeholders. In this work, we integrate the development process of I&C systems and the involvement of stakeholders within a model-driven methodology. The proposed approach introduces a new architectural framework that defines various concepts, allowing system implementations and encompassing the different development phases, all actors, and system concerns. In addition, we define a new I&C Modeling Language (ICML) and a set of methodological rules needed to build the different architectural framework views. To illustrate this methodology, we extend an open-source systems engineering tool, Eclipse Papyrus, to carry out many automation and verification steps at different levels of abstraction. The architectural framework’s modeling capabilities are validated using a realistic use-case system for the protection of nuclear reactors. The proposed framework reduces overall system development cost by improving the links between different specification tasks and providing a high abstraction level for system components.


2019 ◽  
Author(s):  
Lore Goetschalckx ◽  
Johan Wagemans

This is a preprint. Please find the published, peer-reviewed version of the paper here: https://peerj.com/articles/8169/. Images differ in their memorability in consistent ways across observers. What makes an image memorable is not fully understood to date. Most of the current insight is in terms of high-level semantic aspects, related to the content. However, research still shows consistent differences within semantic categories, suggesting a role for factors at other levels of processing in the visual hierarchy. To aid investigations into this role as well as contributions to the understanding of image memorability more generally, we present MemCat. MemCat is a category-based image set, consisting of 10K images representing five broader, memorability-relevant categories (animal, food, landscape, sports, and vehicle) and further divided into subcategories (e.g., bear). They were sampled from existing source image sets that offer bounding box annotations or more detailed segmentation masks. We collected memorability scores for all 10K images, each score based on the responses of on average 99 participants in a repeat-detection memory task. Replicating previous research, the collected memorability scores show high levels of consistency across observers. Currently, MemCat is the second largest memorability image set and the largest offering a category-based structure. MemCat can be used to study the factors underlying the variability in image memorability, including the variability within semantic categories. In addition, it offers a new benchmark dataset for the automatic prediction of memorability scores (e.g., with convolutional neural networks). Finally, MemCat allows the study of neural and behavioral correlates of memorability while controlling for semantic category.
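A per-image memorability score of the kind collected here can be sketched as a simple hit rate over the participants who saw the image's repeat. This is a minimal illustration only; the actual MemCat scoring procedure may additionally correct for false alarms:

```python
def memorability_score(hits):
    """Fraction of participants who detected the image's repeat.

    `hits` holds one boolean per participant who saw the repeated
    presentation of the image: True if they responded to it (a hit).
    """
    if not hits:
        raise ValueError("need at least one response")
    return sum(hits) / len(hits)

# e.g., 66 of 99 participants detect the repeat:
score = memorability_score([True] * 66 + [False] * 33)
```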


Author(s):  
Rosalie J. Ocker

A series of experiments investigated creativity and the quality of work-product solutions in virtual teams (Ocker, forthcoming; Ocker, 2005; Ocker & Fjermestad, 1998; Ocker et al., 1998; 1996). Across experiments, small teams of about five graduate students interacted for approximately two weeks to determine the high-level requirements and design for a computerized post office (Goel, 1989; Olson et al., 1993). The means of interaction was manipulated in these experiments such that teams interacted via one of the following treatments: (1) asynchronous computer-mediated communication (CMC), (2) synchronous CMC, (3) asynchronous CMC interspersed with face-to-face (FtF) meetings, or (4) a series of traditional FtF meetings without any electronic communication. A repeated finding across experiments was that teams interacting only via asynchronous CMC (that is, teams without any FtF or synchronous communication) produced significantly more creative results than teams in the other treatments. Additionally, asynchronous virtual teams rated high in creativity were generally not the same teams that were judged high in terms of the quality of their deliverable. To further examine these findings, this chapter presents the results of an exploratory study designed to investigate the impact of individual personality facets on team outcomes. The objective of this study is to determine whether differences in team outcomes, in terms of the level of creativity versus the quality of the team deliverable, can be predicted by individual member personality.


Author(s):  
Xiayu Chen ◽  
Ming Zhou ◽  
Zhengxin Gong ◽  
Wei Xu ◽  
Xingyu Liu ◽  
...  

Deep neural networks (DNNs) have attained human-level performance on dozens of challenging tasks via an end-to-end deep learning strategy. Deep learning allows data representations that have multiple levels of abstraction; however, it does not explicitly provide any insights into the internal operations of DNNs. Deep learning's success is appealing to neuroscientists not only as a method for applying DNNs to model biological neural systems but also as a means of adopting concepts and methods from cognitive neuroscience to understand the internal representations of DNNs. Although general deep learning frameworks, such as PyTorch and TensorFlow, could be used to allow such cross-disciplinary investigations, the use of these frameworks typically requires high-level programming expertise and comprehensive mathematical knowledge. A toolbox specifically designed as a mechanism for cognitive neuroscientists to map both DNNs and brains is urgently needed. Here, we present DNNBrain, a Python-based toolbox designed for exploring the internal representations of DNNs as well as brains. Through the integration of DNN software packages and well-established brain imaging tools, DNNBrain provides application programming and command line interfaces for a variety of research scenarios. These include extracting DNN activation, probing and visualizing DNN representations, and mapping DNN representations onto the brain. We expect that our toolbox will accelerate scientific research by both applying DNNs to model biological neural systems and utilizing paradigms of cognitive neuroscience to unveil the black box of DNNs.
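"Mapping DNN representations onto the brain," one of the research scenarios listed above, is commonly done with representational similarity analysis (RSA). The sketch below is a generic, minimal RSA implementation under that assumption, using synthetic data; it is not DNNBrain's actual API:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between the activation patterns of each pair of
    stimuli (rows = stimuli, columns = units or voxels)."""
    return 1.0 - np.corrcoef(patterns)

def rsa(dnn_patterns, brain_patterns):
    """Compare the two RDMs over their upper triangles, via a Spearman
    correlation (Pearson over rank-transformed entries), a common
    choice in RSA."""
    iu = np.triu_indices(dnn_patterns.shape[0], k=1)
    a = rdm(dnn_patterns)[iu]
    b = rdm(brain_patterns)[iu]
    ar = np.argsort(np.argsort(a)).astype(float)  # rank transform
    br = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ar, br)[0, 1]

rng = np.random.default_rng(0)
dnn = rng.standard_normal((8, 50))            # 8 stimuli x 50 DNN units
brain = dnn @ rng.standard_normal((50, 20))   # toy "brain" responses
similarity = rsa(dnn, brain)                  # high when structure is shared
```

In practice the DNN patterns would come from extracted layer activations and the brain patterns from fMRI voxel responses to the same stimuli, which is exactly the pairing the toolbox's interfaces are designed to provide.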


2006 ◽  
Vol 15 (03) ◽  
pp. 391-413 ◽  
Author(s):  
ASIT DAN ◽  
KAVITHA RANGANATHAN ◽  
CATALIN L. DUMITRESCU ◽  
MATEI RIPEANU

In large-scale, distributed systems such as Grids, an agreement between a client and a service provider specifies service level objectives both as expressions of client requirements and as provider assurances. From an application perspective, these objectives should be expressed in a high-level, service or application-specific manner rather than requiring clients to detail the necessary resources. Resource providers on the other hand, expect low-level, resource-specific performance criteria that are uniform across applications and can be easily interpreted and provisioned. This paper presents a framework for service management that addresses this gap between high-level specification of client performance objectives and existing resource management infrastructures. The paper identifies three levels of abstraction for resource requirements a service provider needs to manage, namely: detailed specification of raw resources, virtualization of heterogeneous resources as abstract resources, and performance objectives at an application level. The paper also identifies three key functions for managing service-level agreements, namely: translation of resource requirements across abstraction layers, arbitration in allocating resources to client requests, and aggregation and allocation of resources from multiple lower-level resource managers. One or more of these key functions may be present at each abstraction layer of a service-level manager. Thus, layering and the composition of these functions across abstraction layers enables modeling of a wide array of management scenarios. The framework we present uses service metadata and/or service performance models to map client requirements to resource capabilities, uses business value associated with objectives to arbitrate between competing requests, and allocates resources based on previously negotiated agreements. 
We instantiate this framework for three different scenarios and explain how the architectural principles we introduce are used in the real world.


1992 ◽  
Vol 1 (2) ◽  
pp. 185-203 ◽  
Author(s):  
Peter Jacobson ◽  
Bo Kågström ◽  
Mikael Rännar

CONLAB (CONcurrent LABoratory) is an environment for developing algorithms for parallel computer architectures and for simulating different parallel architectures. A user can experimentally verify and obtain a picture of the real performance of a parallel algorithm executing on a simulated target architecture. CONLAB gives high-level support for expressing computations and communications in a distributed memory multicomputer (DMM) environment. A development methodology for DMM algorithms, based on different levels of abstraction of the problem, the target architecture, and the CONLAB language itself, is presented and illustrated with two examples. Simulation results for, and real experiments on, the Intel iPSC/2 hypercube are presented. Because CONLAB is developed to run on uniprocessor UNIX workstations, it is an educational tool that offers interactive (simulated) parallel computing to a wide audience.


2020 ◽  
Author(s):  
Aspen H. Yoo ◽  
Luigi Acerbi ◽  
Wei ji Ma

What are the contents of working memory? In both behavioral and neural computational models, the working memory representation of a stimulus is typically described by a single number, namely a point estimate of that stimulus. Here, we asked whether people also maintain the uncertainty associated with a memory, and whether people use this uncertainty in subsequent decisions. We collected data in a two-condition orientation change detection task; while both conditions measured whether people used memory uncertainty, only one required maintaining it. For each condition, we compared an optimal Bayesian observer model, in which the observer uses an accurate representation of uncertainty in their decision, to one in which the observer does not. We find that this “Use Uncertainty” model fits better for all participants in both conditions. In the first condition, this result suggests that people use uncertainty optimally in a working memory task when that uncertainty information is available at the time of decision, confirming earlier results. Critically, the results of the second condition suggest that this uncertainty information was maintained in working memory. We test model variants and find that our conclusions do not depend on our assumptions about the observer’s encoding process, inference process, or decision rule. Our results provide evidence that people have uncertainty that reflects their memory precision at an item-specific level, maintain this information over a working memory delay, and use it implicitly in a way consistent with an optimal observer. These results challenge existing computational models of working memory to update their frameworks to represent uncertainty.
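The "Use Uncertainty" idea can be sketched as a Bayesian observer whose evidence for reporting a change scales with the item's memory noise: the same orientation difference is strong evidence of a change when memory is precise but weak evidence when it is not. All parameters below are illustrative choices of ours, not the paper's fitted model:

```python
import math

def p_change(d, sigma, change_sd=20.0, prior=0.5):
    """Posterior probability that the test item changed, given a
    measured orientation difference `d` (deg) and item-specific memory
    noise `sigma` (deg). A change is assumed to add Gaussian jitter
    with sd `change_sd`. Illustrative numbers, not fitted parameters."""
    def normpdf(x, s):
        return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    like_change = normpdf(d, math.hypot(sigma, change_sd))
    like_same = normpdf(d, sigma)
    return prior * like_change / (prior * like_change + (1.0 - prior) * like_same)

# The same 10-degree difference is near-certain evidence of a change
# for a precise memory, but ambiguous for an imprecise one:
precise = p_change(10.0, 2.0)     # sigma = 2 deg
imprecise = p_change(10.0, 15.0)  # sigma = 15 deg
```

An observer who ignores uncertainty would instead apply one fixed criterion on the difference for every item, which is the contrast the model comparison above exploits.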


PeerJ ◽  
2019 ◽  
Vol 7 ◽  
pp. e8169 ◽  
Author(s):  
Lore Goetschalckx ◽  
Johan Wagemans

Images differ in their memorability in consistent ways across observers. What makes an image memorable is not fully understood to date. Most of the current insight is in terms of high-level semantic aspects, related to the content. However, research still shows consistent differences within semantic categories, suggesting a role for factors at other levels of processing in the visual hierarchy. To aid investigations into this role as well as contributions to the understanding of image memorability more generally, we present MemCat. MemCat is a category-based image set, consisting of 10K images representing five broader, memorability-relevant categories (animal, food, landscape, sports, and vehicle) and further divided into subcategories (e.g., bear). They were sampled from existing source image sets that offer bounding box annotations or more detailed segmentation masks. We collected memorability scores for all 10K images, each score based on the responses of on average 99 participants in a repeat-detection memory task. Replicating previous research, the collected memorability scores show high levels of consistency across observers. Currently, MemCat is the second largest memorability image set and the largest offering a category-based structure. MemCat can be used to study the factors underlying the variability in image memorability, including the variability within semantic categories. In addition, it offers a new benchmark dataset for the automatic prediction of memorability scores (e.g., with convolutional neural networks). Finally, MemCat allows the study of neural and behavioral correlates of memorability while controlling for semantic category.


2015 ◽  
Vol 6 (2) ◽  
pp. 29-58 ◽  
Author(s):  
Vesa Kuikka ◽  
Juha-Pekka Nikkarila ◽  
Marko Suojanen

Our goal is to gain a better understanding of the different kinds of dependencies behind high-level capability areas. The models are suitable for investigating present-state capabilities or future developments of capabilities in the context of technology forecasting. Three levels are necessary for a model describing the effects of technologies on military capabilities: capability areas, systems, and technologies. The contribution of this paper is to present one possible model for interdependencies between technologies. Modelling interdependencies between technologies is the last building block in constructing a quantitative model for technological forecasting that includes the necessary levels of abstraction. This study supplements our previous research, and as a result we present a model for the whole process of capability modelling. As in our earlier studies, capability is defined as the probability of a successful task or operation, or of the proper functioning of a system. In order to obtain numerical data to demonstrate our model, we administered a questionnaire to a group of defence technology researchers, asking about the interdependencies between seven representative technologies. Because of the small number of questionnaire participants and the general uncertainties of subjective evaluations, only rough conclusions can be drawn from the numerical results.
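The definition of capability as a probability of success can be illustrated with a toy series-parallel reliability model, where redundant subsystems raise the success probability and chained subsystems lower it. This is our illustration of the definition only, not the authors' interdependency model:

```python
def series(probs):
    """All subsystems must function: success probabilities multiply."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def parallel(probs):
    """Redundant subsystems: the task fails only if all of them fail."""
    fail = 1.0
    for p in probs:
        fail *= 1.0 - p
    return 1.0 - fail

# Two redundant sensors (0.9 each) feeding one processor (0.95):
capability = series([parallel([0.9, 0.9]), 0.95])  # 0.99 * 0.95 = 0.9405
```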


2021 ◽  
Author(s):  
Benjamin Goecke ◽  
Klaus Oberauer

In tests of working memory with verbal or spatial materials, repeating the same memory sets across trials leads to improved memory performance. This well-established “Hebb repetition effect” could not be shown for visual materials. This absence of the Hebb effect can be explained in two ways: either people fail to acquire a long-term memory representation of the repeated memory sets, or they acquire such long-term memory representations but fail to use them during the working memory task. In two experiments (N1 = 18, N2 = 30), we aimed to decide between these two possibilities by manipulating long-term memory knowledge of some of the memory sets used in a change-detection task. Before the change-detection test, participants learned three arrays of colors to criterion. The subsequent change-detection test contained both previously learned and new color arrays. Change-detection performance was better on previously learned than on new arrays, showing that long-term memory is used in change detection.

