Domain-Specific Storage/Processing and Domain-General Cross-Domain Processing in Working Memory

2009 ◽  
Vol 21 (1) ◽  
pp. 1-21 ◽  
Author(s):  
Hwia Park ◽  
Hyung-Chul Li


2014 ◽  
Vol 47 (2) ◽  
pp. 174-190 ◽  
Author(s):  
Zhisheng Wen

Working memory (WM) generally refers to the human ability to temporarily maintain and manipulate a limited amount of information in immediate consciousness when carrying out complex cognitive tasks such as problem-solving and language comprehension. Though much controversy has surrounded the WM concept since its introduction by Baddeley & Hitch (1974), an increasing number of cognitive psychologists have accepted WM as a multi-component system comprising both domain-specific storage mechanisms and domain-general executive functions (Miyake & Shah 1999; Baddeley 2012; Williams 2012). Such a fractionated view of this cognitive construct manifests itself clearly in distinct strands of WM-language research, where two contrasting research paradigms have emerged (Wen 2012).


2020 ◽  
Author(s):  
Geoffrey Schau ◽  
Erik Burlingame ◽  
Young Hwan Chang

Deep learning systems have emerged as powerful mechanisms for learning domain translation models. However, in many cases, complete information in one domain is assumed to be necessary for sufficient cross-domain prediction. In this work, we develop a formal justification for domain-specific information separation in a simple linear case and illustrate that a self-supervised approach enables domain translation between data domains while filtering out domain-specific data features. We introduce a novel approach to identify domain-specific information from sets of unpaired measurements in complementary data domains by considering a deep learning cross-domain autoencoder architecture designed to learn shared latent representations of data while enabling domain translation. We introduce an orthogonal gate block designed to enforce orthogonality of input feature sets by explicitly removing non-sharable information specific to each domain, and we illustrate separability of domain-specific information on a toy dataset.
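A minimal PyTorch sketch may help make this kind of architecture concrete: each domain gets a shared and a private (domain-specific) encoder head, and a penalty on the cross-covariance between the two codes plays the role of the orthogonality constraint. The module names, layer sizes, and the Frobenius-norm penalty below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a cross-domain autoencoder with an orthogonality penalty
# between shared and domain-specific codes. All names and sizes here are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainAutoencoder(nn.Module):
    def __init__(self, dim_in, dim_shared, dim_private):
        super().__init__()
        # One encoder head for the shared code, one for the private code.
        self.enc_shared = nn.Sequential(
            nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_shared))
        self.enc_private = nn.Sequential(
            nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_private))
        # Decoder reconstructs the input from both codes; translation would
        # decode this domain's shared code with the *other* domain's decoder.
        self.dec = nn.Sequential(
            nn.Linear(dim_shared + dim_private, 64), nn.ReLU(),
            nn.Linear(64, dim_in))

    def forward(self, x):
        s, p = self.enc_shared(x), self.enc_private(x)
        return s, p, self.dec(torch.cat([s, p], dim=1))

def separation_loss(s, p):
    # Penalize the cross-covariance between shared and private codes,
    # a common way to enforce an orthogonality-style constraint.
    s = s - s.mean(dim=0)
    p = p - p.mean(dim=0)
    return (s.t() @ p).pow(2).sum() / s.shape[0] ** 2

model = DomainAutoencoder(dim_in=30, dim_shared=8, dim_private=4)
x = torch.randn(16, 30)                      # one batch from one domain
s, p, recon = model(x)
loss = F.mse_loss(recon, x) + 0.1 * separation_loss(s, p)
loss.backward()
```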


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Julie Hicks Patrick ◽  
Jenessa C. Steele ◽  
S. Melinda Spencer

The primary aim of this study was to examine the contributions of individual characteristics and strategic processing to the prediction of decision quality. Data were provided by 176 adults, ages 18 to 93 years, who completed computerized decision-making vignettes and a battery of demographic and cognitive measures. Using a 4-step hierarchical linear regression analysis, we examined how age, domain-specific experience, working memory, and three measures of strategic information search contribute to the prediction of solution quality. Working memory and two measures of strategic processing uniquely contributed to the variance explained. Results are discussed in terms of potential advances to both theory and intervention efforts.
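For readers unfamiliar with the method, a hierarchical regression enters predictor blocks in a fixed order and asks how much each block adds to the explained variance. The sketch below mirrors the 4-step structure described above using statsmodels on synthetic data; the variable names are illustrative stand-ins for the study's actual measures.

```python
# Sketch of the 4-step hierarchical regression described above, run with
# statsmodels on synthetic data; the variable names are illustrative
# stand-ins for the study's measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 176
df = pd.DataFrame({
    "age": rng.uniform(18, 93, n),
    "experience": rng.normal(size=n),       # domain-specific experience
    "working_memory": rng.normal(size=n),
    "search_breadth": rng.normal(size=n),   # strategic information search
    "search_depth": rng.normal(size=n),
    "solution_quality": rng.normal(size=n),
})

# Each step adds a predictor block on top of the previous ones; the
# increment in R-squared shows the block's unique contribution.
steps = [
    "solution_quality ~ age",
    "solution_quality ~ age + experience",
    "solution_quality ~ age + experience + working_memory",
    "solution_quality ~ age + experience + working_memory"
    " + search_breadth + search_depth",
]

prev_r2 = 0.0
for i, formula in enumerate(steps, start=1):
    fit = smf.ols(formula, data=df).fit()
    print(f"Step {i}: R^2 = {fit.rsquared:.3f}"
          f" (delta = {fit.rsquared - prev_r2:.3f})")
    prev_r2 = fit.rsquared
```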


Author(s):  
Arkadipta De ◽  
Dibyanayan Bandyopadhyay ◽  
Baban Gain ◽  
Asif Ekbal

Fake news classification is one of the most interesting problems that has attracted huge attention from researchers in artificial intelligence, natural language processing, and machine learning (ML). Most of the current work on fake news detection is in the English language, which has limited its widespread usability, especially outside the English-literate population. Although there has been a growth in multilingual web content, fake news classification in low-resource languages is still a challenge due to the non-availability of annotated corpora and tools. This article proposes an effective neural model based on multilingual Bidirectional Encoder Representations from Transformers (BERT) for domain-agnostic multilingual fake news classification. A wide variety of experiments, including language-specific and domain-specific settings, are conducted. The proposed model achieves high accuracy in domain-specific and domain-agnostic experiments, and it also outperforms the current state-of-the-art models. We perform experiments in zero-shot settings to assess the effectiveness of language-agnostic feature transfer across different languages, showing encouraging results. Cross-domain transfer experiments are also performed to assess language-independent feature transfer of the model. We also offer a multilingual multidomain fake news detection dataset of five languages and seven different domains that could be useful for research and development in resource-scarce scenarios.
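As a rough illustration of the general approach (not the authors' released model), a multilingual-BERT classifier can be assembled from generic HuggingFace components; the checkpoint and two-label head below are assumptions for the sketch. Zero-shot cross-lingual transfer then amounts to fine-tuning such a model on labeled data in one language and evaluating it directly on another, relying on mBERT's shared multilingual representations.

```python
# Sketch of a multilingual-BERT fake news classifier built from generic
# HuggingFace components; the checkpoint and two-label head are assumptions
# for illustration, not the authors' released model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # 0 = real, 1 = fake

# The same tokenizer and encoder handle text in any of mBERT's languages,
# which is what makes zero-shot cross-lingual transfer possible.
texts = [
    "Breaking: miracle cure approved overnight",   # English
    "El gobierno confirma el nuevo presupuesto",   # Spanish
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)  # the classification head is untrained: fine-tune before use
```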


2020 ◽  
pp. 150-174 ◽  
Author(s):  
André Vandierendonck

The working memory model with distributed executive control accounts for the interactions between working memory and multi-tasking performance. The working memory system supports planned actions by relying on two capacity-limited domain-general and two time-limited domain-specific modules. Domain-general modules are the episodic buffer and the executive module. The episodic buffer stores multimodal representations and uses attentional refreshment to counteract information loss and to consolidate information in episodic long-term memory. The executive module maintains domain-general information relevant for the current task. The phonological buffer and the visuospatial module are domain specific; the former uses inner speech to maintain and to rehearse phonological information, whereas the latter holds visual and spatial representations active by means of image revival. For its operation, working memory interacts with declarative and procedural long-term memory, gets input from sensory registers, and uses the motor system for output.
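The module layout described above can be caricatured in a few lines of Python, purely as a toy schematic rather than the author's formal model: capacity-limited stores displace old items, time-limited stores lose items unless refreshment or rehearsal resets their clock.

```python
# Toy schematic (not the author's formal model) of the four modules:
# capacity-limited stores displace old items; time-limited stores lose
# items unless refreshment or rehearsal resets their clock.
import time

class Store:
    def __init__(self, name, capacity=None, decay_s=None):
        self.name, self.capacity, self.decay_s = name, capacity, decay_s
        self.items = {}  # item -> time of last refresh

    def encode(self, item):
        if self.capacity is not None and len(self.items) >= self.capacity:
            self.items.pop(next(iter(self.items)))  # displace the oldest item
        self.items[item] = time.monotonic()

    def refresh(self, item):
        if item in self.items:
            self.items[item] = time.monotonic()  # refreshment / rehearsal

    def active(self):
        now = time.monotonic()
        return [i for i, t in self.items.items()
                if self.decay_s is None or now - t < self.decay_s]

# Capacity-limited domain-general modules ...
episodic_buffer = Store("episodic buffer", capacity=4)
executive = Store("executive module", capacity=4)
# ... and a time-limited domain-specific module.
phonological = Store("phonological buffer", decay_s=2.0)

phonological.encode("7-3-5-9")
time.sleep(1.5); phonological.refresh("7-3-5-9")  # inner-speech rehearsal
time.sleep(1.5); print(phonological.active())     # survives only because refreshed
```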


Author(s):  
Slava Kalyuga

One of the major components of our cognitive architecture, working memory, becomes overloaded if more than a few chunks of information are processed simultaneously. For example, we all experience this cognitive overload when trying to keep an unfamiliar telephone number in memory or to add two four-digit numbers in the absence of pen and paper. Processing limitations of working memory of this nature represent a major factor influencing the effectiveness of human learning and performance, particularly in complex environments that require concurrent performance of multiple tasks. The learner's prior domain-specific knowledge structures and associated levels of expertise are considered a means of reducing these limitations and guiding high-level knowledge-based cognitive activities. One of the most important results of studies in human cognition is that available knowledge is the single most significant learner cognitive characteristic influencing learning and cognitive performance. Understanding the key role of the long-term memory knowledge base in our cognition is important to the successful management of cognitive load in multimedia learning.


2020 ◽  
Vol 34 (07) ◽  
pp. 11386-11393 ◽  
Author(s):  
Shuang Li ◽  
Chi Liu ◽  
Qiuxia Lin ◽  
Binhui Xie ◽  
Zhengming Ding ◽  
...  

Tremendous research efforts have been made to advance deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models only focus on aligning feature representations of task-specific layers across domains while using a fully shared convolutional architecture for source and target. However, we argue that such strongly shared convolutional layers might be harmful for domain-specific feature learning when the source and target data distributions differ to a large extent. In this paper, we relax the shared-convnets assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which aims to excite distinct convolutional channels with a domain conditioned channel attention mechanism. As a result, the critical low-level domain-dependent knowledge can be explored appropriately. As far as we know, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we further deploy domain conditioned feature correction blocks after task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.
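To make the channel-attention idea concrete, here is a minimal sketch in the spirit of DCAN: a squeeze-and-excitation-style gate with a separate excitation branch per domain, so source and target inputs can excite different convolutional channels. Layer sizes, the reduction factor, and the simple string-based routing are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of domain-conditioned channel attention in the spirit of DCAN:
# a squeeze-and-excitation-style gate with a separate excitation branch
# per domain. Layer sizes and the routing scheme are assumptions.
import torch
import torch.nn as nn

class DomainConditionedAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        def excitation():
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),   # squeeze: global channel context
                nn.Flatten(),
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.source_gate = excitation()    # branch used for source batches
        self.target_gate = excitation()    # branch used for target batches

    def forward(self, x, domain):
        gate = self.source_gate if domain == "source" else self.target_gate
        w = gate(x).view(x.size(0), -1, 1, 1)  # per-channel weights in (0, 1)
        return x * w                            # excite/suppress channels

feat = torch.randn(8, 64, 14, 14)              # a conv feature map
attn = DomainConditionedAttention(channels=64)
out_src = attn(feat, domain="source")
out_tgt = attn(feat, domain="target")
```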

