Space modulates cross-domain transfer of abstract rules in infants

2022 ◽  
Vol 213 ◽  
pp. 105270
Author(s):  
Hermann Bulf ◽  
Chiara Capparini ◽  
Elena Nava ◽  
Maria Dolores de Hevia ◽  
Viola Macchi Cassia
Author(s):  
Simin Chen ◽  
Haopeng Lei ◽  
Mingwen Wang ◽  
Fan Yang ◽  
Xiangjian He ◽  
...  

Author(s):  
Arkadipta De ◽  
Dibyanayan Bandyopadhyay ◽  
Baban Gain ◽  
Asif Ekbal

Fake news classification is one of the most interesting problems and has attracted considerable attention from researchers in artificial intelligence, natural language processing, and machine learning (ML). Most current work on fake news detection targets the English language, which limits its usability, especially outside the English-literate population. Although multilingual web content has grown, fake news classification in low-resource languages remains a challenge due to the lack of annotated corpora and tools. This article proposes an effective neural model based on multilingual Bidirectional Encoder Representations from Transformers (BERT) for domain-agnostic multilingual fake news classification. A wide variety of experiments, including language-specific and domain-specific settings, are conducted. The proposed model achieves high accuracy in both domain-specific and domain-agnostic experiments and outperforms the current state-of-the-art models. We also perform experiments in zero-shot settings to assess the effectiveness of language-agnostic feature transfer across different languages, with encouraging results, and cross-domain transfer experiments to assess language-independent feature transfer. In addition, we offer a multilingual, multidomain fake news detection dataset covering five languages and seven domains that could be useful for research and development in resource-scarce scenarios.
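As a rough illustration of the kind of setup the abstract describes, the sketch below fine-tunes multilingual BERT for binary fake news classification with Hugging Face Transformers. The model name, file names (train.csv, test.csv), column names, and hyperparameters are illustrative assumptions, not the authors' configuration.

# Minimal sketch: fine-tuning multilingual BERT for binary fake news
# classification. File names, columns, and hyperparameters are assumed.
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2)  # 0 = real, 1 = fake

# Hypothetical CSV files with "text" and "label" columns spanning
# several languages and domains.
data = load_dataset("csv", data_files={"train": "train.csv",
                                       "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-fakenews",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
)
trainer.train()

# Evaluate on the held-out split; for the zero-shot cross-lingual
# setting, this split would be in a language unseen during fine-tuning.
print(trainer.evaluate())

In the zero-shot setting the abstract mentions, the same fine-tuned model would simply be evaluated on a test set in a language absent from training, relying on the shared multilingual representations.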


Author(s):  
Shu Jiang ◽  
Zuchao Li ◽  
Hai Zhao ◽  
Bao-Liang Lu ◽  
Rui Wang

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test datasets and has made remarkable progress. However, there are innumerable real-world scenarios not covered by these datasets, namely out-of-domain data. As a result, parsers that perform well on in-domain data usually suffer significant performance degradation on out-of-domain data. Therefore, to adapt existing high-performing in-domain parsers to a new domain, cross-domain transfer learning methods are essential. This paper examines two scenarios for cross-domain transfer learning: semi-supervised and unsupervised. Specifically, we adopt the pre-trained language model BERT for training on the source-domain (in-domain) data at the subword level and introduce self-training methods derived from tri-training for these two scenarios. The evaluation results on the NLPCC-2019 shared task and the universal dependency parsing task indicate the effectiveness of the adopted approaches for cross-domain transfer learning and show the potential of self-training for cross-lingual transfer learning.
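The self-training component could look roughly like the schematic below: train a parser on gold source-domain data, pseudo-label unlabeled target-domain sentences, keep only high-confidence trees, and retrain on the union. The callables train_fn and parse_fn, the confidence threshold, and the number of rounds are hypothetical placeholders; the paper's actual tri-training-derived procedure is more involved.

# Schematic self-training loop for cross-domain parser adaptation.
# train_fn and parse_fn are caller-supplied placeholders standing in for
# BERT-based parser training and confidence-scored parsing.
def self_train(source_labeled, target_unlabeled, train_fn, parse_fn,
               rounds=3, threshold=0.9):
    # Initial parser trained only on gold source-domain (in-domain) trees.
    parser = train_fn(source_labeled)

    for _ in range(rounds):
        pseudo_labeled = []
        # Pseudo-label unlabeled target-domain sentences and keep only
        # trees whose confidence clears the threshold.
        for sentence in target_unlabeled:
            tree, confidence = parse_fn(parser, sentence)
            if confidence >= threshold:
                pseudo_labeled.append((sentence, tree))

        # Retrain on gold source data plus pseudo-labeled target data,
        # nudging the parser toward the new domain.
        parser = train_fn(source_labeled + pseudo_labeled)

    return parser

Raising the threshold trades coverage of the target domain for cleaner pseudo-labels; the tri-training idea the paper builds on instead uses agreement between multiple parsers as the selection signal.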


2020 ◽  
Vol 5 (3) ◽  
pp. 4148-4155
Author(s):  
Dandan Zhang ◽  
Zicong Wu ◽  
Junhong Chen ◽  
Anzhu Gao ◽  
Xu Chen ◽  
...  

Author(s):  
Xiaoshan Yang ◽  
Tianzhu Zhang ◽  
Changsheng Xu ◽  
Ming-Hsuan Yang

2020 ◽  
pp. 129162
Author(s):  
Ruonan Yi ◽  
Jia Yan ◽  
Debo Shi ◽  
Yutong Tian ◽  
Feiyue Chen ◽  
...  
