attention networks
Recently Published Documents
2022 · Vol 40 (3) · pp. 1-30
Zhiwen Xie, Runjie Zhu, Kunsong Zhao, Jin Liu, Guangyou Zhou

Cross-lingual entity alignment has attracted considerable attention in recent years. Past studies using conventional approaches to match entities share the common problem of missing important structural information beyond entities in the modeling process, which is where graph neural network models step in. Most existing graph neural network approaches model individual knowledge graphs (KGs) separately, with a small number of pre-aligned entities serving as anchors that connect the different KG embedding spaces. However, this design causes several major problems, including performance limits due to the insufficiency of available seed alignments and the neglect of pre-aligned links that carry useful contextual information between nodes. In this article, we propose DuGa-DIT, a dual gated graph attention network with dynamic iterative training, to address these problems in a unified model. DuGa-DIT captures neighborhood and cross-KG alignment features using intra-KG attention and cross-KG attention layers. Through the dynamic iterative process, the cross-KG attention score matrices are updated dynamically, which enables the model to capture more cross-KG information. We conduct extensive experiments on two benchmark datasets and a case study in cross-lingual personalized search. Our experimental results demonstrate that DuGa-DIT outperforms state-of-the-art methods.
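To make the cross-KG attention idea concrete, here is a minimal pure-Python sketch (not the authors' DuGa-DIT implementation): for each entity of a source KG, attention weights over the entities of a target KG are obtained by softmax-normalizing embedding dot products, producing one instance of the cross-KG attention score matrix that the model would then update iteratively. All embeddings and names are illustrative assumptions.

```python
import math

def softmax(xs):
    # numerically stable softmax over one score row
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_kg_attention(src_emb, tgt_emb):
    """For each entity embedding in the source KG, compute attention
    weights over all entities of the target KG from dot-product scores
    (one snapshot of a cross-KG attention score matrix)."""
    matrix = []
    for u in src_emb:
        row = [sum(a * b for a, b in zip(u, v)) for v in tgt_emb]
        matrix.append(softmax(row))
    return matrix

# toy embeddings: two entities per KG, dimension 3 (illustrative values)
src = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
tgt = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
A = cross_kg_attention(src, tgt)
```

In a dynamic iterative training loop, newly confident entries of `A` would be promoted to pseudo-seed alignments and the matrix recomputed, which is the mechanism the abstract describes for capturing more cross-KG information.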

2022 · Vol 11 (2) · pp. 0-0

Transfer learning models have recently shown good results in text-classification tasks such as question answering, summarization, and next-word prediction, but they have not yet been used extensively for hate speech detection. We anticipate that these networks may also give good results on this related text-classification task. This paper introduces a novel method of hate speech detection based on the concept of attention networks, using the BERT attention model. We have conducted exhaustive experiments and evaluation over publicly available datasets using various evaluation metrics (precision, recall, and F1 score), and we show that our model outperforms all the state-of-the-art methods by almost 4%. We also discuss in detail the technical challenges faced during the implementation of the proposed model.
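The evaluation metrics named above have standard definitions; a minimal sketch (toy labels, not the paper's data, with 1 marking the hate-speech class) is:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for one positive class,
    computed from true-positive / false-positive / false-negative counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# illustrative labels: 1 = hate speech, 0 = not
y_true = [1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1]
p, r, f = precision_recall_f1(y_true, y_pred)
```

Here there are two true positives, one false positive, and one false negative, so precision, recall, and F1 all equal 2/3.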

Changmeng Peng, Pei Shu, Xiaoyang Huang, Zhizhong Fu, Xiaofeng Li

2022 · Vol 12
Olivia Campbell, Tamara Vanderwal, Alexander Mark Weber

Background: Temporal fractals are characterized by prominent scale-invariance and self-similarity across time scales. Monofractal analysis quantifies this scaling behavior in a single parameter, the Hurst exponent (H). Higher H reflects greater correlation in the signal structure, which is taken as being more fractal. Previous fMRI studies have observed lower H during conventional tasks relative to resting-state conditions, and shown that H is negatively correlated with task difficulty and novelty. To date, no study has investigated the fractal dynamics of the BOLD signal during naturalistic conditions.
Methods: We performed fractal analysis on Human Connectome Project 7T fMRI data (n = 72, 41 females, mean age 29.46 ± 3.76 years) to compare H across movie-watching and rest.
Results: In contrast to previous work using conventional tasks, we found higher H values for movie relative to rest (mean difference = 0.014; p = 5.279 × 10⁻⁷; 95% CI [0.009, 0.019]). H was significantly higher in movie than rest in the visual, somatomotor, and dorsal attention networks, but was significantly lower during movie in the frontoparietal and default networks. We found no cross-condition differences in test-retest reliability of H. Finally, we found that H of movie-derived stimulus properties (e.g., luminance changes) was fractal, whereas H of head motion estimates was non-fractal.
Conclusions: Overall, our findings suggest that movie-watching induces fractal signal dynamics. In line with recent work characterizing connectivity-based brain-state dynamics during movie-watching, we speculate that these fractal dynamics reflect the configuring and reconfiguring of brain states that occurs during naturalistic processing, and are markedly different from the dynamics observed during conventional tasks.
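The Hurst exponent can be estimated in several ways; the abstract does not specify the authors' estimator, so the following is an illustrative sketch of one standard approach, the aggregated-variance method: for a noise-like series, the variance of block means of size m scales as m**(2H − 2), so H is recovered from the slope of a log-log fit. For white noise the expected value is H ≈ 0.5.

```python
import math
import random
import statistics

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent H via the aggregated-variance method:
    log(var of block means) vs log(block size) has slope 2H - 2."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        means = [statistics.fmean(x[i:i + m])
                 for i in range(0, len(x) - m + 1, m)]
        logs_m.append(math.log(m))
        logs_v.append(math.log(statistics.pvariance(means)))
    # least-squares slope of log-variance against log-block-size
    mb = statistics.fmean(logs_m)
    vb = statistics.fmean(logs_v)
    slope = (sum((a - mb) * (b - vb) for a, b in zip(logs_m, logs_v))
             / sum((a - mb) ** 2 for a in logs_m))
    return 1 + slope / 2

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(8192)]
H = hurst_aggvar(white)  # uncorrelated noise, so H should be near 0.5
```

A persistent (more fractal) signal would yield H > 0.5, matching the abstract's reading of higher H as greater correlation in the signal structure.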

2022 · Vol 5 (1)
Hyebin Lee, Junmo Kwon, Jong-eun Lee, Bo-yong Park, Hyunjin Park

Abstract: Functional hierarchy establishes core axes of the brain, and overweight individuals show alterations in the networks anchored on these axes, particularly in those involved in sensory and cognitive control systems. However, quantitative assessments of hierarchical brain organization in overweight individuals are lacking. Capitalizing on stepwise functional connectivity analysis, we assess altered functional connectivity in overweight individuals relative to healthy-weight controls along the brain hierarchy. Seeding from the brain regions associated with obesity phenotypes, we conduct stepwise connectivity analysis at different step distances and compare functional degrees between the groups. We find strong functional connectivity in the somatomotor and prefrontal cortices in both groups, and both converge to transmodal systems, including the frontoparietal and default-mode networks, as the number of steps increases. Conversely, compared with the healthy-weight group, overweight individuals show a marked decrease in functional degree in somatosensory and attention networks across the steps, whereas visual and limbic networks show an increasing trend. Associating functional degree with eating behaviors, we observe negative associations between functional degrees in sensory networks and hunger- and disinhibition-related behaviors. Our findings suggest that overweight individuals show disrupted functional network organization along the hierarchical axis of the brain, and these results provide insights into behavioral associations.
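Stepwise functional connectivity can be sketched on a binarized connectivity graph: the functional degree of a seed at step distance k counts the regions reached from the seed by at least one path of length k, which powers of the adjacency matrix expose. This toy example (a 4-region chain, not the study's data) is an illustrative assumption, not the authors' pipeline.

```python
def matmul(a, b):
    # square-matrix product over plain lists
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def stepwise_degree(adj, seed, max_steps):
    """Functional degree of `seed` at step distances 1..max_steps:
    the number of other regions reached by at least one path of that
    length in the binarized connectivity graph `adj`."""
    n = len(adj)
    power = [row[:] for row in adj]  # adj**1: paths of length 1
    degrees = []
    for _ in range(max_steps):
        degrees.append(sum(1 for j in range(n)
                           if j != seed and power[seed][j] > 0))
        power = matmul(power, adj)   # extend all paths by one step
    return degrees

# toy binarized connectivity: a chain of regions 0-1-2-3
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
deg = stepwise_degree(adj, seed=0, max_steps=3)
```

From region 0 the chain yields one region at step 1 (region 1), one at step 2 (region 2), and two at step 3 (regions 1 and 3), so `deg` is `[1, 1, 2]`; comparing such degree profiles between groups is the comparison the abstract describes.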

2022
Konrad Bresin, Yara Mekawi, Julia Blayne McDonald, Melanie Bozzay, Wendy Heller

Research identifying the biobehavioral processes that link threat exposure to cognitive alterations can inform treatments designed to reduce perpetration of stress-induced aggression. The present study attempted to specify the effects of relatively predictable (acute) vs. unpredictable (diffuse) threat on two theoretically relevant attention networks, attentional alerting and executive control, and to examine the extent to which aggression proneness moderated those effects. In a sample with high rates of externalizing behaviors (n = 74), we measured event-related brain activity during an attention network test that manipulated cognitive-system activation under distinct contexts of threat (NPU manipulation). The first set of results confirmed that threat exposure alters alerting and executive control. The predictable-threat condition, relative to unpredictable threat, increased visual alerting (alert-cue N1) and decreased attention (P3) to subsequent task-relevant stimuli (flanker). In contrast, overall threat and unpredictable-threat conditions were associated with alerting-related quicker responding and poorer conflict resolution (congruence-related flanker N2 reductions and RT interference). The second set of results indicated that different operationalizations of aggression proneness were inconsistently related to threat-related alterations in cognitive systems. While these results regarding threat-related cognitive alterations in aggression require more study, they nevertheless expand what is known about threat-related modulation of cognition in a sample of individuals with histories of externalizing behaviors.

2022
Jing Zhao, Junkai Wang, Chen Huang, Peipeng Liang

2022
Meizhan Liu, Fengyu Zhou, JiaKai He, Ke Chen, Yang Zhao

Abstract: Aspect-level sentiment classification aims to integrate context to predict the sentiment polarity of a specific aspect in a text, which is useful and popular in applications such as opinion surveys and product recommendation in e-commerce. Many recent studies exploit Long Short-Term Memory (LSTM) networks for aspect-level sentiment classification, but the limitation of long-term dependencies is not solved well, so the semantic correlations between each pair of words in the text are ignored. In addition, the traditional classification model adopts the SoftMax function, based on probability statistics, as its classifier, but ignores the words' features in the semantic space. A Support Vector Machine (SVM) can fully use the feature information and is appropriate for classification in high-dimensional space; however, it considers only the maximum distance between different classes and ignores the similarities between different features of the same class. To address these defects, we propose a novel two-stage architecture named Self Attention Networks and Adaptive SVM (SAN-ASVM) for aspect-level sentiment classification. In the first stage, to overcome the long-term dependency problem, a Multi-Head Self-Attention (MHSA) mechanism is applied to extract the semantic relationships between each pair of words, and a 1-hop attention mechanism is further designed to pay more attention to important words related to the specific aspect. In the second stage, the ASVM is designed to substitute for the SoftMax function to perform sentiment classification, which can effectively perform multi-class classification in high-dimensional space. Extensive experiments are conducted on the SemEval2014, SemEval2016, and Twitter datasets; comparative experiments prove that the SAN-ASVM model obtains better performance.
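The self-attention mechanism that lets the model relate every pair of words regardless of distance can be sketched in a few lines. This is a single-head, scaled dot-product version with identity query/key/value projections, using toy embeddings; it is an illustrative assumption, not the paper's MHSA or 1-hop implementation.

```python
import math

def softmax(row):
    # numerically stable softmax over one score row
    m = max(row)
    es = [math.exp(v - m) for v in row]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X, d):
    """Scaled dot-product self-attention over word embeddings X:
    every word attends to every other word, so pairwise semantic
    correlations are modeled regardless of word distance.
    Returns (attention weights, attended outputs)."""
    scores = [[sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in X]
              for q in X]
    weights = [softmax(row) for row in scores]
    out = [[sum(w * v[j] for w, v in zip(row, X)) for j in range(d)]
           for row in weights]
    return weights, out

# three toy word embeddings of dimension 2
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W, Y = self_attention(X, 2)
```

A multi-head variant runs several such maps with learned projections in parallel and concatenates the outputs; in the SAN-ASVM pipeline the attended representations would then be fed to the adaptive SVM instead of a SoftMax classifier.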
