Cross-lingual entity alignment has attracted considerable attention in recent years. Past studies using conventional approaches to match entities share the common problem of missing important structural information beyond entities in the modeling process, a gap that graph neural network models can fill. Most existing graph neural network approaches model individual knowledge graphs (KGs) separately, with a small set of pre-aligned entities serving as anchors to connect the different KG embedding spaces. However, this design causes several major problems: performance is restrained by the scarcity of available seed alignments, and pre-aligned links, which carry useful contextual information between nodes, are ignored. In this article, we propose DuGa-DIT, a dual gated graph attention network with dynamic iterative training, to address these problems in a unified model. DuGa-DIT captures neighborhood and cross-KG alignment features through intra-KG attention and cross-KG attention layers. With the dynamic iterative process, we can dynamically update the cross-KG attention score matrices, which enables the model to capture more cross-KG information. We conduct extensive experiments on two benchmark datasets and a case study in cross-lingual personalized search. Our experimental results demonstrate that DuGa-DIT outperforms state-of-the-art methods.
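A minimal NumPy sketch of the two attention types named in this abstract (dot-product scoring, the function names, and the toy dimensions are illustrative assumptions, not the authors' implementation; the cross-KG score matrix `S` is the quantity that dynamic iterative training would repeatedly re-estimate):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_kg_attention(H, A):
    """Aggregate neighborhood features inside one KG.
    H: (n, d) node embeddings; A: (n, n) adjacency (1 = edge)."""
    scores = H @ H.T                         # pairwise similarity
    scores = np.where(A > 0, scores, -1e9)   # mask non-neighbors
    return softmax(scores, axis=1) @ H

def cross_kg_attention(H1, H2):
    """Attend from KG1 entities to KG2 entities; the score matrix S
    is what an iterative process would dynamically update."""
    S = softmax(H1 @ H2.T, axis=1)           # (n1, n2) cross-KG scores
    return S, S @ H2

rng = np.random.default_rng(0)
H1, H2 = rng.normal(size=(4, 8)), rng.normal(size=(5, 8))
A1 = (rng.random((4, 4)) > 0.5).astype(float)
np.fill_diagonal(A1, 1)                      # self-loops keep rows non-empty
intra = intra_kg_attention(H1, A1)
S, cross = cross_kg_attention(H1, H2)
```

In the full model, each layer's output would be gated and fused with the input features; here the two attention operations are shown in isolation.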
We study the effect of contextual information obtained from a user’s digital trace on Web search performance. Contextual information is modeled using Dirichlet–Hawkes processes (DHP) and used in augmenting Web search queries. The context is captured by monitoring all naturally occurring user behavior using continuous 24/7 recordings of the screen and associating the context with the queries issued by the users. We report a field study in which 13 participants installed a screen recording and digital activity monitoring system on their laptops for 14 days, resulting in a record of all Web search queries and the associated context data. A query augmentation (QAug) model was built to expand the original query with semantically related terms. The effects of context window and source were determined by training context models with temporally varying context windows and varying application sources. The context models were then utilized to re-rank the results of the QAug model. We evaluate the context models by using the Web document rankings of the original query as a control condition compared against various experimental conditions: (1) a search context condition in which the context was sourced from search history; (2) a non-search context condition in which the context was sourced from all interactions excluding search history; (3) a comprehensive context condition in which the context was sourced from both search and non-search histories; and (4) an application-specific condition in which the context was sourced from interaction histories captured on a specific application type. Our results indicated that incorporating more contextual information significantly improved Web search rankings as measured by the positions of the documents on which users clicked in the search result pages. The effects and importance of different context windows and application sources, along with different query types, are analyzed, and their impact on Web search performance is discussed.
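A toy sketch of the augment-then-re-rank idea described in this abstract (term selection by raw frequency in a context window stands in for the DHP-based context model; all names and data are hypothetical):

```python
from collections import Counter

def augment_query(query, context_texts, k=3):
    """Expand a query with the k most frequent context terms not
    already in it (a crude stand-in for a semantic context model)."""
    q_terms = set(query.lower().split())
    counts = Counter(
        t for text in context_texts for t in text.lower().split()
        if t not in q_terms
    )
    return query.lower().split() + [t for t, _ in counts.most_common(k)]

def rerank(docs, aug_terms):
    """Re-rank candidate documents by overlap with the augmented query."""
    def score(doc):
        words = set(doc.lower().split())
        return sum(1 for t in aug_terms if t in words)
    return sorted(docs, key=score, reverse=True)

ctx = ["deep learning tutorial", "neural network learning notes"]
aug = augment_query("network course", ctx, k=2)
ranked = rerank(["cooking recipes", "neural network learning course"], aug)
```

The study's conditions (1)–(4) correspond to different choices of `context_texts`: search history only, non-search interactions only, both, or a single application type.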
A session-based recommender system (SBRS) captures users’ evolving behaviors and recommends the next item by profiling users in terms of the items in a session. User intent and user preference are two factors affecting their decisions. Specifically, the former narrows the selection scope to certain item types, while the latter helps to compare items of the same type. Most SBRSs assume that one arbitrary user intent dominates a session when making a recommendation. However, this oversimplifies the reality that a session may involve multiple types of items conforming to different intents. In current SBRSs, items conforming to different user intents interfere with one another when profiling users, since only one user intent is considered. Explicitly identifying and differentiating items conforming to various user intents can address this issue and model the rich contextual information of a session. To this end, we design a framework that models user intent and preference explicitly, empowering the two factors to play their distinctive roles. Accordingly, we propose a key-array memory network (KA-MemNN) with a hierarchical intent tree to model coarse-to-fine user intents. The two-layer weighting unit (TLWU) in KA-MemNN detects user intents and generates intent-specific user profiles. Furthermore, the hierarchical semantic component (HSC) integrates multiple sets of intent-specific user profiles along with different user intent distributions to model a multi-intent user profile. The experimental results on real-world datasets demonstrate the superiority of KA-MemNN over selected state-of-the-art methods.
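The intent-detection and profile-mixing steps can be sketched as follows (a minimal NumPy illustration; the similarity-based soft assignment is an assumed stand-in for the two-layer weighting unit, and the flat set of intents ignores the hierarchical intent tree):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def detect_intents(session_items, intent_keys):
    """Soft-assign a session to intents by similarity with intent
    keys (a crude stand-in for the TLWU described above)."""
    s = session_items.mean(axis=0)           # summarize the session
    return softmax(intent_keys @ s)          # (k,) intent distribution

def multi_intent_profile(intent_profiles, intent_weights):
    """Mix intent-specific user profiles into one multi-intent
    profile, weighted by the detected intent distribution."""
    return intent_weights @ intent_profiles  # (d,)

rng = np.random.default_rng(1)
items = rng.normal(size=(5, 8))              # item embeddings in a session
keys = rng.normal(size=(3, 8))               # 3 intent keys
profiles = rng.normal(size=(3, 8))           # intent-specific profiles
w = detect_intents(items, keys)
profile = multi_intent_profile(profiles, w)
```

Because the weights sum to one, items conforming to a minority intent still contribute to the final profile instead of being drowned out by the dominant intent.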
An increasing number of detection methods based on computer vision are applied to detect cracks in water conservancy infrastructure. However, most studies directly use existing feature extraction networks, designed for open-source datasets, to extract crack information. Because the distribution and pixel features of dam cracks differ from those datasets, the extracted crack information is incomplete. In this paper, a deep learning-based network for dam surface crack detection is proposed, which mainly addresses the semantic segmentation of cracks on the dam surface. In particular, we design a shallow encoding network to extract features of crack images based on a statistical analysis of cracks. Further, to enhance the relevance of contextual information, we introduce an attention module into the decoding network. During training, we use the sum of Cross-Entropy and Dice Loss as the loss function to overcome data imbalance. The quantitative information of cracks is extracted by the imaging principle after morphological algorithms are used to extract the morphological features of the predicted result. We built a manually annotated dataset containing 1577 images to verify the effectiveness of the proposed method, on which it achieves state-of-the-art performance. Specifically, the precision, recall, IoU, F1-measure, and accuracy reach 90.81%, 81.54%, 75.23%, 85.93%, and 99.76%, respectively, and the quantification error of cracks is less than 4%.
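The combined loss can be written down directly (a minimal NumPy sketch for binary crack masks; the exact weighting and smoothing constants used by the authors are not stated, so equal weighting and a small epsilon are assumptions):

```python
import numpy as np

def ce_dice_loss(pred, target, eps=1e-7):
    """Sum of binary cross-entropy and Dice loss, the combination
    used above to counter foreground/background imbalance.
    pred: foreground probabilities in (0, 1); target: 0/1 mask."""
    p = np.clip(pred, eps, 1 - eps)
    ce = -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
    inter = (p * target).sum()
    dice = 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)
    return ce + dice

target = np.array([[1.0, 0.0], [0.0, 1.0]])
good = ce_dice_loss(target, target)      # near-perfect prediction
bad = ce_dice_loss(1 - target, target)   # inverted prediction
```

The Dice term depends only on the overlap with the (sparse) crack foreground, so it keeps the gradient signal meaningful even when crack pixels are a tiny fraction of the image, which is where plain cross-entropy struggles.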
Projectile technology is commonly viewed as a significant contributor to past human subsistence and, consequently, to our evolution. Due to the allegedly central role of projectile weapons in the food-getting strategies of Upper Palaeolithic people, typo-technological changes in the European lithic record have often been linked to supposed developments in hunting weaponry. Yet, relatively little reliable functional data is currently available that would aid the detailed reconstruction of past weapon designs. In this paper, we take a use-wear approach to the backed tool assemblages from the Recent and Final Gravettian layers (Levels 3 and 2) of Abri Pataud (Dordogne, France). Our use of strict projectile identification criteria relying on combinations of low and high magnification features and our critical view of the overlap between production and use-related fractures permitted us to confidently identify a large number of used armatures in both collections. By isolating lithic projectiles with the strongest evidence of impact and by recording wear attributes on them in detail, we could establish that the hunting equipment used during the Level 3 occupations involved both lithic weapon tips and composite points armed with lithic inserts. By contrast, the Level 2 assemblage reflects a heavy reliance on composite points in hunting reindeer and other game. Instead of an entirely new weapon design, the Level 2 collection therefore marks a shift in weapon preferences. Using recent faunal data, we discuss the significance of the observed diachronic change from the point of view of prey choice, seasonality, and social organisation of hunting activities. Our analysis shows that to understand their behavioural significance, typo-technological changes in the lithic record must be viewed in the light of functional data and detailed contextual information.
Business competency emerges in the flexibility and reliability of the services that an enterprise provides. Achieving this requires executing business processes on a context-aware business process management suite that is equipped with monitoring, modeling, and adaptation mechanisms and is smart enough to react properly, using adaptation strategies at runtime. In this paper, a context-aware architecture is described that brings adaptation to common business process execution software. The architecture comes with a how-to-apply methodology and is established on process standards such as Business Process Model and Notation (BPMN) and Business Process Execution Language (BPEL). It follows the MAPE-K adaptation cycle, in which the knowledge, specifically contextual information and its related semantic rules serving as the input of the adaptation unit, is modeled in our context ontology, which is also extensible for domain-specific purposes. Furthermore, to support separation of concerns, we decoupled event-driven adaptation requirements from process instances; these requirements are triggered based on ontology reasoning. The architecture also supports fuzzy-based planning and extensible adaptation realization mechanisms to face new or changing situations adequately. We characterize our work in comparison with related studies based on five key adaptation metrics and evaluate it using an online learning management system case study.
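The MAPE-K cycle referenced above can be illustrated with a tiny loop (purely schematic; condition/adaptation rule pairs stand in for the ontology reasoning and fuzzy planner, and all names and thresholds are invented for the example):

```python
def mape_k_step(context, rules):
    """One MAPE-K cycle over a monitored context snapshot.
    rules: list of (condition, adaptation) pairs, a stand-in for
    the ontology-based semantic rules described above."""
    # Monitor: 'context' is the latest snapshot of contextual data.
    # Analyze: find rules whose condition holds in this context.
    triggered = [adapt for cond, adapt in rules if cond(context)]
    # Plan: pick the first applicable adaptation (a trivial planner).
    plan = triggered[0] if triggered else None
    # Execute: apply the adaptation to produce a new process state.
    return plan(context) if plan else context

rules = [
    (lambda c: c["load"] > 0.8,
     lambda c: {**c, "instances": c["instances"] + 1}),   # scale up
    (lambda c: c["load"] < 0.2,
     lambda c: {**c, "instances": max(1, c["instances"] - 1)}),  # scale down
]
state = mape_k_step({"load": 0.9, "instances": 2}, rules)
```

The Knowledge part of MAPE-K corresponds here to the `rules` list; in the architecture described above it is the context ontology, which also lets new domain-specific rules be added without touching the process instances.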
Current two-stage object detectors extract the local visual features of Regions of Interest (RoIs) for object recognition and bounding-box regression. However, using only local visual features loses global contextual dependencies, which help recognize objects with featureless appearances and suppress false detections. To tackle this problem, a simple framework named Global Contextual Dependency Network (GCDN) is presented to enhance the classification ability of two-stage detectors. GCDN mainly consists of two components, a Context Representation Module (CRM) and a Context Dependency Module (CDM). Specifically, the CRM is proposed to construct multi-scale context representations, so that contextual information can be fully explored at different scales. Moreover, the CDM is designed to capture global contextual dependencies. GCDN includes multiple CDMs; each CDM uses local RoI features and a single-scale context representation to generate single-scale contextual RoI features via an attention mechanism. Finally, the contextual RoI features generated independently by the parallel CDMs are combined with the original RoI features to aid classification. Experiments on the MS-COCO 2017 benchmark show that our approach brings consistent improvements for two-stage detectors.
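One CDM's attention step can be sketched as follows (a minimal NumPy illustration with scaled dot-product attention and feature concatenation as the fusion; the actual scoring and fusion operators of GCDN are not specified in the abstract, so these are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextual_roi_features(roi, ctx):
    """Let each RoI attend over one single-scale context
    representation and fuse the result with its local features.
    roi: (n_roi, d) local RoI features;
    ctx: (n_ctx, d) flattened context positions at one scale."""
    attn = softmax(roi @ ctx.T / np.sqrt(roi.shape[1]), axis=1)
    ctx_feat = attn @ ctx                    # (n_roi, d) contextual part
    return np.concatenate([roi, ctx_feat], axis=1)

rng = np.random.default_rng(2)
roi = rng.normal(size=(3, 16))               # 3 RoIs, 16-dim features
ctx = rng.normal(size=(10, 16))              # one scale of context
out = contextual_roi_features(roi, ctx)
```

With multiple CDMs, this function would be applied once per scale of the CRM output, and the per-scale results combined with the original RoI features before the classification head.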
To date, it is still unclear whether there is a systematic pattern in the errors made in eyewitness recall and whether certain features of a person are more likely to lead to false identification. Moreover, we do not know the extent to which systematic errors impact identification of a person from their body rather than solely their face. To address this, based on the contextual model of eyewitness identification (CMEI; Osborne & Davies, 2014, Applied Cognitive Psychology, 28, 392–402), we hypothesized that, having framed a target as the perpetrator of a violent crime, participants would recall that target person as appearing more like a stereotypical criminal (i.e., more threatening). In three separate experiments, participants were first presented with either no frame, a neutral frame, or a criminal frame (perpetrators of a violent crime) accompanying a target (either a face or a body). Participants were then asked to identify the original target from a selection of people varying in facial threat or body musculature. Contrary to our hypotheses, we found no evidence of bias. However, identification accuracy was highest for the most threatening target bodies, those high in musculature, as well as for bodies paired with detailed neutral contextual information. Overall, these findings suggest that while no systematic bias exists in the recall of criminal bodies, the nature of the body itself and the context in which it is presented can significantly impact identification accuracy.
To date, most forensics methods have paid more attention to natural-content images. To expand the application of image forensics technology, this paper investigates forgery detection for certificate images, which directly represent people’s rights and interests. Variable tampered-region scales and diverse manipulation types are two typical characteristics of fake certificate images. To tackle this task, a novel method called Multi-level Feature Attention Network (MFAN) is proposed. MFAN is built on an encoder–decoder network structure. To extract features with rich scale information in the encoder, on the one hand, we employ Atrous Spatial Pyramid Pooling (ASPP) on the final layer of a pre-trained residual network to capture contextual information at different scales; on the other hand, low-level features are concatenated to ensure sensitivity to small targets. Furthermore, the resulting multi-level features are recalibrated along the channel dimension to suppress irrelevant information and enhance the tampered regions, guiding MFAN to adapt to diverse manipulation traces. In the decoder module, the attentive feature maps are convolved and upsampled to effectively generate the prediction mask. Experimental results indicate that the proposed method outperforms several state-of-the-art forensics methods.
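The channel recalibration step can be illustrated with a squeeze-and-excitation-style gate (a minimal NumPy sketch; the abstract does not specify the recalibration mechanism, so the SE-style pooling/gating below is one common instantiation, not MFAN's exact design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibrate(fmap, W1, W2):
    """Pool each channel globally, compute per-channel gates in
    (0, 1), and reweight channels so that informative ones are
    kept and irrelevant ones are suppressed.
    fmap: (c, h, w); W1: (c // r, c); W2: (c, c // r)."""
    squeeze = fmap.mean(axis=(1, 2))                    # (c,) global pool
    excite = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0))  # (c,) gates
    return fmap * excite[:, None, None]                 # reweight channels

rng = np.random.default_rng(3)
fmap = rng.normal(size=(8, 4, 4))                       # 8-channel map
W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 2))
out = channel_recalibrate(fmap, W1, W2)
```

Because every gate lies in (0, 1), the operation can only attenuate channels, never amplify them; channels whose global statistics match tampering traces receive gates near 1 once the weights are trained.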
The article discusses various aspects of the influence of bias on the formation of a forensic expert’s conclusions. The author highlights that the negative effect of bias is especially significant in identification examinations, where conclusions are based on subjective interpretations of the results of mark comparison (toolmark, fingerprint, firearms examinations, and others). The author also notes that there is no clear border between objectivity and subjectivity in forensic examinations. All types of forensic examinations exist on an objective–subjective continuum, which results in conclusions of differing reliability. Since subjectivity is the basis for bias formation, its impact can be minimized in several ways: increasing the “transparency” of documenting the research process, technical analysis and verification of an expert’s opinion, and applying quantitative criteria for evaluating the matching features in compared marks. The most logical way to reduce the influence of bias is to eliminate the causes that give rise to this phenomenon: excessive contextual information provided to the expert, the expert’s deviation from the requirements of methodological recommendations in examining the objects, and various external and internal influences.