semantic relevance
Recently Published Documents

TOTAL DOCUMENTS: 96 (five years: 32)
H-INDEX: 11 (five years: 1)

2022 · Vol 16 (2) · pp. 1-37
Author(s): Hangbin Zhang, Raymond K. Wong, Victor W. Chu

E-commerce platforms rely heavily on automatic personalized recommender systems, e.g., collaborative filtering models, to improve customer experience. Several hybrid models have recently been proposed to address the deficiencies of existing models, but their performance drops significantly when the dataset is sparse. Most recent works fail to fully address this shortcoming; at best, some alleviate the problem by considering content information from either the user side or the item side. In this article, we propose a novel recommender model called Hybrid Variational Autoencoder (HVAE) to improve performance on sparse datasets. Unlike existing approaches, we encode both user and item information into a latent space for semantic relevance measurement. In parallel, we use collaborative filtering to find the implicit factors of users and items, and combine the two outputs to deliver a hybrid solution. In addition, we compare the performance of the Gaussian distribution and the multinomial distribution in learning representations of the textual data. Our experimental results show that HVAE significantly outperforms state-of-the-art models while remaining robust.
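The core idea of combining collaborative-filtering factors with content encodings can be sketched as a toy blend of two dot-product scores. This is an illustrative stand-in, not the HVAE model itself: the dimensions, the `alpha` mixing weight, and the random "encodings" are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 users, 5 items, latent size 3.
n_users, n_items, k = 4, 5, 3

# Collaborative-filtering factors (implicit, learned from interactions).
user_cf = rng.normal(size=(n_users, k))
item_cf = rng.normal(size=(n_items, k))

# Content encodings (stand-ins for VAE latent means over text features).
user_content = rng.normal(size=(n_users, k))
item_content = rng.normal(size=(n_items, k))

def hybrid_score(u, i, alpha=0.5):
    """Blend a CF score with a content-based semantic-relevance score."""
    cf = user_cf[u] @ item_cf[i]
    content = user_content[u] @ item_content[i]
    return alpha * cf + (1 - alpha) * content

scores = np.array([[hybrid_score(u, i) for i in range(n_items)]
                   for u in range(n_users)])
top_item = scores.argmax(axis=1)   # top recommendation per user
```

On sparse data the CF term carries little signal, which is why weighting in a content term can help; in the actual model the blend is learned rather than a fixed `alpha`.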


2021 · Vol 11 (1)
Author(s): Yang Yang, Juan Cao, Yujun Wen, Pengzhou Zhang

Abstract: Generating fluent, coherent, and informative text from structured data is called table-to-text generation. Copying words from the table is a common way to solve the "out-of-vocabulary" problem, but accurate copying is difficult to achieve. To overcome this problem, we propose an auto-regressive framework based on the transformer that combines a copying mechanism and language modeling to generate target texts. First, to help the model better learn the semantic relevance between table and text, we apply a word transformation method that incorporates field and position information into the target text to identify where to copy from. We then propose two auxiliary learning objectives, namely table-text constraint loss and copy loss. Table-text constraint loss is used to effectively model table inputs, whereas copy loss is exploited to precisely copy word fragments from a table. Furthermore, we improve the text search strategy to reduce the probability of generating incoherent and repetitive sentences. The model is verified by experiments on two datasets and obtains better results than the baseline model. On WIKIBIO, the result improves from 45.47 to 46.87 on BLEU and from 41.54 to 42.28 on ROUGE. On ROTOWIRE, the result increases by 4.29% on the CO metric and by 1.93 points on BLEU.
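A copying mechanism of this kind is typically realized pointer-generator-style: the final word distribution mixes a generation distribution with a copy distribution built by scattering attention weights onto the source tokens' vocabulary ids. The sketch below is a minimal numeric illustration under that assumption, not the paper's architecture; all numbers are made up.

```python
import numpy as np

def mix_copy_generate(vocab_dist, attn, src_ids, p_gen, vocab_size):
    """Mix a generation distribution with a copy distribution.

    The copy distribution is formed by accumulating attention mass
    onto the vocabulary ids of the source (table) tokens.
    """
    copy_dist = np.zeros(vocab_size)
    np.add.at(copy_dist, src_ids, attn)   # unbuffered accumulate per id
    return p_gen * vocab_dist + (1 - p_gen) * copy_dist

vocab_size = 6
vocab_dist = np.full(vocab_size, 1 / vocab_size)   # uniform generator
attn = np.array([0.7, 0.2, 0.1])                   # attention over 3 source tokens
src_ids = np.array([2, 2, 4])                      # their vocabulary ids
final = mix_copy_generate(vocab_dist, attn, src_ids,
                          p_gen=0.4, vocab_size=vocab_size)
```

Because both inputs are probability distributions and `p_gen` is in [0, 1], the mixture is itself a valid distribution, and a table token that draws most of the attention (id 2 here) dominates the output.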


2021 · Vol 39 (4) · pp. 1-28
Author(s): Ruijian Xu, Chongyang Tao, Jiazhan Feng, Wei Wu, Rui Yan, ...

Building an intelligent dialogue system that can select a proper response for a multi-turn context is challenging in three respects: (1) the meaning of a context-response pair is built from language units at multiple granularities (e.g., words, phrases, and sub-sentences); (2) both local (e.g., a small window around a word) and long-range (e.g., words across the context and the response) dependencies may exist in dialogue data; and (3) the relationship between the context and the response candidate may rest on multiple relevant semantic clues, or on relatively implicit ones in some real cases. However, existing approaches usually encode the dialogue with a single type of representation, and the interaction between the context and the response candidate is executed in a rather shallow manner, which may lead to an inadequate understanding of dialogue content and hinder recognition of the semantic relevance between context and response. To tackle these challenges, we propose a representation[K]-interaction[L]-matching framework that explores multiple types of deep interactive representations to build context-response matching models for response selection. In particular, we construct different types of representations for utterance-response pairs and deepen them via alternating encoding and interaction. In this way, the model can handle relations between neighboring elements, phrasal patterns, and long-range dependencies during representation, and can make a more accurate prediction through multiple layers of interaction between the context-response pair. Experimental results on three public benchmarks indicate that the proposed model significantly outperforms previous conventional context-response matching models and achieves slightly better results than the BERT model for multi-turn response selection in retrieval-based dialogue systems.
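The contrast the authors draw with "shallow" matching can be made concrete with a deliberately shallow baseline: score a candidate by its best cosine similarity against the individual utterance representations. Everything below is an illustrative toy (one-hot "utterance vectors", hand-picked candidates), not the paper's model, which replaces this single similarity with layered interactions.

```python
import numpy as np

def match_score(context_vecs, response_vec):
    """Shallow context-response matching: cosine similarity between
    each utterance representation and the response candidate,
    aggregated by taking the best-matching turn."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(cos(u, response_vec) for u in context_vecs)

# Three toy utterance vectors (one-hot, purely for illustration).
context = np.eye(3)
on_topic = np.array([0.0, 0.2, 1.0])     # close to the last turn
off_topic = np.array([-1.0, -1.0, -1.0])  # aligned with no turn
```

A shallow scorer like this cannot capture cross-turn or phrasal structure, which is exactly the gap the multi-layer interaction framework targets.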


2021
Author(s): XIU LONG YI, YOU FU, DU LEI ZHENG, XIAO PENG LIU, RONG HUA

Abstract: As cross-domain research combining computer vision and natural language processing, current image captioning research mainly considers how to improve visual features; less attention has been paid to exploiting the inherent properties of language to boost captioning performance. Facing this challenge, we propose a textual attention mechanism, which obtains the semantic relevance between words by scanning all generated words. The Retrospect Network for image captioning (RNIC) proposed in this paper aims to improve the input and prediction processes by using textual attention. Concretely, the textual attention mechanism is applied to the model together with the visual attention mechanism, providing the model's input with the maximum information required for generating captions. In this way, our model learns to attend collaboratively to both visual and textual features. Moreover, the semantic relevance between words obtained by retrospection is used as the basis for prediction, so that the decoder can emulate the human language system and make better predictions based on the already generated content. We evaluate the effectiveness of our model on the COCO image captioning dataset and achieve superior performance over previous methods. We also use an extraction function to extract the hidden-unit information of multiple time steps for prediction, to address the problem of insufficient LSTM prediction information. Experiments show that both models significantly improve the evaluation metrics on the AI CHALLENGER test set.
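The textual attention step described above, scanning already-generated words for semantic relevance to the current decoding state, can be sketched as a standard softmax attention over previous word embeddings. This is a generic illustration under that assumption, not RNIC's actual layer; the vectors are hand-picked.

```python
import numpy as np

def textual_attention(state, prev_word_embs):
    """Attend over already-generated words: softmax over dot-product
    scores yields weights, and the weighted sum is a textual context
    vector summarizing relevance between past words and the state."""
    scores = prev_word_embs @ state
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ prev_word_embs, weights

state = np.array([1.0, 0.0])          # current decoder state
prev = np.array([[1.0, 0.0],          # a word similar to the state
                 [0.0, 1.0]])         # an unrelated word
ctx, w = textual_attention(state, prev)
```

The word aligned with the current state receives the larger weight, so the textual context vector leans toward semantically relevant history, which is the retrospection effect the abstract describes.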


2021 · Vol 9 (1)
Author(s): Ching-Yi Wang, Yu-Er Lin

Abstract: Background: This study investigated the impact of semantic relevance on the ability to comprehend the appearance and function of a product, as presented in images. Methods: The images used the constructs of Simile, Metaphor, and Analogy to correspond to congruent, related, and incongruent semantic structures, and the amplitude of Event-Related Potentials (ERPs) was measured to compare these images with Landscape images. Sixteen participants with design-related educational backgrounds took part in the ERP experiment. Results: The Metaphor image showed a stronger N600 amplitude in the right anterior region of the brain than the Landscape image, and the Analogy image induced a stronger N600 effect in the left and right anterior regions than the Landscape image. However, the Simile image did not trigger the N600. The N600 was triggered when the meaning of the Metaphor and Analogy being presented could not be understood, indicating that greater processing effort was required to comprehend them than was required for Simile. Analogy had a wider N600 distribution than Metaphor in the anterior area, suggesting that Analogy requires higher-level thinking processes and more complex semantic processing mechanisms than Metaphor. Conclusions: The N600 findings imply that an assessment method detecting the semantic relationship between the appearance and function of a product could help determine whether a symbol is suitable to be associated with that product.


Author(s): Qianli Xu, Ana Garcia Del Molino, Jie Lin, Fen Fang, Vigneshwaran Subbaraju, ...

Lifelog analytics is an emerging research area whose technologies embrace the latest advances in machine learning, wearable computing, and data analytics. However, state-of-the-art technologies are still inadequate for distilling voluminous multimodal lifelog data into high-quality insights. In this article, we propose a novel semantic relevance mapping (SRM) method to tackle the problem of lifelog information access. We formulate lifelog image retrieval as a series of mapping processes in which a semantic gap exists between basic semantic attributes and high-level query topics. The SRM serves both as a formalism for constructing a trainable model that bridges the semantic gap and as an algorithm for implementing the training process on real-world lifelog data. Based on the SRM, we propose a computational framework of lifelog analytics to support various applications of lifelog information access, such as image retrieval, summarization, and insight visualization. Systematic evaluations on three challenging benchmarking tasks show the effectiveness of our method.
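The mapping from low-level semantic attributes to high-level query topics can be pictured as a relevance matrix applied to per-image attribute confidences. This is a hypothetical mini-example of the idea, not the SRM training procedure: the attribute names, topic names, and matrix entries are invented for illustration.

```python
import numpy as np

# Hypothetical relevance matrix W: rows are detected low-level
# attributes, columns are high-level query topics.
attributes = ["food", "plate", "screen", "keyboard"]
topics = ["eating", "working"]

W = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9],
              [0.2, 0.8]])

def topic_scores(attr_conf):
    """Map per-image attribute confidences to topic relevance scores."""
    return attr_conf @ W

image = np.array([0.9, 0.7, 0.0, 0.1])   # e.g., a lunch photo
scores = topic_scores(image)
best = topics[int(scores.argmax())]       # -> "eating"
```

In the actual method the entries of such a mapping are trained from labeled lifelog data rather than hand-set, which is what lets it bridge the semantic gap the abstract describes.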


2021
Author(s): Catherine Davies, Anna Richardson

A range of studies investigating how overspecified referring expressions (e.g., the stripy cup to describe a single cup) affect referent identification have found them to slow identification, speed it up, or have no effect on processing speed. To date, these studies have all used adjectives that are semantically arbitrary within the sentential context. In addition to the standard 'informativeness' design that manipulates the presence of contrast sets, we controlled the semantic relevance of adjectives in discourse to reveal whether overspecifying adjectives affect processing when they are relevant to the context (fed the hungry rabbit) compared to when they are not (tickled the hungry rabbit). Using a self-paced reading paradigm with a sample of adult participants (N=31), we found that overspecified noun phrases were read more slowly than those that distinguished a member of a contrast set. Importantly, this penalty was mitigated when adjectives were semantically relevant. Contrary to classical approaches, we show that modifiers do not necessarily presuppose a set, and that referential and semantic information is integrated rapidly in pragmatic processing. Our data support Fukumura and van Gompel's (2017) meaning-based redundancy hypothesis, which predicts that it is the specific semantic representation of the overspecifying adjective that determines whether a penalty is incurred, rather than generic Gricean expectations. We extend this account using a novel experimental design.


2021 · Vol 13 (3) · pp. 64
Author(s): Jie Yu, Yaliu Li, Chenle Pan, Junwei Wang

Classification of resources can effectively reduce the work of filtering massive academic resources, such as selecting relevant papers and following the latest research by scholars in the same field. However, existing graph neural networks do not take into account the associations between academic resources, leading to unsatisfactory classification results. In this paper, we propose an Association Content Graph Attention Network (ACGAT), which is based on the association features and content attributes of academic resources. Both semantic relevance and academic relevance are introduced into the model. The ACGAT makes full use of the association commonalities and influence information of resources and introduces an attention mechanism to improve the accuracy of academic resource classification. We conducted experiments on a self-built scholar network and two public citation networks. Experimental results show that the ACGAT is more effective than existing classification methods.
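The attention mechanism in a graph attention network weights each neighbor before aggregating its features. The sketch below shows one such aggregation step for a single node in its simplest form; it is a generic GAT-style illustration (with the learned transformation, bias, and nonlinearity omitted), not the ACGAT architecture, and the feature values are made up.

```python
import numpy as np

def gat_aggregate(h, neighbors, a):
    """One attention-weighted aggregation step for a single node:
    score each neighbor's features, softmax the scores, and return
    the weighted sum of neighbor features."""
    feats = h[neighbors]
    scores = feats @ a                    # simple dot-product scoring
    w = np.exp(scores - scores.max())     # numerically stable softmax
    w /= w.sum()
    return w @ feats

# Toy node features (3 nodes, 2-dimensional features).
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
a = np.array([1.0, 0.0])   # attention vector favoring the first feature
agg = gat_aggregate(h, [0, 1, 2], a)
```

In ACGAT the analogous weights would additionally reflect the semantic and academic relevance between resources, so that strongly associated neighbors contribute more to a resource's representation.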

