Discrete Social Recommendation

Author(s):  
Chenghao Liu ◽  
Xin Wang ◽  
Tao Lu ◽  
Wenwu Zhu ◽  
Jianling Sun ◽  
...  

Social recommendation, which aims to improve the performance of traditional recommender systems by incorporating social information, has attracted a broad range of interest. Matrix factorization, one of the most widely used methods, typically represents user/item latent features as continuous vectors. However, the large volume of user/item latent features incurs expensive storage and computation costs, particularly on terminal user devices where the resources available to run the model are very limited. When extra social information is taken into account, precisely extracting the K most relevant items for a given user from massive candidates consumes even more time and memory, posing formidable challenges for efficient and accurate recommendation. A promising remedy is to simply binarize the latent features (obtained in the training phase) and then compute relevance scores via Hamming distance. However, such a two-stage hashing-based learning procedure cannot preserve the original data geometry of the real-valued space and may incur severe quantization loss. To address these issues, this work proposes a novel discrete social recommendation (DSR) method that learns binary codes for users and items in a unified framework while accounting for social information. We further impose balanced and uncorrelated constraints on the objective so that the learned binary codes are informative yet compact, and we develop an efficient optimization algorithm to estimate the model parameters. Extensive experiments on three real-world datasets demonstrate that DSR runs nearly 5 times faster and consumes only 1/37 of its real-valued competitor's memory at the cost of almost no loss in accuracy.
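The two-stage baseline that the abstract critiques can be sketched as follows. This is a minimal illustration, not the paper's DSR method; the function names `binarize` and `hamming_topk` are invented here:

```python
import numpy as np

def binarize(latent):
    # Sign-binarize real-valued latent factors into {0, 1} codes
    # (the post-training quantization step of the two-stage baseline).
    return (latent > 0).astype(np.uint8)

def hamming_topk(user_code, item_codes, k):
    # Rank candidate items by Hamming distance to the user's code
    # (smaller distance = more relevant) and return the top-k indices.
    dists = np.count_nonzero(item_codes != user_code, axis=1)
    return np.argsort(dists, kind="stable")[:k]

# Toy example: 4-bit codes for a user and three candidate items.
user = binarize(np.array([0.3, -1.2, 0.8, 0.1]))
items = np.array([[1, 0, 1, 1], [0, 1, 0, 0], [1, 0, 1, 0]], dtype=np.uint8)
top2 = hamming_topk(user, items, k=2)
```

Because the codes are binary, the ranking needs only bit comparisons rather than floating-point inner products, which is where the speed and memory savings come from.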

Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5755
Author(s):  
Pei Zhang ◽  
Siwei Wang ◽  
Jingtao Hu ◽  
Zhen Cheng ◽  
Xifeng Guo ◽  
...  

With the enormous amount of multi-source data produced by various sensors and feature-extraction approaches, multi-view clustering (MVC) has attracted growing research attention and is widely exploited in data analysis. Most existing multi-view clustering methods rest on the assumption that all views are complete. However, in many real scenarios, multi-view data are often incomplete, e.g., because of hardware failure or incomplete data collection. In this paper, we propose an adaptive weighted graph fusion incomplete multi-view subspace clustering (AWGF-IMSC) method to solve the incomplete multi-view clustering problem. First, to eliminate noise in the original space, we transform the complete original data into latent representations, which contribute to better graph construction for each view. Then, we incorporate feature extraction and incomplete graph fusion into a unified framework in which the two processes negotiate with each other in service of the graph-learning task. A sparse regularization is imposed on the complete graph to make it more robust to view inconsistency. Besides, the importance of different views is learned automatically, further guiding the construction of the complete graph. An effective iterative algorithm with guaranteed convergence is proposed to solve the resulting optimization problem. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed method and its advantages over existing state-of-the-art methods.
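The weighted-fusion step can be sketched as a normalized weighted sum of per-view affinity graphs. In AWGF-IMSC the view weights are learned adaptively within the optimization; in this illustrative sketch they are simply supplied:

```python
import numpy as np

def fuse_graphs(graphs, weights):
    # Fuse per-view affinity graphs into one consensus graph as a
    # normalized weighted sum. A larger weight lets a more reliable
    # view contribute more to the fused graph.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * G for wi, G in zip(w, graphs))

# Two toy 2x2 view graphs fused with equal weights.
fused = fuse_graphs([np.eye(2), np.ones((2, 2))], [1.0, 1.0])
```

The fused graph would then feed a standard spectral-clustering step; handling missing entries per view is the part the full method adds on top of this.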


2008 ◽  
Vol 47 (04) ◽  
pp. 322-327 ◽  
Author(s):  
D. Blokh ◽  
N. Zurgil ◽  
I. Stambler ◽  
E. Afrimzon ◽  
Y. Shafran ◽  
...  

Summary Objectives: Formal diagnostic modeling is an important line of modern biological and medical research. The construction of a formal diagnostic model consists of two stages: first, the estimation of the correlation between model parameters and the disease under consideration; and second, the construction of a diagnostic decision rule using these correlation estimates. A serious drawback of current diagnostic models is the absence of a unified mathematical methodological approach to implementing these two stages. The absence of a unified approach makes the theoretical/biomedical substantiation of diagnostic rules difficult and reduces the efficacy of actual diagnostic model application. Methods: The present study constructs a formal model for breast cancer detection. The diagnostic model is based on information theory. Normalized mutual information is chosen as the measure of relevance between parameters and the patterns studied. The "nearest neighbor" rule is utilized for diagnosis, with the weighted Hamming distance as the distance between elements. The model concomitantly employs cellular fluorescence polarization as the quantitative input parameter and cell receptor expression as the qualitative parameters. Results: Twenty-four healthy individuals and 34 patients (not including the subjects analyzed for the model construction) were tested by the model. Twenty-three healthy subjects and 34 patients were correctly diagnosed. Conclusions: The proposed diagnostic model is an open one, i.e., it can accommodate new additional parameters, which may increase its effectiveness.
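The decision rule described above can be sketched as a nearest-neighbor classifier under a weighted Hamming distance. This is an assumption-laden illustration: in the paper the weights would be derived from normalized mutual information between each parameter and the diagnosis, whereas here they are passed in directly:

```python
import numpy as np

def weighted_hamming(a, b, weights):
    # Weighted Hamming distance: sum the weights of the positions
    # where the two discretized feature vectors disagree.
    return float(np.sum(np.asarray(weights) * (np.asarray(a) != np.asarray(b))))

def nearest_neighbor_diagnose(x, train_X, train_y, weights):
    # "Nearest neighbor" rule: assign the label of the training sample
    # closest to x under the weighted Hamming distance.
    dists = [weighted_hamming(x, t, weights) for t in train_X]
    return train_y[int(np.argmin(dists))]
```

A feature with higher mutual information against the diagnosis would receive a larger weight, so disagreement on it moves a sample further from that neighbor.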


Author(s):  
Shengsheng Qian ◽  
Jun Hu ◽  
Quan Fang ◽  
Changsheng Xu

In this article, we focus on the fake news detection task and aim to automatically identify fake news among vast amounts of social media posts. To date, many approaches have been proposed to detect fake news, including traditional learning methods and deep learning-based models. However, three challenges remain: (i) how to represent social media posts effectively, since post content is varied and highly complicated; (ii) how to propose a data-driven method that increases the model's flexibility to deal with samples in different contexts and news backgrounds; and (iii) how to fully utilize the additional auxiliary information of posts (background knowledge and multi-modal information) for better representation learning. To tackle these challenges, we propose novel Knowledge-aware Multi-modal Adaptive Graph Convolutional Networks (KMAGCN) that capture semantic representations by jointly modeling textual information, knowledge concepts, and visual information in a unified framework for fake news detection. We model posts as graphs and use a knowledge-aware multi-modal adaptive graph learning principle for effective feature learning. Compared with existing methods, the proposed KMAGCN addresses the challenges from three aspects: (1) it models posts as graphs to capture non-consecutive and long-range semantic relations; (2) it proposes a novel adaptive graph convolutional network to handle the variability of graph data; and (3) it leverages textual information, knowledge concepts, and visual information jointly for model learning. We have conducted extensive experiments on three public real-world datasets, and the superior results demonstrate the effectiveness of KMAGCN compared with other state-of-the-art algorithms.


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1005
Author(s):  
Rakan A. Alsowail ◽  
Taher Al-Shehari

As technologies rapidly evolve and become a crucial part of our lives, security and privacy issues have increased significantly. Public and private organizations hold highly confidential data, such as bank accounts, military and business secrets, etc. Competition between organizations is now significantly higher than before, which drives sensitive organizations to spend an excessive portion of their budget on keeping their assets secure from potential threats. Insider threats are more dangerous than external ones, as insiders have legitimate access to their organization's assets. Previous approaches have focused on individual factors of the insider threat problem (e.g., technical profiling), but a broader, integrative perspective is needed. In this paper, we propose a unified framework that incorporates various factors of the insider threat context (technical, psychological, behavioral, and cognitive). The framework is based on a multi-tiered approach that encompasses pre-, in-, and post-countermeasures to address insider threats from an all-encompassing perspective. It considers multiple factors that surround the lifespan of insiders' employment, from before insiders join an organization until after they leave. The framework is applied to real-world insider threat cases and compared with previous work to highlight how it extends and complements existing frameworks. The real value of our framework is that it brings together the various aspects of the insider threat problem based on real-world cases and relevant literature. It can therefore act as a platform for a general understanding of insider threat problems and pave the way toward a holistic insider threat prevention system.


2010 ◽  
Vol 2 (3) ◽  
pp. 489
Author(s):  
M. Basu ◽  
S. Bagchi

The minimum average Hamming distance of binary codes of length n and cardinality M is denoted by b(n, M). All known lower bounds on b(n, M) are useful only when M is at least of size about 2^(n-1)/n. In this paper, for large n, we improve the upper and lower bounds on b(n, M). Keywords: binary code; Hamming distance; minimum average Hamming distance. © 2010 JSR Publications. ISSN: 2070-0237 (Print); 2070-0245 (Online). All rights reserved. DOI: 10.3329/jsr.v2i3.2708. J. Sci. Res. 2 (3), 489-493 (2010)
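The quantity being bounded can be computed directly for a concrete code. The sketch below averages d(x, y) over all M² ordered pairs of codewords, which is one common convention for the average distance of a code; the paper may normalize differently:

```python
def avg_hamming_distance(code):
    # Average Hamming distance of a binary code: sum d(x, y) over all
    # M^2 ordered pairs of codewords (including x = y, which adds 0)
    # and divide by M^2.
    M = len(code)
    total = sum(sum(ca != cb for ca, cb in zip(x, y))
                for x in code for y in code)
    return total / (M * M)
```

b(n, M) is then the minimum of this value over all binary codes of length n with M codewords; the abstract's bounds concern that minimum.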


2019 ◽  
Vol 27 - 2017 - Special... ◽  
Author(s):  
Abir Gorrab ◽  
Ferihane Kboubi ◽  
Henda Ghézala

The explosion of Web 2.0 and social networks has created an enormous and rewarding source of information that has motivated researchers in different fields to exploit it. Our work revolves around the issues of accessing and identifying social information and using it to build a user profile enriched with a social dimension, which is then exploited in a personalization and recommendation process. We study several approaches to social IR (Information Retrieval), distinguished by the type of social information incorporated. We also study various social recommendation approaches, classified by the type of recommendation. We then present a study of techniques for modeling the social dimension of the user profile, followed by a critical discussion. Finally, we propose our social recommendation approach, which integrates an advanced social user profile model.


Author(s):  
Lixin Fan ◽  
Kam Woh Ng ◽  
Ce Ju ◽  
Tianyu Zhang ◽  
Chee Seng Chan

This paper proposes a novel deep polarized network (DPN) for learning to hash, in which each channel of the network outputs is pushed far away from zero by a differentiable bit-wise hinge-like loss dubbed the polarization loss. Reformulated within a generic Hamming Distance Metric Learning framework [Norouzi et al., 2012], the proposed polarization loss bypasses the requirement to prepare pairwise labels for (dis-)similar items and yet strictly bounds from above the pairwise Hamming distance based losses. The intrinsic connection between pairwise and pointwise label information, as disclosed in this paper, brings about the following methodological improvements: (a) we may directly employ the proposed differentiable polarization loss with no large deviation from the target Hamming distance based loss; and (b) the subtask of assigning binary codes becomes extremely simple: even random codes assigned to each class suffice to yield state-of-the-art performance, as demonstrated on the CIFAR10, NUS-WIDE, and ImageNet100 datasets.
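The bit-wise hinge-like behavior described above can be sketched as follows. This is a hedged reading of the abstract, not the paper's exact formulation: each output channel is penalized whenever its magnitude falls inside a margin, and the margin value here is an assumption:

```python
import numpy as np

def polarization_loss(outputs, margin=1.0):
    # Bit-wise hinge-like loss: a channel pays max(margin - |h|, 0),
    # so any activation inside the margin is pushed away from zero,
    # which makes subsequent sign-binarization lossless in the limit.
    return float(np.mean(np.maximum(margin - np.abs(outputs), 0.0)))
```

Outputs already polarized beyond the margin incur zero loss, so the gradient acts only on channels whose sign is still ambiguous.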


2020 ◽  
Vol 34 (05) ◽  
pp. 9410-9417
Author(s):  
Min Yang ◽  
Chengming Li ◽  
Fei Sun ◽  
Zhou Zhao ◽  
Ying Shen ◽  
...  

Real-time event summarization is an essential task in the natural language processing and information retrieval areas. Despite the progress of previous work, generating relevant, non-redundant, and timely event summaries remains challenging in practice. In this paper, we propose a Deep Reinforcement learning framework for real-time Event Summarization (DRES), which shows promising performance in resolving all three challenges (i.e., relevance, non-redundancy, timeliness) in a unified framework. Specifically, we (i) devise a hierarchical cross-attention network with intra- and inter-document attention to integrate important semantic features within and between the query and the input document for better text matching; in addition, relevance prediction is leveraged as an auxiliary task to strengthen document modeling and help extract relevant documents; (ii) propose a multi-topic dynamic memory network to capture the sequential patterns of the different topics belonging to the event of interest and temporally memorize the input facts from the evolving document stream, avoiding the extraction of redundant information at each time step; and (iii) consider both the historical dependencies and the future uncertainty of the document stream, exploiting reinforcement learning to generate relevant and timely summaries. Experimental results on two real-world datasets demonstrate the advantages of the DRES model, with significant improvements in generating relevant, non-redundant, and timely event summaries over state-of-the-art methods.


2020 ◽  
Vol 34 (01) ◽  
pp. 19-26 ◽  
Author(s):  
Chong Chen ◽  
Min Zhang ◽  
Yongfeng Zhang ◽  
Weizhi Ma ◽  
Yiqun Liu ◽  
...  

Recent studies on recommendation have largely focused on exploring state-of-the-art neural networks to improve the expressiveness of models, while typically applying the Negative Sampling (NS) strategy for efficient learning. Despite their effectiveness, two important issues have not been well considered in existing methods: 1) NS suffers from dramatic fluctuation, making it difficult for sampling-based methods to achieve optimal ranking performance in practical applications; 2) although heterogeneous feedback (e.g., view, click, and purchase) is widespread in many online systems, most existing methods leverage only one primary type of user feedback, such as purchase. In this work, we propose a novel non-sampling transfer learning solution, named Efficient Heterogeneous Collaborative Filtering (EHCF), for Top-N recommendation. It can not only model fine-grained user-item relations but also efficiently learn model parameters from the whole of the heterogeneous data (including all unlabeled data) with rather low time complexity. Extensive experiments on three real-world datasets show that EHCF significantly outperforms state-of-the-art recommendation methods in both traditional (single-behavior) and heterogeneous scenarios. Moreover, EHCF shows significant improvements in training efficiency, making it more applicable to real-world large-scale systems. Our implementation has been released to facilitate further development of efficient whole-data-based neural methods.

