A Novel Hybrid Similarity Calculation Model

2017 · Vol 2017 · pp. 1-9
Author(s): Xiaoping Fan, Zhijie Chen, Liangkun Zhu, Zhifang Liao, Bencai Fu

This paper addresses the problems of similarity calculation in traditional nearest-neighbor collaborative filtering recommendation algorithms, especially their failure to describe dynamic user preferences. To address user interest drift, a new hybrid similarity calculation model is proposed. The model consists of two parts: on the one hand, it uses function fitting to describe users' rating behaviors and rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. The two parts are then combined into a hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of traditional recommendation algorithms.
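
Since the abstract does not spell out the formulas, the following is a minimal sketch of the two ingredients in Python; the polynomial trend fit, the Random Forest usage, and the linear blend with weight alpha are illustrative assumptions, not the authors' exact model.

```python
# Hedged sketch: trend-based similarity (interest drift) plus an attribute-based
# similarity from a Random Forest, blended linearly. All choices here are
# illustrative assumptions, not the paper's exact formulas.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def trend_similarity(times_u, ratings_u, times_v, ratings_v, deg=1):
    """Fit a low-degree polynomial to each user's ratings over time and
    compare the fitted coefficients (captures rating behavior and drift)."""
    cu = np.polyfit(times_u, ratings_u, deg)
    cv = np.polyfit(times_v, ratings_v, deg)
    return 1.0 / (1.0 + np.linalg.norm(cu - cv))

def attribute_similarity(model, attrs_u, attrs_v):
    """Compare two users through a Random Forest trained on attribute features."""
    pu, pv = model.predict([attrs_u, attrs_v])
    return 1.0 / (1.0 + abs(pu - pv))

def hybrid_similarity(sim_trend, sim_attr, alpha=0.5):
    # alpha would be tuned on validation data.
    return alpha * sim_trend + (1.0 - alpha) * sim_attr

# Toy usage: (age, gender) -> mean rating, plus two users' rating histories.
rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit([[25, 1], [40, 0], [33, 1], [52, 0]], [3.5, 4.0, 2.5, 3.0])
s1 = trend_similarity([1, 2, 3, 4], [4, 4, 3, 2], [1, 2, 3, 4], [5, 4, 4, 3])
s2 = attribute_similarity(rf, [25, 1], [40, 0])
print(hybrid_similarity(s1, s2))
```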

2022 · Vol 2022 · pp. 1-8
Author(s): Xiushan Zhang

Based on an understanding and comparison of the main recommendation algorithms, this paper focuses on collaborative filtering and proposes a collaborative filtering recommendation algorithm with an improved user model. First, the algorithm accounts for the rating differences caused by different user scoring habits and adopts a decoupling normalization method to normalize the user rating data; second, considering that user interest shifts and fades over time, a forgetting function is used to simulate the decay of ratings, and a time-forgetting weight is introduced into user ratings to improve recommendation accuracy; finally, the similarity calculation for the nearest-neighbor set is improved: an effective weight factor is introduced into the Pearson similarity to obtain a more accurate and reliable nearest-neighbor set. The algorithm builds an offline user model, which gives it better recommendation efficiency. Two groups of experiments were designed based on the mean absolute error (MAE): one tested the parameters of the algorithm, and the other compared the proposed algorithm with other algorithms. The experimental results show that the proposed method achieves better recommendation accuracy and efficiency.
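
By way of illustration only, a small sketch of the forgetting weight and the weighted Pearson step is given below; the exponential decay, its half-life, and the co-rating-ratio weight factor are assumptions, and the decoupling normalization step is omitted.

```python
# Hedged sketch of two of the three modifications: a time-forgetting weight and
# a weighted Pearson similarity. Constants and the weight factor are assumptions.
import numpy as np

def forgetting_weight(t_rating, t_now, half_life=90.0):
    """Exponential forgetting: older ratings contribute less (half-life in days)."""
    return np.exp(-np.log(2.0) * (t_now - t_rating) / half_life)

def weighted_pearson(ru, rv):
    """Pearson similarity on co-rated items (NaN = unrated), scaled by the share
    of co-rated items so users with few common items are not over-trusted."""
    common = ~np.isnan(ru) & ~np.isnan(rv)
    if common.sum() < 2:
        return 0.0
    u, v = ru[common], rv[common]
    if u.std() == 0.0 or v.std() == 0.0:
        return 0.0
    pearson = np.corrcoef(u, v)[0, 1]
    weight = common.sum() / len(ru)          # assumed "effective weight factor"
    return weight * pearson

ru = np.array([5.0, np.nan, 3.0, 4.0])
rv = np.array([4.0, 2.0, 3.0, np.nan])
print(weighted_pearson(ru, rv), forgetting_weight(t_rating=10.0, t_now=100.0))
```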


Entropy · 2019 · Vol 21 (2) · pp. 205
Author(s): Shanyun Liu, Yunquan Dong, Pingyi Fan, Rui She, Shuo Wan

This paper focuses on the problem of finding a data recommendation strategy based on user preference and the system's expected revenue. To this end, we formulate this problem as an optimization: the recommendation mechanism is designed to be as close to the user behavior as possible under a revenue constraint. In fact, the optimal recommendation distribution is the one closest to the utility distribution in the sense of relative entropy that satisfies the expected-revenue constraint. We show that the optimal recommendation distribution follows the same form as the message importance measure (MIM) if the target revenue is reasonable, i.e., neither too small nor too large. Therefore, the optimal recommendation distribution can be regarded as the normalized MIM, where the parameter, called the importance coefficient, represents the concern of the system and switches the attention of the system over data sets with different occurrence probabilities. By adjusting the importance coefficient, our MIM-based framework of data recommendation can then be applied to systems with various requirements and data distributions. The obtained results therefore illustrate the physical meaning of MIM from the data recommendation perspective and validate the rationality of MIM in one aspect.
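
As a numerical illustration of the optimization just described (minimizing relative entropy to the utility distribution under an expected-revenue constraint), the sketch below solves it by exponential tilting and bisection; the variable names and the target value are illustrative, and the paper's normalized-MIM parametrization of the tilt (the importance coefficient) is not reproduced exactly.

```python
# Hedged sketch: the distribution closest to p in relative entropy with a fixed
# expected revenue is an exponentially tilted p; the tilt parameter plays the
# role of the importance coefficient. p, r, and the target are illustrative.
import numpy as np

def tilted(p, r, lam):
    w = p * np.exp(lam * r)
    return w / w.sum()

def recommend_distribution(p, r, revenue_target, lo=-50.0, hi=50.0, iters=200):
    """Bisection on the tilt so that the expected revenue under q hits the target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if tilted(p, r, mid) @ r < revenue_target:
            lo = mid
        else:
            hi = mid
    return tilted(p, r, 0.5 * (lo + hi))

p = np.array([0.5, 0.3, 0.2])   # utility (user preference) distribution
r = np.array([1.0, 2.0, 5.0])   # per-item revenue
q = recommend_distribution(p, r, revenue_target=2.5)
print(q, q @ r)
```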


2021 · Vol 2021 · pp. 1-9
Author(s): Hailong Chen, Haijiao Sun, Miao Cheng, Wuyue Yan

The collaborative filtering recommendation algorithm is one of the most researched and widely used recommendation algorithms in personalized recommendation systems. To address the data sparsity problem of the traditional collaborative filtering recommendation algorithm, which leads to low recommendation accuracy and efficiency, an improved collaborative filtering algorithm is proposed in this paper. The algorithm is improved in three aspects: first, since the traditional rating similarity calculation relies excessively on commonly rated items, the Bhattacharyya similarity is introduced into the traditional calculation formula; second, a trust weight is added to accurately calculate the direct trust value, and a trust transfer mechanism is introduced to calculate the indirect trust value between users; finally, user similarity and user trust are integrated, and the prediction is generated by a trust-weighted method. Experiments show that the proposed algorithm can effectively improve the prediction accuracy of recommendations.
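
For concreteness, here is a small sketch of the Bhattacharyya ingredient on rating-value histograms; the exact combination with the traditional similarity formula and the trust-transfer computation are not reproduced, and all names are illustrative.

```python
# Hedged sketch of the Bhattacharyya part: compare rating-value histograms, so
# two users/items can be related even with few or no co-rated items.
import numpy as np

def rating_histogram(ratings, levels=(1, 2, 3, 4, 5)):
    counts = np.array([(np.asarray(ratings) == l).sum() for l in levels], float)
    return counts / counts.sum() if counts.sum() else counts

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete rating distributions."""
    return float(np.sum(np.sqrt(p * q)))

hist_a = rating_histogram([5, 4, 4, 5])
hist_b = rating_histogram([4, 4, 3, 5, 5])
print(bhattacharyya(hist_a, hist_b))
```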


2010 · Vol 23 (2) · pp. 025601
Author(s): Monodeep Chakraborty, A N Das, Atisdipankar Chakrabarti

2021 · Vol 2021 (2)
Author(s): Changrim Ahn, Matthias Staudacher

Abstract We refine the notion of eclectic spin chains introduced in [1] by including a maximal number of deformation parameters. These models are integrable, nearest-neighbor n-state spin chains with exceedingly simple non-hermitian Hamiltonians. They turn out to be non-diagonalizable in the multiparticle sector (n > 2), where their “spectrum” consists of an intricate collection of Jordan blocks of arbitrary size and multiplicity. We show how and why the quantum inverse scattering method, sought to be universally applicable to integrable nearest-neighbor spin chains, essentially fails to reproduce the details of this spectrum. We then provide, for n = 3, detailed evidence by a variety of analytical and numerical techniques that the spectrum is not “random”, but instead shows surprisingly subtle and regular patterns that moreover exhibit universality for generic deformation parameters. We also introduce a new model, the hypereclectic spin chain, where all parameters are zero except for one. Despite the extreme simplicity of its Hamiltonian, it still seems to reproduce the above “generic” spectra as a subset of an even more intricate overall spectrum. Our models are inspired by parts of the one-loop dilatation operator of a strongly twisted, double-scaled deformation of $\mathcal{N} = 4$ Super Yang-Mills Theory.
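
As a generic illustration only (the specific eclectic or hypereclectic Hamiltonian is not reproduced here), the sketch below shows one standard numerical way to expose Jordan-block structure of a non-hermitian matrix: reading off block sizes for a given eigenvalue from the rank sequence of powers of (H − λI). Floating-point rank decisions become delicate for large non-normal matrices, so this is only a toy check.

```python
# Hedged sketch: Jordan-block sizes for eigenvalue lam of a non-hermitian H,
# from ranks of powers of (H - lam*I). Not the authors' Hamiltonian or method.
import numpy as np

def jordan_block_sizes(H, lam, tol=1e-8):
    n = H.shape[0]
    A = H - lam * np.eye(n)
    ranks = [n]                       # rank(A^0), rank(A^1), ..., rank(A^n)
    P = np.eye(n)
    for _ in range(n):
        P = P @ A
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
    at_least = [ranks[k - 1] - ranks[k] for k in range(1, n + 1)]  # blocks of size >= k
    sizes = []
    for k in range(n, 0, -1):
        exactly_k = at_least[k - 1] - (at_least[k] if k < n else 0)
        sizes += [k] * exactly_k
    return sizes

# Toy check: a single 3x3 Jordan block with eigenvalue 0.
H = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
print(jordan_block_sizes(H, lam=0.0))   # -> [3]
```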


2021 · Vol 11 (1)
Author(s): Naotomo Takemura, Kenta Takata, Masato Takiguchi, Masaya Notomi

Abstract The Kuramoto model is a mathematical model for describing the collective synchronization phenomena of coupled oscillators. We theoretically demonstrate that an array of coupled photonic crystal lasers emulates the Kuramoto model with non-delayed nearest-neighbor coupling (the local Kuramoto model). Our novel strategy employs indirect coupling between lasers via additional cold cavities. By installing cold cavities between laser cavities, we avoid the strong coupling of lasers and realize ideal mutual injection-locking with effective non-delayed dissipative coupling. First, after discussing the limit cycle interpretation of laser oscillation, we demonstrate the synchronization of two indirectly coupled lasers by numerically simulating coupled-mode equations. Second, by performing a phase reduction analysis, we show that laser dynamics in the proposed device can be mapped to the local Kuramoto model. Finally, we briefly demonstrate that a chain of indirectly coupled photonic crystal lasers actually emulates the one-dimensional local Kuramoto chain. We also argue that our proposed structure, which consists of periodically aligned cold cavities and laser cavities, will best be realized by using state-of-the-art buried multiple quantum well photonic crystals.
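
For reference, a minimal numerical sketch of the one-dimensional local Kuramoto chain that the laser array is argued to emulate is given below (Euler integration, open boundaries); the coupling strength, disorder, and step size are illustrative, and the coupled-mode laser equations themselves are not reproduced.

```python
# Hedged sketch: d(theta_i)/dt = omega_i + K * sum over nearest neighbors of
# sin(theta_j - theta_i), integrated with a simple Euler scheme.
import numpy as np

def local_kuramoto_step(theta, omega, K, dt):
    coupling = np.zeros_like(theta)
    coupling[:-1] += np.sin(theta[1:] - theta[:-1])   # right neighbor
    coupling[1:] += np.sin(theta[:-1] - theta[1:])    # left neighbor
    return theta + dt * (omega + K * coupling)

rng = np.random.default_rng(0)
N = 16
theta = rng.uniform(0.0, 2.0 * np.pi, N)
omega = rng.normal(0.0, 0.1, N)                      # small frequency disorder
for _ in range(20000):
    theta = local_kuramoto_step(theta, omega, K=0.5, dt=0.01)
print(abs(np.exp(1j * theta).mean()))                # Kuramoto order parameter
```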


2020 · pp. 1-17
Author(s): Francisco Javier Balea-Fernandez, Beatriz Martinez-Vega, Samuel Ortega, Himar Fabelo, Raquel Leon, ...

Background: Sociodemographic data indicate a progressive increase in life expectancy and in the prevalence of Alzheimer’s disease (AD). AD has become one of the greatest public health problems. Its etiology involves both non-modifiable and modifiable factors. Objective: This study aims to develop a processing framework based on machine learning (ML) and optimization algorithms to study sociodemographic, clinical, and analytical variables, selecting the best combination among them for accurate discrimination between controls and subjects with major neurocognitive disorder (MNCD). Methods: This research is based on an observational-analytical design. Two research groups were established: an MNCD group (n = 46) and a control group (n = 38). ML and optimization algorithms were employed to automatically diagnose MNCD. Results: Twelve out of 37 variables were identified in the validation set as the most relevant for MNCD diagnosis. A sensitivity of 100% and a specificity of 71% were achieved using a Random Forest classifier. Conclusion: ML is a potential tool for the automatic prediction of MNCD that can be applied to relatively small preclinical and clinical data sets. These results can be interpreted as supporting the influence of the environment on the development of AD.
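
A simplified sketch of the classification step, in Python with scikit-learn, follows; the toy data, the univariate feature selection, and the hyperparameters are placeholders, since the study uses its own optimization-based variable selection.

```python
# Hedged sketch: select 12 of 37 variables and classify MNCD vs. control with a
# Random Forest, then report sensitivity/specificity. Data here are random toys.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((84, 37))                 # 84 subjects (46 MNCD + 38 controls), 37 variables
y = np.array([1] * 46 + [0] * 38)        # 1 = MNCD, 0 = control
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

selector = SelectKBest(f_classif, k=12).fit(X_tr, y_tr)       # keep 12 variables
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)

pred = clf.predict(selector.transform(X_te))
tp = int(((pred == 1) & (y_te == 1)).sum()); fn = int(((pred == 0) & (y_te == 1)).sum())
tn = int(((pred == 0) & (y_te == 0)).sum()); fp = int(((pred == 1) & (y_te == 0)).sum())
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```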


Author(s): Wei Peng, Baogui Xin

Abstract Recommendations can inspire users’ potential demands, make e-commerce platforms more intelligent, and are essential for the sustainable development of e-commerce enterprises. The traditional social recommendation algorithm ignores the following fact: the preferences of users with trust relationships are not necessarily similar, and the consideration of user preference similarity should be limited to specific domains. To solve these problems, we propose a social trust and preference segmentation-based matrix factorization (SPMF) recommendation algorithm. Experimental results based on the Ciao and Epinions datasets show that the accuracy of the SPMF algorithm is significantly superior to that of some state-of-the-art recommendation algorithms. By distinguishing differences between trust relationships and preference domains, SPMF delivers better recommendations and can support commercial activities such as product marketing.
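
As a rough illustration of the direction described (trust-aware matrix factorization in which preference similarity is restricted to trusted, similar users), a single SGD update is sketched below; the actual SPMF segmentation and loss terms are not reproduced, and all parameters are assumptions.

```python
# Hedged sketch of one SGD step of a trust-regularized matrix factorization;
# not the SPMF objective itself. beta weights the social (trust) pull.
import numpy as np

def sgd_step(P, Q, u, i, r_ui, trusted, lr=0.01, reg=0.05, beta=0.1):
    err = r_ui - P[u] @ Q[i]
    social = (np.mean([P[v] for v in trusted], axis=0) - P[u]) if trusted else 0.0
    grad_pu = err * Q[i] - reg * P[u] + beta * social
    grad_qi = err * P[u] - reg * Q[i]
    P[u] += lr * grad_pu
    Q[i] += lr * grad_qi

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(4, 8))   # 4 users, 8 latent factors
Q = rng.normal(scale=0.1, size=(5, 8))   # 5 items
sgd_step(P, Q, u=0, i=2, r_ui=4.0, trusted=[1, 3])   # user 0 trusts users 1 and 3
```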


Author(s): Chen Lin, Xiaolin Shen, Si Chen, Muhua Zhu, Yanghua Xiao

The study of consumer psychology reveals two categories of consumption decision procedures: compensatory rules and non-compensatory rules. Existing recommendation models based on latent factor models assume that consumers follow compensatory rules, i.e., they evaluate an item over multiple aspects and compute a weighted and/or summed score, which is used to derive the rating or ranking of the item. However, the consumer behavior literature shows that consumers adopt non-compensatory rules more often than compensatory rules. Our main contribution in this paper is to study the unexplored area of utilizing non-compensatory rules in recommendation models. Our general assumptions are: (1) there are K universal hidden aspects; in each evaluation session, only one aspect is chosen as the prominent aspect according to user preference; (2) evaluations over prominent and non-prominent aspects are non-compensatory: evaluation is mainly based on item performance on the prominent aspect, while for non-prominent aspects the user only sets a minimal acceptable threshold. We give a conceptual model for these general assumptions and show how it can be realized in both pointwise rating prediction models and pairwise ranking prediction models. Experiments on real-world data sets validate that adopting non-compensatory rules improves recommendation performance for both rating and ranking models.
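
The following toy snippet illustrates assumption (2) only; the symbols and the hard threshold are our own illustration rather than the paper's pointwise or pairwise models.

```python
# Hedged sketch of non-compensatory scoring: an item is judged by its prominent
# aspect, provided every other aspect clears a minimal acceptable threshold.
import numpy as np

def non_compensatory_score(aspect_scores, prominent, threshold):
    others = np.delete(aspect_scores, prominent)
    return aspect_scores[prominent] if np.all(others >= threshold) else -np.inf

# Aspects: (price, quality, brand); the user's prominent aspect is quality.
print(non_compensatory_score(np.array([0.9, 0.7, 0.4]), prominent=1, threshold=0.3))
print(non_compensatory_score(np.array([0.2, 0.9, 0.9]), prominent=1, threshold=0.3))
```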


2014 · Vol 687-691 · pp. 3861-3868
Author(s): Zheng Hong Deng, Li Tao Jiao, Li Yan Liu, Shan Shan Zhao

Following the trend toward intelligent monitoring systems and building on the study of gait recognition algorithms, an intelligent monitoring system based on FPGA and DSP is designed. On the one hand, the FPGA’s flexibility and fast parallel processing avoid the problem that a fixed circuit cannot be modified after it is designed; on the other hand, full advantage is taken of the DSP’s digital signal processing capability. For feature extraction and recognition, Zernike moments are selected, and the system uses the nearest-neighbor classification method, which is mature and offers good real-time performance. Experiments show that the system achieves a high recognition rate.
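
A PC-side sketch of the recognition stage is shown below for illustration (the paper implements it on FPGA/DSP hardware); mahotas' Zernike moments and scikit-learn's 1-nearest-neighbor classifier stand in for the hand-coded feature extraction and classification, and the toy silhouettes are random placeholders.

```python
# Hedged sketch: Zernike-moment features of gait silhouettes + 1-nearest-neighbor
# classification. Toy random silhouettes stand in for real gait data.
import numpy as np
import mahotas
from sklearn.neighbors import KNeighborsClassifier

def gait_features(silhouette, radius=64, degree=8):
    """Zernike moment magnitudes of a binary gait silhouette."""
    return mahotas.features.zernike_moments(silhouette, radius, degree=degree)

rng = np.random.default_rng(0)
train_silhouettes = [(rng.random((128, 128)) > 0.5).astype(np.uint8) for _ in range(4)]
train_labels = [0, 0, 1, 1]                    # two toy subjects
query = train_silhouettes[2]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(np.stack([gait_features(s) for s in train_silhouettes]), train_labels)
print(clf.predict([gait_features(query)])[0])
```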

