Holistic Transfer to Rank for Top-N Recommendation

2021 ◽  
Vol 11 (1) ◽  
pp. 1-1
Author(s):  
Wanqi Ma ◽  
Xiaoxiao Liao ◽  
Wei Dai ◽  
Weike Pan ◽  
Zhong Ming

Recommender systems have been a valuable component in various online services such as e-commerce and entertainment. To provide an accurate top-N recommendation list of items for each target user, we have to answer a very basic question of how to model users’ feedback effectively. In this article, we focus on studying users’ explicit feedback, which is usually assumed to contain more preference information than the counterpart, i.e., implicit feedback. In particular, we follow two very recent transfer to rank algorithms by converting the original feedback to three different but related views of examinations, scores, and purchases, and then propose a novel solution called holistic transfer to rank (HoToR), which is able to address the uncertainty challenge and the inconvenience challenge in the existing works. More specifically, we take the rating scores as a weighting strategy to alleviate the uncertainty of the examinations, and we design a holistic one-stage solution to address the inconvenience of the two/three-stage training and prediction procedures in previous works. We then conduct extensive empirical studies in a direct comparison with the two closely related transfer learning algorithms and some very competitive factorization- and neighborhood-based methods on three public datasets and find that our HoToR performs significantly better than the other methods in terms of several ranking-oriented evaluation metrics.
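The rating-as-weight idea described above can be illustrated with a minimal pairwise-ranking update. This is a rough sketch of the weighting strategy, not the authors' exact HoToR model; the function name, learning rate, and update form are illustrative assumptions.

```python
import numpy as np

def weighted_bpr_step(U, V, u, i, j, w, lr=0.05, reg=0.01):
    """One rating-weighted pairwise-ranking (BPR-style) update: user u is
    assumed to prefer item i over item j, with confidence weight w.

    Scaling the gradient by w (e.g. a normalized rating score) down-weights
    uncertain examinations -- a sketch of the weighting idea in the abstract,
    not the authors' exact one-stage HoToR solution."""
    u_old = U[u].copy()
    d = V[i] - V[j]                    # item difference vector
    g = w / (1.0 + np.exp(u_old @ d))  # weighted sigmoid of the ranking margin
    U[u] += lr * (g * d - reg * u_old)
    V[i] += lr * (g * u_old - reg * V[i])
    V[j] += lr * (-g * u_old - reg * V[j])
```

Because the whole update runs in a single pass over (user, preferred item, other item, weight) tuples, there is no separate stage per feedback view, which is the "holistic one-stage" flavor the abstract describes.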

2000 ◽  
Vol 5 (1) ◽  
pp. 44-51 ◽  
Author(s):  
Peter Greasley

It has been estimated that graphology is used by over 80% of European companies as part of their personnel recruitment process. And yet, after over three decades of research into the validity of graphology as a means of assessing personality, we are left with a legacy of equivocal results. For every experiment that has provided evidence to show that graphologists are able to identify personality traits from features of handwriting, there are just as many to show that, under rigorously controlled conditions, graphologists perform no better than chance expectations. In light of this confusion, this paper takes a different approach to the subject by focusing on the rationale and modus operandi of graphology. When we take a closer look at the academic literature, we note that there is no discussion of the actual rules by which graphologists make their assessments of personality from handwriting samples. Examination of these rules reveals a practice founded upon analogy, symbolism, and metaphor in the absence of empirical studies that have established the associations between particular features of handwriting and personality traits proposed by graphologists. These rules guide both popular graphology and that practiced by professional graphologists in personnel selection.


2021 ◽  
Vol 3 (2) ◽  
pp. 66-72
Author(s):  
Riad Taufik Lazwardi ◽  
Khoirul Umam

The analysis in this study uses Google Analytics to understand user behavior on the pages of an educational website containing Calculus learning material. Are users interested in recommended articles? The answer to this question provides insight into the user's decision process and suggests how far a click is the result of an informed decision. Based on these results, it is hoped that a strategy for generating feedback from clicks will emerge, and that we can evaluate the extent to which such implicit feedback indicates relevance compared with explicit feedback collected manually. The study presented here differs in at least two ways from previous work assessing the reliability of implicit feedback. First, it aims to provide detailed insight into the user decision-making process through the use of a recommendation system with an implicit feedback feature. Second, it evaluates relative preferences derived from user behavior. This differs from previous studies, which primarily assessed absolute feedback.


The Forum ◽  
2016 ◽  
Vol 14 (2) ◽  
Author(s):  
R. Shep Melnick

Abstract: Over the past half century no judicial politics scholar has been more respected or influential than Martin Shapiro. Yet it is hard to identify a school of thought one could call "Shapiroism." Rather than offer convenient methodologies or grand theories, Shapiro provides rich empirical studies that show us how to think about the relationship between law and courts on the one hand and politics and governing on the other. Three key themes run through Shapiro's impressive oeuvre. First, rather than study courts in isolation, political scientists should view them as "one government agency among many," and seek to "integrate the judicial system in the matrix of government and politics in which it actually operates." Law professors may understand legal doctrines better than political scientists, but we know (or should know) the rest of the political system better than they do. Second, although judges inevitably make political decisions, their institutional environment leads them to act differently from other public officials. Most importantly, their legitimacy rests on their perceived impartiality within the plaintiff-defendant-judge triad. The conflict between judges' role as impartial arbiter and enforcer of the laws of the regime can never be completely resolved and places powerful constraints on their actions. Third, the best way to understand the complex relationship between courts and other elements of the regime is comparative analysis. Shapiro played a major role in resuscitating comparative law, especially in his work comparing the US and the EU. All this he did with a rare combination of thick description and crisp, jargon-free analysis, certainly a rarity in the political science of our time.


2020 ◽  
Vol 5 ◽  
pp. 21-30
Author(s):  
Oksana Chala ◽  
Lyudmyla Novikova ◽  
Larysa Chernyshova ◽  
Angelika Kalnitskaya

The problem of identifying shilling attacks, which are aimed at forming false ratings of objects in a recommender system, is considered. The purpose of such attacks is to include the goods specified by the attacking user in the recommended list of items. The recommendations obtained as a result of the attack will not correspond to customers' real preferences, which can lead to distrust of the recommender system and a drop in sales. Existing methods for detecting shilling attacks use explicit feedback from the user and are focused primarily on building patterns that describe the key characteristics of an attack. However, such patterns only partially take into account the dynamics of user interests. A method for detecting shilling attacks using implicit feedback is proposed, based on comparing temporal descriptions of user selection processes and rating processes. Models of these processes are formed using a set of weighted temporal rules that define the relationship in time between the moments when users select a given object. The method uses time-ordered input data and includes the stages of forming sets of weighted temporal rules describing the sales and rating processes, calculating a set of ratings for these processes, and forming attack indicators based on a comparison of the ratings obtained. The resulting indicators make it possible to distinguish between nuke and push attacks. The method is designed to identify discrepancies in the dynamics of purchases and ratings, even in the absence of rating values at certain time intervals. It also makes it possible to identify attempts to mask an attack, based on a comparison of the rating values with the received attack indicators. When applied iteratively, the method allows one to refine the list of profiles of potential attackers, and it can be used in conjunction with pattern-oriented approaches to identifying shilling attacks.
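The core discrepancy check, comparing purchase dynamics with rating dynamics over time, can be sketched as follows. This is a toy illustration of the comparison idea only; the paper's weighted temporal rules are far richer, and the function name, interval width, and threshold below are illustrative assumptions.

```python
from collections import Counter

def attack_indicator(purchase_times, rating_times, interval=7, threshold=3):
    """Toy discrepancy score between purchase and rating dynamics for one item.

    Bins event timestamps (in days) into fixed intervals and reports the
    largest excess of ratings over purchases in any interval: a burst of
    ratings with no matching sales is suggestive of a push attack."""
    buys = Counter(int(t // interval) for t in purchase_times)
    rates = Counter(int(t // interval) for t in rating_times)
    gap = max((rates[b] - buys.get(b, 0) for b in rates), default=0)
    return gap, gap >= threshold
```

A nuke attack would show the mirror-image pattern (ratings collapsing while purchases continue), which the same binning comparison can expose with the roles of the two series swapped.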


Numerous empirical studies demonstrate the superiority of dynamic strategies with a volatility-weighting-over-time mechanism. These strategies control the portfolio risk over time by adjusting the risk exposure according to updated volatility forecasts. Yet, to reap all the benefits promised by volatility weighting over time, the composition of the active portfolio must be revised rather frequently. Transaction costs represent a serious obstacle to benefiting from this dynamic risk control technique. In this article, we propose a modified volatility-weighting strategy that allows one to reduce dramatically the amount of trading costs. The empirical evidence shows that the advantages of the modified volatility-weighting strategy persist even in the presence of high transaction costs.
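One common way to reduce the turnover of a volatility-weighting strategy is a no-trade band around the target weight, so the portfolio only trades when the drift from target is large. The sketch below illustrates that general mechanism; it is an assumption on our part, and the article's specific modification may differ, as do the parameter values shown.

```python
def target_weight(sigma_forecast, sigma_target=0.10, max_leverage=1.5):
    """Risky-asset weight under simple volatility targeting: scale exposure
    inversely with the updated volatility forecast, capped at max_leverage."""
    return min(sigma_target / sigma_forecast, max_leverage)

def rebalance(current_w, sigma_forecast, band=0.05, sigma_target=0.10):
    """Only trade when the distance to the target weight exceeds a no-trade
    band -- this cuts transaction costs at a small cost in risk tracking."""
    w = target_weight(sigma_forecast, sigma_target)
    return w if abs(w - current_w) > band else current_w
```

With a 5-point band, small forecast wiggles leave the position untouched, while a doubling of forecast volatility still halves the exposure.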


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1475 ◽  
Author(s):  
Hongjun Wang ◽  
Zhen Yang ◽  
Yingchun Shi

As an emerging class of spatial trajectory data, mobile user trajectory data can be used to analyze individual or group behavioral characteristics, hobbies, and interests. Moreover, the information extracted from original trajectory data is widely used in smart cities, transportation planning, and anti-terrorism maintenance. In order to identify the important locations of a target user from his trajectory data, a novel division method for preprocessing trajectory data is proposed: the feature points of the original trajectory are extracted according to changes in trajectory structure, and important locations are then extracted by clustering the feature points with an improved density peak clustering algorithm. Finally, in order to predict the next location of mobile users, a multi-order fusion Markov model based on the AdaBoost algorithm is proposed: the model order k is determined adaptively, the weight coefficients of the 1st- to k-th-order models are assigned by the AdaBoost algorithm according to the importance of each order, and the resulting fused model predicts the user's next important location. The experimental results on the real user trajectory dataset GeoLife show that the prediction performance of the AdaBoost-Markov model is better than that of a multi-order fusion Markov model with equal coefficients, and that the universality and prediction performance of the AdaBoost-Markov model are better than those of the first- to third-order Markov models.
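The fusion step, blending the predictions of the 1st- to k-th-order Markov models with per-order weights, can be sketched as below. In the paper the weights come from AdaBoost according to each order's importance; here they are simply supplied as inputs, and all function names are illustrative.

```python
from collections import defaultdict, Counter

def train_markov(seq, order):
    """k-th-order Markov model: counts of the next location given the
    last `order` locations in the trajectory sequence."""
    model = defaultdict(Counter)
    for t in range(order, len(seq)):
        model[tuple(seq[t - order:t])][seq[t]] += 1
    return model

def fused_predict(history, models, alphas):
    """Blend the 1..k order predictions with weights alphas (in the paper,
    AdaBoost-derived). Each order votes with its conditional next-location
    distribution; the location with the highest fused score wins."""
    scores = Counter()
    for order, (model, a) in enumerate(zip(models, alphas), start=1):
        counts = model.get(tuple(history[-order:]))
        if counts:
            total = sum(counts.values())
            for loc, c in counts.items():
                scores[loc] += a * c / total
    return scores.most_common(1)[0][0] if scores else None
```

Falling back gracefully when a high-order context was never observed (the `if counts` guard) is what makes the fused model more universal than any single fixed-order model.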


2015 ◽  
Vol 42 (12) ◽  
pp. 1090-1105 ◽  
Author(s):  
Roslina Kamaruddin ◽  
Amir Hussin Baharuddin

Purpose – The purpose of this paper is to identify the level of good aquaculture practice (GAqP) among aquaculture farmers, and to analyse the factors influencing the level of practice and the importance of GAqP in increasing farmers' income. Design/methodology/approach – Primary data were obtained through a survey of 216 aquaculture pond fish farmers. A descriptive study was employed to identify the profile of respondents and their level of GAqP practice. The structural equation modelling (SEM) method was applied to analyse the factors influencing the level of GAqP practice and the influence of GAqP on the total income of aquaculture farmers. Findings – The results showed that pond management by brackish-water fish farmers is better than that by freshwater fish farmers: 77 per cent of them adopt GAqP at a level of 60 per cent or above, compared with only 20 per cent of freshwater farmers. Physical and human assets were revealed to be the most significant factors influencing the practice of GAqP. The results also showed that GAqP was among the significant factors contributing to increases in farmers' household income, in addition to their other livelihood assets. Originality/value – To the best of the authors' knowledge, this is the first study to employ the SEM method to analyse the relationship between GAqP, livelihood assets, and farmers' income simultaneously in Malaysia. Furthermore, since empirical studies related to GAqP are very few, the study will contribute to the development of knowledge in the field of aquaculture.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Ying Jin ◽  
Guangming Cui ◽  
Yiwen Zhang

Service-oriented architecture (SOA) is widely used, which has fueled the rapid growth of Web services and the deployment of tremendous numbers of Web services over the last decades. It has become challenging but crucial to find the proper Web services because of their increasing number. However, it is infeasible to inspect all Web services to check their quality values, since doing so would consume a lot of resources. Thus, developing effective and efficient approaches for predicting the quality values of Web services has become an important research issue. In this paper, we propose UIQPCA, a novel approach for hybrid User- and Item-based Quality Prediction with a Covering Algorithm. UIQPCA integrates information on both users and Web services on the basis of users' assessments of the quality of co-invoked Web services. After the integration, users and Web services that are similar to the target user and the target Web service are selected. Then, considering the result of the integration, UIQPCA predicts how a target user will appraise a target Web service. Broad experiments on WS-Dream, a Web service dataset that is widely used in the real world, are conducted to evaluate the reliability of UIQPCA. According to the experimental results, UIQPCA is far better than former approaches, including item-based, user-based, hybrid, and cluster-based approaches.
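The hybrid baseline that UIQPCA improves on combines a user-side and an item-side estimate of a missing quality value. A minimal mean-based version of that combination is sketched below; the blending weight `lam` is an illustrative assumption, and the covering-algorithm selection step of UIQPCA is not reproduced here.

```python
import numpy as np

def hybrid_predict(R, u, i, lam=0.5):
    """Blend user-based and item-based estimates for the missing QoS entry
    (u, i). R holds observed quality values with np.nan for missing entries:
    rows are users, columns are Web services."""
    user_est = np.nanmean(R[u])      # average quality user u has observed
    item_est = np.nanmean(R[:, i])   # average observed quality of service i
    return lam * user_est + (1 - lam) * item_est
```

Real QoS values such as response time vary strongly by user location and network, which is why approaches like UIQPCA go further and restrict both averages to similar users and similar services before blending.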


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 554
Author(s):  
Tanveer Akhlaq ◽  
Muhammad Ismail ◽  
Muhammad Qaiser Shahbaz

Variability or dispersion plays an important role in any process and provides insight into the spread of data from some central point, usually the mean. A process with less spread is preferred over a process in which values differ greatly from the mean. Various methods are available to estimate the process dispersion by using information on the variable of interest. Certain additional variables provide good insight to estimate the process dispersion. In this paper, we propose an efficient method for the estimation of process variability by using the exponential method. The properties of the proposed method were studied. We conducted simulation and empirical studies to compare the proposed method with some existing methods of estimation of variability. The results of the numerical study show that our proposed method is better than the other methods used in the study.
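One classical exponential-type form for estimating a variance with the help of an auxiliary variable is shown below. This is a standard Bahl-Tuteja-style exponential ratio-type estimator, given only as an example of the general approach the abstract describes; it is not necessarily the authors' proposed estimator, and the assumed known population variance `Sx2` of the auxiliary variable is part of the setup.

```python
import numpy as np

def exp_ratio_var_estimator(y, x, Sx2):
    """Exponential ratio-type estimator of the population variance of y,
    exploiting a correlated auxiliary variable x whose population variance
    Sx2 is known. When the sample variance of x undershoots Sx2, the sample
    variance of y is adjusted upward, and vice versa."""
    sy2 = np.var(y, ddof=1)  # sample variance of the study variable
    sx2 = np.var(x, ddof=1)  # sample variance of the auxiliary variable
    return sy2 * np.exp((Sx2 - sx2) / (Sx2 + sx2))
```

When the auxiliary sample happens to match its population variance exactly, the exponential factor is 1 and the estimator reduces to the ordinary sample variance.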


Author(s):  
Heyong Wang ◽  
Ming Hong ◽  
Jinjiong Lan

The traditional collaborative filtering model suffers from high-dimensional, sparse user rating information and ignores the user preference information contained in user reviews. To address this problem, this paper proposes a new collaborative filtering model, UL_SAM (UBCF_LDA_SIMILAR_ADD_MEAN), which integrates a topic model with a user-based collaborative filtering model. UL_SAM extracts user preference information from user reviews through the topic model and then fuses it with user rating information by a similarity-fusion method; collaborative filtering recommendations are generated from the fused information. The advantage of UL_SAM is that it enriches the information available for collaborative recommendation by integrating user preferences with user ratings. Experimental results on two public datasets demonstrate that our model significantly improves recommendation effectiveness.
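The fusion of review-derived and rating-derived similarity can be sketched as a user-based predictor whose neighbour weights blend the two signals. This is an illustration of the general idea, assuming a linear fusion with weight `alpha` and cosine similarity over LDA-style topic vectors; UL_SAM's exact fusion rule may differ, and all names are illustrative.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ulsam_style_predict(R, topics, u, i, alpha=0.5):
    """Mean-offset user-based CF prediction for R[u, i], where each
    neighbour's weight fuses rating similarity (cosine over co-rated items)
    with review similarity (cosine over per-user topic distributions).

    R: ratings matrix with np.nan for missing entries.
    topics: one topic-proportion vector per user (e.g. from LDA on reviews)."""
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == u or np.isnan(R[v, i]):
            continue
        mask = ~np.isnan(R[u]) & ~np.isnan(R[v])     # co-rated items
        r_sim = cosine(R[u, mask], R[v, mask]) if mask.sum() >= 2 else 0.0
        t_sim = cosine(topics[u], topics[v])
        w = alpha * r_sim + (1 - alpha) * t_sim      # fused similarity
        num += w * (R[v, i] - np.nanmean(R[v]))
        den += abs(w)
    base = np.nanmean(R[u])
    return (base + num / den) if den else base
```

Because the topic similarity is defined even for user pairs with no co-rated items, the fused weight remains informative exactly where sparse ratings make the traditional model fail.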

