Improving cold-start recommendations using item-based stereotypes

Author(s):  
Nourah AlRossais ◽  
Daniel Kudenko ◽  
Tommy Yuan

Abstract
Recommender systems (RSs) have become key components driving the success of e-commerce and other platforms where revenue and customer satisfaction are dependent on the user’s ability to discover desirable items in large catalogues. As the number of users and items on a platform grows, computational complexity and the sparsity problem constitute important challenges for any recommendation algorithm. In addition, the most widely studied filtering-based RSs, while effective in providing suggestions for established users and items, are known for their poor performance on the new-user and new-item (cold-start) problems. Stereotypical modelling of users and items is a promising approach to solving these problems. A stereotype represents an aggregation of the characteristics of items or users that can be used to create general user or item classes. We propose a set of methodologies for the automatic generation of stereotypes to address the cold-start problem. The novelty of the proposed approach rests on the finding that stereotypes built independently of the user-to-item ratings improve both recommendation metrics and computational performance during cold-start phases. The resulting RS can be used with any machine learning algorithm as a solver, and the performance gains due to rating-agnostic stereotypes are orthogonal to the gains obtained using more sophisticated solvers. The paper describes how such item-based stereotypes can be evaluated via a series of statistical tests prior to being used for recommendation. The proposed approach improves recommendation quality under a variety of metrics and significantly reduces the dimension of the recommendation model.
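As a concrete illustration of the idea of rating-agnostic stereotypes, the sketch below clusters items by metadata alone and scores a cold-start item through its nearest stereotype. It is a minimal Python sketch over assumed toy data (hypothetical genres, decades and ratings), not the methodology evaluated in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

# Hypothetical item metadata: (genre, decade) per catalogue item.
item_metadata = np.array([
    ["action", "1990s"],
    ["action", "2000s"],
    ["romance", "1990s"],
    ["romance", "2000s"],
    ["documentary", "2010s"],
])

# Encode the metadata and cluster items into stereotypes; no ratings are used.
enc = OneHotEncoder()
features = enc.fit_transform(item_metadata).toarray()
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
item_stereotype = kmeans.labels_              # stereotype id per item

# Known ratings, keyed by (user_id, item_index); very sparse in practice.
ratings = {(0, 0): 5.0, (0, 1): 4.0, (0, 2): 1.0, (1, 2): 5.0, (1, 3): 4.0}

def predict_cold_item(user_id, new_item_features):
    """Score a brand-new item for a user via its nearest stereotype."""
    stereotype = kmeans.predict(new_item_features.reshape(1, -1))[0]
    peers = [r for (u, i), r in ratings.items()
             if u == user_id and item_stereotype[i] == stereotype]
    # Fall back to the global mean if the user has never rated this stereotype.
    return np.mean(peers) if peers else np.mean(list(ratings.values()))

# A new action/2000s item scored for user 0, who favours the "action" stereotype.
new_item = enc.transform([["action", "2000s"]]).toarray()[0]
print(predict_cold_item(0, new_item))
```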

2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Xiaocong Ai ◽  
Georgiana Mania ◽  
Heather M. Gray ◽  
Michael Kuhn ◽  
Nicholas Styles

Abstract
Computing centres, including those used to process High-Energy Physics data and simulations, are increasingly providing significant fractions of their computing resources through hardware architectures other than x86 CPUs, with GPUs being a common alternative. GPUs can provide excellent computational performance at a good price point for tasks that can be suitably parallelized. Charged particle (track) reconstruction is a computationally expensive component of HEP data reconstruction, and thus needs to use available resources in an efficient way. In this paper, an implementation of Kalman filter-based track fitting using CUDA and running on GPUs is presented. It utilizes ACTS (A Common Tracking Software), an open-source, experiment-independent toolkit for track reconstruction. The implementation details and parallelization approach are described, along with the specific challenges of such an implementation. Detailed performance benchmarking results are discussed, which show encouraging performance gains over a CPU-based implementation for representative configurations. Finally, a perspective on the challenges and future directions for these studies is outlined. These include more complex and realistic scenarios that can be studied, and anticipated developments to software frameworks and standards which may open up possibilities for greater flexibility and improved performance.
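To show why Kalman filter track fitting maps well onto GPUs, the NumPy sketch below applies a single linear measurement update independently to many toy tracks. It only illustrates the per-track independence that a CUDA implementation can exploit; it is not ACTS code (ACTS is a C++ toolkit) and the toy state and measurement models are assumptions.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update for a single track state (x, P)."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # updated covariance
    return x_new, P_new

# Toy example: 1000 independent tracks, each with a 4D state and a 2D
# measurement. Each update touches only its own track, so the loop below maps
# naturally onto one GPU thread (or thread block) per track.
rng = np.random.default_rng(0)
H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])
R = 0.1 * np.eye(2)
states = rng.normal(size=(1000, 4))
covs = np.stack([np.eye(4)] * 1000)
meas = rng.normal(size=(1000, 2))

for i in range(len(states)):
    states[i], covs[i] = kalman_update(states[i], covs[i], meas[i], H, R)
```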


Author(s):  
Qiaoling Zhou

Purpose
English original movies play an important role in English learning and communication. To help users find the movies they need among a large number of English original movies and reviews, this paper proposes an improved deep reinforcement learning algorithm for movie recommendation. Although conventional movie recommendation algorithms have addressed the problem of information overload, they still have limitations in the case of cold start and sparse data.
Design/methodology/approach
To solve the aforementioned problems of conventional movie recommendation algorithms, this paper proposes a recommendation algorithm based on deep reinforcement learning, which uses the deep deterministic policy gradient (DDPG) algorithm to address the cold-start and sparse-data problems and uses Item2vec to transform the discrete action space into a continuous one. Meanwhile, a reward function combining cosine distance and Euclidean distance is proposed to ensure that the neural network does not converge to a local optimum prematurely.
Findings
To verify the feasibility and validity of the proposed algorithm, it was compared with the state of the art in terms of RMSE, recall rate and accuracy in experiments on the MovieLens English original movie data set. Experimental results show that the proposed algorithm is superior to the conventional algorithms on all indicators.
Originality/value
When the proposed algorithm is applied to recommend English original movies, the DDPG policy produces better recommendation results and alleviates the impact of cold start and sparse data.
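A minimal sketch of the kind of reward described above, blending cosine and Euclidean distance between the actor's action vector and an Item2vec-style item embedding. The weighting parameter alpha and the exact way the two distances are combined are assumptions; the abstract does not specify them.

```python
import numpy as np

def reward(action_vec, item_vec, alpha=0.5):
    """Blend cosine similarity and (negative) Euclidean distance into one reward."""
    cos_sim = np.dot(action_vec, item_vec) / (
        np.linalg.norm(action_vec) * np.linalg.norm(item_vec) + 1e-8)
    euclid = np.linalg.norm(action_vec - item_vec)
    # Higher cosine similarity and smaller Euclidean distance both raise the
    # reward; mixing the two discourages premature convergence to vectors
    # that satisfy only one criterion.
    return alpha * cos_sim - (1.0 - alpha) * euclid

action = np.array([0.2, 0.8, 0.1])     # output of the DDPG actor
item = np.array([0.25, 0.7, 0.05])     # Item2vec-style embedding of a movie
print(reward(action, item))
```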


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Peter Morales ◽  
Rajmonda Sulo Caceres ◽  
Tina Eliassi-Rad

Abstract
Complex networks are often either too large for full exploration, partially accessible, or partially observed. Downstream learning tasks on these incomplete networks can produce low-quality results. In addition, reducing the incompleteness of the network can be costly and nontrivial. As a result, network discovery algorithms optimized for specific downstream learning tasks given resource collection constraints are of great interest. In this paper, we formulate the task-specific network discovery problem as a sequential decision-making problem. Our downstream task is selective harvesting, the optimal collection of vertices with a particular attribute. We propose a framework, called network actor critic (NAC), which learns a policy and notion of future reward in an offline setting via a deep reinforcement learning algorithm. The NAC paradigm utilizes a task-specific network embedding to reduce the state space complexity. A detailed comparative analysis of popular network embeddings is presented with respect to their role in supporting offline planning. Furthermore, a quantitative study is presented on various synthetic and real benchmarks using NAC and several baselines. We show that offline models of reward and network discovery policies lead to significantly improved performance when compared to competitive online discovery algorithms. Finally, we outline learning regimes where planning is critical in addressing sparse and changing reward signals.
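To make the selective-harvesting setting concrete, the sketch below runs the sequential probe-and-reward loop on a toy graph, with a trivial hand-written scoring rule standing in for the learned NAC policy. The "target" attribute and the scoring heuristic are illustrative assumptions, not part of the paper.

```python
import networkx as nx

def harvest(graph, target_attr, seed, budget):
    """Probe one unexplored neighbour per step; reward = vertices with the target attribute."""
    observed, reward = {seed}, 0
    for _ in range(budget):
        frontier = {v for u in observed for v in graph.neighbors(u)} - observed
        if not frontier:
            break
        # Stand-in policy: prefer neighbours adjacent to many observed nodes.
        choice = max(frontier,
                     key=lambda v: sum(1 for u in graph.neighbors(v) if u in observed))
        observed.add(choice)
        reward += int(graph.nodes[choice].get(target_attr, False))
    return reward

G = nx.karate_club_graph()
nx.set_node_attributes(
    G, {n: (d["club"] == "Officer") for n, d in G.nodes(data=True)}, "target")
print(harvest(G, "target", seed=0, budget=10))
```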


Author(s):  
Marcelo B. Ularte

Science teachers are optimistic that every student can learn, and they hold high hopes and dreams for their students. They plan their lessons and work hard to engage their students. However, despite good intentions and best-laid plans, not all students perform well in Science classes. Students' performance is alarming to teachers: students are unable to understand the scientific issues that affect their lives in today's fast-changing world. Several past studies found Science lessons to be of low quality (American Association for the Advancement of Science, 1989). Many Science students sit passively, never being asked to make sense of the content that teachers deliver. There are many concepts and activities in Science that students ignore and fail to develop. Under the Enhanced Basic Education Curriculum (the K to 12 curriculum), students' results in periodical tests, the National Achievement Test and the National Career Assessment Examination are very low and reflect poor performance. Bilaran Science teachers are alarmed by this situation. Improving student performance must be a shared effort among Science teachers, and intervention programs must be applied in daily teaching; thus, there is a need to strengthen Science instruction. This study primarily focuses on the status of Science instruction and proposes a development plan in Science.


2021 ◽  
pp. 617-624
Author(s):  
Wenan Tan ◽  
Xin Zhou ◽  
Xiao Zhang ◽  
Xiaojuan Cai ◽  
Weinan Niu

BMJ Open ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. e025925 ◽  
Author(s):  
Christopher J McWilliams ◽  
Daniel J Lawson ◽  
Raul Santos-Rodriguez ◽  
Iain D Gilchrist ◽  
Alan Champneys ◽  
...  

Objective
The primary objective is to develop an automated method for detecting patients that are ready for discharge from intensive care.
Design
We used two datasets of routinely collected patient data to test and improve on a set of previously proposed discharge criteria.
Setting
Bristol Royal Infirmary general intensive care unit (GICU).
Patients
Two cohorts derived from historical datasets: 1870 intensive care patients from GICU in Bristol, and 7592 from Medical Information Mart for Intensive Care (MIMIC)-III.
Results
In both cohorts few successfully discharged patients met all of the discharge criteria. Both a random forest and a logistic classifier, trained using multiple-source cross-validation, demonstrated improved performance over the original criteria and generalised well between the cohorts. The classifiers showed good agreement on which features were most predictive of readiness-for-discharge, and these were generally consistent with clinical experience. By weighting the discharge criteria according to feature importance from the logistic model we showed improved performance over the original criteria, while retaining good interpretability.
Conclusions
Our findings indicate the feasibility of the proposed approach to ready-for-discharge classification, which could complement other risk models of specific adverse outcomes in a future decision support system. Avenues for improvement to produce a clinically useful tool are identified.
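A hedged sketch of the modelling approach described: fit a random forest and a logistic classifier on routinely collected physiology, then read the logistic coefficients as weights for a re-weighted discharge checklist. Feature names and data below are synthetic stand-ins, and plain k-fold cross-validation stands in for the multiple-source cross-validation used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic physiology and a synthetic "ready for discharge" label.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "heart_rate": rng.normal(85, 15, 500),
    "resp_rate": rng.normal(18, 5, 500),
    "spo2": rng.normal(96, 3, 500),
    "gcs": rng.integers(8, 16, 500),
})
y = (X["gcs"] + rng.normal(0, 2, 500) > 13).astype(int).to_numpy()

rf = RandomForestClassifier(n_estimators=200, random_state=0)
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

print("RF AUC:   ", cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())
print("Logit AUC:", cross_val_score(logit, X, y, cv=5, scoring="roc_auc").mean())

# Logistic coefficients suggest how much weight each criterion could receive
# in a re-weighted discharge checklist.
logit.fit(X, y)
print(dict(zip(X.columns, logit.named_steps["logisticregression"].coef_[0])))
```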


Author(s):  
Carles Gelada ◽  
Marc G. Bellemare

In this paper we revisit the method of off-policy corrections for reinforcement learning (COP-TD) pioneered by Hallak et al. (2017). Under this method, online updates to the value function are reweighted to avoid divergence issues typical of off-policy learning. While Hallak et al.’s solution is appealing, it cannot easily be transferred to nonlinear function approximation. First, it requires a projection step onto the probability simplex; second, even though the operator describing the expected behavior of the off-policy learning algorithm is convergent, it is not known to be a contraction mapping, and hence, may be more unstable in practice. We address these two issues by introducing a discount factor into COP-TD. We analyze the behavior of discounted COP-TD and find it better behaved from a theoretical perspective. We also propose an alternative soft normalization penalty that can be minimized online and obviates the need for an explicit projection step. We complement our analysis with an empirical evaluation of the two techniques in an off-policy setting on the game Pong from the Atari domain where we find discounted COP-TD to be better behaved in practice than the soft normalization penalty. Finally, we perform a more extensive evaluation of discounted COP-TD in 5 games of the Atari domain, where we find performance gains for our approach.
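A tabular sketch of the flavour of discounted COP-TD, as read from the abstract: a per-state ratio estimate c is learned with a discount that pulls it towards a constant, and the TD update for the value function is reweighted by that ratio. Step sizes, normalization and the exact update in the paper may differ; this is not a reference implementation.

```python
import numpy as np

n_states = 5
gamma, gamma_hat = 0.99, 0.95     # value discount and ratio discount
alpha, beta = 0.1, 0.05           # step sizes (assumed)
V = np.zeros(n_states)            # value estimate for the target policy
c = np.ones(n_states)             # estimated state-distribution ratio d_pi / d_mu

def step(s, a, r, s_next, rho):
    """One transition from the behaviour policy; rho = pi(a|s) / mu(a|s)."""
    # Discounted ratio update: gamma_hat < 1 pulls c towards 1, which is what
    # makes the discounted operator better behaved than the undiscounted one.
    c[s_next] += beta * (gamma_hat * rho * c[s] + (1.0 - gamma_hat) - c[s_next])
    # TD(0) value update reweighted by the ratio estimate at the origin state.
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * c[s] * rho * td_error
```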


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4308 ◽  
Author(s):  
Xiang Zhang ◽  
Wei Yang ◽  
Xiaolin Tang ◽  
Jie Liu

To improve the accuracy of lane detection in complex scenarios, an adaptive lane feature learning algorithm which can automatically learn the features of a lane in various scenarios is proposed. First, a two-stage learning network based on YOLO v3 (You Only Look Once, version 3) is constructed. The structural parameters of the YOLO v3 algorithm are modified to make it more suitable for lane detection. To improve the training efficiency, a method for the automatic generation of lane label images in a simple scenario is proposed, which provides label data for the training of the first-stage network. Then, an adaptive edge detection algorithm based on the Canny operator is used to relocate the lane detected by the first-stage model. Furthermore, the unrecognized lanes are shielded to avoid interference in subsequent model training. The images processed in this way are then used as label data for the training of the second-stage model. The experiment was carried out on the KITTI and Caltech datasets, and the results showed that the accuracy and speed of the second-stage model reached a high level.
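To illustrate the Canny-based relocation step, the sketch below refines a box returned by a first-stage detector using edge pixels found with median-derived thresholds. The thresholding heuristic and the function interface are assumptions for illustration, not the paper's exact adaptive scheme.

```python
import cv2
import numpy as np

def relocate_lane(image_bgr, box):
    """Refine a first-stage box (x, y, w, h) using Canny edges inside it."""
    x, y, w, h = box
    roi = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # Adaptive thresholds derived from the median intensity (common heuristic).
    med = np.median(roi)
    lower = int(max(0, 0.66 * med))
    upper = int(min(255, 1.33 * med))
    edges = cv2.Canny(roi, lower, upper)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return None                      # no edge evidence: keep the original box
    # Refined box tightly enclosing the edge pixels, in full-image coordinates.
    return (x + xs.min(), y + ys.min(), xs.max() - xs.min(), ys.max() - ys.min())
```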


2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Yue Tan ◽  
Jing Li ◽  
Yuan Li ◽  
Chunbao Liu

An approach was presented to improve the performance prediction of a marine propeller through computational fluid dynamics (CFD). After a series of computations, it was found that the passage in a former study was too narrow, introducing unnecessary radial outer-boundary effects. Hence, in this study, a fatter passage model was employed to avoid these effects: its diameter was the same as the length from the propeller to the downstream outlet and larger than in the previous study. The diameter and length of the passage were 5D and 8D, respectively. The propeller DTMB P5168 was used to evaluate the fat passage model. During the simulation, the classical RANS model (standard k-ε) and the Multiple Reference Frame (MRF) approach were employed after accounting for other factors. The computed performance results were compared with the experimental values and found to be in good agreement. The maximum errors of Kt and Kq were less than 5% and 3%, respectively, at the different advance coefficients J except 1.51, and that of η was less than 2.62%. Hence the new model obtains a more accurate performance prediction than the published literature. The circumferentially averaged velocity components were also compared with the experimental results. The axial and tangential velocity components were in good agreement with the experimental data; specifically, their errors were less than 3% when r/R was not less than 3.4. At larger values of J, the variation trends of the radial velocity were consistent with the experimental data. In conclusion, the fat passage model proposed here is applicable for obtaining highly accurate predictions.
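For reference, Kt, Kq and η above follow the standard open-water definitions; a small helper computing them from thrust T, torque Q, fluid density rho, rotation rate n, diameter D and advance speed Va is sketched below. The numerical inputs are made up for illustration, not the DTMB P5168 measurements.

```python
import math

def open_water_coefficients(T, Q, rho, n, D, Va):
    """Standard open-water propeller coefficients."""
    J = Va / (n * D)                          # advance coefficient
    Kt = T / (rho * n**2 * D**4)              # thrust coefficient
    Kq = Q / (rho * n**2 * D**5)              # torque coefficient
    eta = (J / (2.0 * math.pi)) * (Kt / Kq)   # open-water efficiency
    return J, Kt, Kq, eta

# Example with assumed values (T in N, Q in N*m, rho in kg/m^3, n in rev/s, D in m, Va in m/s).
print(open_water_coefficients(T=500.0, Q=40.0, rho=998.0, n=20.0, D=0.4027, Va=8.0))
```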

