Application and Research on Distributed Collaborative Filtering Recommendation Algorithm Based on Hadoop

2015 ◽  
Vol 713-715 ◽  
pp. 1615-1621
Author(s):  
Xiu Juan Li ◽  
He Biao Yang

Coupled with the exponential growth of data, the computational efficiency of existing recommendation algorithms has become an important issue, and the traditional collaborative filtering recommendation algorithm also suffers from data sparsity. Based on a detailed analysis, this article introduces the Hadoop platform into an improved collaborative filtering recommendation algorithm: the improved algorithm addresses the sparsity problem, while computing the recommendations in parallel with MapReduce addresses the efficiency problem. In the experiments, a comparative analysis between the Hadoop implementation and the previous implementation shows that the Hadoop platform improves the computational efficiency of the collaborative filtering recommendation algorithm on large data sets.
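
The abstract does not show the MapReduce formulation itself. Below is a minimal Hadoop Streaming sketch of one stage of an item-based collaborative filtering job: the mapper groups ratings by user, and the reducer emits co-rated item pairs that a later stage would aggregate into similarities. The `user<TAB>item<TAB>rating` input format, the staging of the pipeline, and all names are assumptions for illustration, not the authors' implementation.

```python
#!/usr/bin/env python3
"""One MapReduce stage of an item-based CF pipeline, written for Hadoop Streaming.

Assumed input: tab-separated lines "user<TAB>item<TAB>rating".
Mapper output:  user<TAB>item:rating   (one user's ratings meet in one reduce group)
Reducer output: itemA,itemB<TAB>1      (co-occurrence counts for a later similarity stage)
"""
import sys
from itertools import combinations


def run_mapper():
    for line in sys.stdin:
        user, item, rating = line.rstrip("\n").split("\t")
        print(f"{user}\t{item}:{rating}")


def run_reducer():
    current_user, items = None, []

    def flush():
        # Emit every pair of items the current user rated together.
        for (a, _), (b, _) in combinations(sorted(items), 2):
            print(f"{a},{b}\t1")

    for line in sys.stdin:                      # Hadoop delivers lines sorted by key
        user, item_rating = line.rstrip("\n").split("\t")
        item, rating = item_rating.rsplit(":", 1)
        if current_user is not None and user != current_user:
            flush()
            items = []
        current_user = user
        items.append((item, float(rating)))
    if current_user is not None:
        flush()


if __name__ == "__main__":
    run_mapper() if sys.argv[1:] == ["map"] else run_reducer()
```

The same script can be tested without a cluster via `cat ratings.tsv | python3 cf_stage.py map | sort | python3 cf_stage.py reduce`; on Hadoop it would be submitted through the standard streaming jar.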

2017 ◽  
Vol 4 ◽  
pp. 70-75
Author(s):  
Mateusz Gil

The article presents the possibilities of using the Hadoop platform to manage large data sets. The development of application performance is shown on the basis of available sources. Additionally, the article describes organizations that have succeeded on the Internet thanks to the implemented software.


2019 ◽  
Vol 12 (1) ◽  
pp. 34-40
Author(s):  
Mareeswari Venkatachalaappaswamy ◽  
Vijayan Ramaraj ◽  
Saranya Ravichandran

Background: Many modern applications use information filtering to expose users to a collection of data. In such systems, users are given a list of recommended items they might prefer, or predicted ratings for items, so that they can select the items they prefer from that list. Objective: In web service recommendation based on Quality of Service (QoS), predicting QoS values greatly helps people select appropriate web services and discover new ones. Methods: An effective technique for this is Collaborative Filtering (CF). CF greatly helps in service selection and web service recommendation. In the broader sense, it is a general form of information filtering over large data sets; in the narrower sense, it is a method of making predictions about a user's interests by collecting taste information from many users. Results: The approach is easy to build and effective for recommendation, predicting missing QoS values for users. It also addresses the scalability problem, since recommendations are based on like-minded users found with the Pearson Correlation Coefficient (PCC) or on clusters formed with K-Nearest Neighbors (KNN), rather than on the whole large data source. Conclusion: In this paper, location-aware collaborative filtering is used to recommend services. The proposed system compares its prediction outcomes and execution time with existing algorithms.
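
A minimal sketch of the PCC-based prediction step mentioned above, assuming a user-by-service QoS matrix with NaN marking unobserved values; the matrix, the neighborhood size, and the deviation-from-mean weighting are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np


def pcc(u, v):
    """Pearson correlation computed over the services both users have invoked."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def predict(qos, user, service, k=3):
    """Predict qos[user, service] from the k most similar users who observed it."""
    sims = [(pcc(qos[user], qos[v]), v)
            for v in range(qos.shape[0])
            if v != user and not np.isnan(qos[v, service])]
    top = [(s, v) for s, v in sorted(sims, reverse=True)[:k] if s > 0]
    base = np.nanmean(qos[user])
    if not top:
        return base
    num = sum(s * (qos[v, service] - np.nanmean(qos[v])) for s, v in top)
    den = sum(s for s, _ in top)
    return base + num / den


# Toy response-time matrix (rows: users, columns: services); NaN = not yet invoked.
qos = np.array([[0.4, 1.2, np.nan],
                [0.5, 1.1, 2.0],
                [0.3, 1.3, 1.8]])
print(round(predict(qos, user=0, service=2), 3))
```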


Author(s):  
Haley J Abel ◽  
Alun Thomas

We develop recent work on using graphical models for linkage disequilibrium to provide efficient programs for model fitting, phasing, and imputation of missing data in large data sets. Two important features contribute to the computational efficiency: the separation of the model fitting and phasing-imputation processes into different programs, and holding in memory only the data within a moving window of loci during model fitting. Optimal parameter values were chosen by cross-validation to maximize the probability of correctly imputing masked genotypes. The best accuracy obtained is slightly below that of the Beagle program of Browning and Browning, and our fitting program is slower. However, for large data sets, it uses less storage. For a reference set of n individuals genotyped at m markers, the time and storage required for fitting a graphical model are approximately O(nm) and O(n+m), respectively. To impute the phases and missing data on n individuals using an already fitted graphical model requires O(nm) time and O(m) storage. While the times for fitting and imputation are both O(nm), the imputation process is considerably faster; thus, once a model is estimated from a reference data set, the marginal cost of phasing and imputing further samples is very low.
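
The programs themselves are not reproduced here. A schematic sketch of the moving-window idea that keeps fitting-time memory low, assuming genotypes are stored one locus per line with one column per individual; the file layout, window and step sizes, and the fit_window placeholder are assumptions for illustration only.

```python
from collections import deque


def moving_windows(path, window=100, step=50):
    """Yield overlapping windows of loci so only O(window) loci are in memory at once."""
    buf = deque()
    with open(path) as handle:
        for line in handle:
            buf.append(line.split())        # genotypes of one locus for all n individuals
            if len(buf) == window:
                yield list(buf)
                for _ in range(step):       # slide the window forward by `step` loci
                    buf.popleft()


def fit_window(loci):
    """Placeholder for fitting the graphical model on one window of loci."""
    return {"n_loci": len(loci), "n_individuals": len(loci[0]) if loci else 0}


# for loci in moving_windows("genotypes.txt"):
#     piece = fit_window(loci)              # pieces are later combined into one model
```

With a fixed window size the fitting pass touches each of the m loci a constant number of times, consistent with the roughly O(nm) time and modest storage noted above.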


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
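
A minimal sketch of the two per-pixel operations described above, assuming the spectrum-image is held as a NumPy array of shape (80, 80, 2, 1024): two 1024-channel spectra per pixel, offset by 1 eV at 20 channels/eV (a 20-channel shift). The array layout, shift direction, and variable names are illustrative assumptions.

```python
import numpy as np

CHANNELS_PER_EV = 20
SHIFT = 1 * CHANNELS_PER_EV             # channel shift corresponding to the 1 eV offset

# Toy counts standing in for the acquired 80x80 spectrum-image (uint16, ~25 MB).
si = np.random.poisson(100, size=(80, 80, 2, 1024)).astype(np.uint16)
a = si[:, :, 0, :].astype(np.float32)   # spectrum recorded at the nominal energy
b = si[:, :, 1, :].astype(np.float32)   # spectrum recorded offset by 1 eV

# Artifact-corrected difference spectrum: subtract the offset pair directly.
difference = a - b

# "Normal" spectrum: numerically remove the 1 eV offset, then add the pair.
b_aligned = np.roll(b, -SHIFT, axis=-1)
normal = a + b_aligned
normal[..., -SHIFT:] = 0.0              # channels with no overlap after the shift

print(difference.shape, normal.dtype)   # (80, 80, 1024) float32
```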


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
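
As a rough illustration of the multivariate workflow named above, the scikit-learn sketch below standardizes per-particle elemental intensities, reduces them with PCA, and clusters the result. The synthetic data, element list, and choice of k-means are assumptions; the paper's own cluster, discriminant, and factor analyses may differ in detail.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for per-particle EDS intensities (e.g. Na, Mg, Si, S, Cl, Fe);
# the injected zeros mimic values below the detection limit.
X = rng.gamma(shape=1.0, scale=50.0, size=(5000, 6))
X[rng.random(X.shape) < 0.3] = 0.0

X_std = StandardScaler().fit_transform(X)        # put skewed variables on a common scale
pcs = PCA(n_components=3).fit_transform(X_std)   # principal components analysis
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pcs)

print(np.bincount(labels))                       # particles assigned to each cluster
```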


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the features of processing large arrays of information in distributed systems. Singular value decomposition (SVD) is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency for distributed systems were obtained using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information because of low computing performance. It is therefore proposed to use distributed systems together with singular value decomposition, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, whereas MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
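
A minimal sketch of the data-reduction step described above: a truncated singular value decomposition keeps only the k largest singular values, so nodes exchange and process a much smaller factorized representation. The matrix, the retained rank, and the single-machine NumPy setting are illustrative assumptions; the distribution of the computation over MPI or MapReduce nodes is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 400))          # stand-in for one large data block

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 20                                        # retained rank
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]      # low-rank approximation of A

stored_before = A.size
stored_after = U[:, :k].size + k + Vt[:k].size
rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"values stored: {stored_before} -> {stored_after}, relative error {rel_err:.3f}")
# Real measurement data with redundancy compresses far better than this random block.
```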


2018 ◽  
Vol 2018 (6) ◽  
pp. 38-39
Author(s):  
Austa Parker ◽  
Yan Qu ◽  
David Hokanson ◽  
Jeff Soller ◽  
Eric Dickenson ◽  
...  
