Hybrid Group Recommendation Using Modified Termite Colony Algorithm: A Context Towards Big Data

2018 ◽  
Vol 17 (02) ◽  
pp. 1850019 ◽  
Author(s):  
Arup Roy ◽  
Soumya Banerjee ◽  
Chintan Bhatt ◽  
Youakim Badr ◽  
Saurav Mallik

Since the introduction of Web 2.0, group recommendation systems have become an effective tool for consulting and recommending items according to the choices of a group of like-minded users. However, datasets populated with a large number of choices demand ever more storage, and identifying the right combination of items for a specific recommendation becomes complex. Existing group recommendation systems therefore need methods for handling large data volumes with high variety. In this paper, we propose a content-boosted modified termite colony optimisation-based rating prediction algorithm (CMTRP) for group recommendation systems. CMTRP employs a hybrid recommendation framework designed for the big data paradigm to cope with ever-growing data. The framework utilises communal ratings, which helps to overcome the scalability problem. The experimental results reveal that CMTRP yields lower rating-prediction error and higher recommendation precision than the existing algorithms.
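The abstract does not spell out CMTRP's operators, so the following is only an illustrative sketch of the general idea: a content-boosted signal is blended with a communal (group-average) rating, and the blend weight is tuned by a toy termite-colony-style population search in which candidates drift toward the best position found so far. All names and parameters here are hypothetical, not the authors' algorithm.

```python
# Toy sketch: content-boosted rating prediction with a swarm-style search
# for the blend weight. Not the paper's CMTRP; assumptions noted above.
import random

def content_estimate(item_features, liked_features, max_rating=5.0):
    """Score an item by its feature overlap with what the group liked."""
    overlap = len(item_features & liked_features)
    return max_rating * overlap / max(len(item_features), 1)

def predict_rating(communal_mean, content_score, alpha):
    """Blend communal and content signals; alpha in [0, 1]."""
    return alpha * content_score + (1 - alpha) * communal_mean

def termite_search(observed, communal_mean, content_score,
                   n_termites=20, steps=50, seed=0):
    """Population search for the alpha minimising squared prediction error."""
    rng = random.Random(seed)
    positions = [rng.random() for _ in range(n_termites)]
    def err(a):
        return (predict_rating(communal_mean, content_score, a) - observed) ** 2
    best = min(positions, key=err)
    for _ in range(steps):
        for i, p in enumerate(positions):
            # each "termite" drifts toward the current best position
            # (a pheromone-peak analogue) with a small random walk
            cand = min(max(p + 0.5 * (best - p) + rng.gauss(0, 0.05), 0.0), 1.0)
            if err(cand) < err(positions[i]):
                positions[i] = cand
        best = min(positions, key=err)
    return best

if __name__ == "__main__":
    liked = {"action", "sci-fi"}
    item = {"sci-fi", "drama"}
    c = content_estimate(item, liked)          # content-boosted signal
    alpha = termite_search(observed=4.0, communal_mean=3.5, content_score=c)
    print(alpha, predict_rating(3.5, c, alpha))
```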

Data volume has grown enormously in recent years, raising new challenges in technology and application. Data production is estimated at 2.5 exabytes (1 exabyte = 1,000,000 terabytes) per day. The main sources of data are sensors collecting climate, traffic, and flight information; social media sites (Twitter and Facebook are popular examples); and digital pictures and videos (YouTube users upload 72 hours of new video content per minute). Among these, social media has become the most prominent source of big data. Social big data arises from the combination of social media and big data, and the data involved is mostly unstructured or semi-structured. Classical approaches, techniques, tools, and frameworks for data management have become insufficient for processing this huge volume of data and cannot efficiently handle its growing production. The major challenge in mining big data is the lack of adequate approaches for analyzing massive amounts of online data (or data streams). In particular, sentiment analysis and predictive analytics have become promising areas that can place an organization at boom or at doom, depending on whether accurate decisions are delivered at the right time. The current paper provides an insight into machine learning algorithms, both supervised and unsupervised, and the traditional knowledge extraction process; the application fields of sentiment analysis; and the issues faced during data collection and cleaning. The study presents a complete picture of a recommendation system based on the sentiment analysis of events. The key motivation of the paper is to incorporate event sentiment analysis into feedback and recommendation, and to survey ongoing research in sentiment analysis and its applications.
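As a minimal sketch of the event-recommendation idea, the toy below ranks events by the net sentiment of their mentions. A tiny hand-made lexicon stands in for the supervised classifiers the survey discusses; the lexicon, event names, and mention feed are all invented for illustration.

```python
# Illustrative only: lexicon-based sentiment scoring as a stand-in for a
# trained classifier, followed by sentiment-ranked event recommendation.
POSITIVE = {"great", "amazing", "love", "fantastic", "fun"}
NEGATIVE = {"awful", "boring", "hate", "crowded", "bad"}

def sentiment(text):
    """Net sentiment of one mention: +1 per positive word, -1 per negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def recommend(mentions, top_k=2):
    """Aggregate mention-level sentiment per event; return the best events."""
    scores = {}
    for event, text in mentions:
        scores[event] = scores.get(event, 0) + sentiment(text)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

mentions = [
    ("jazz_festival", "great lineup, amazing vibe"),
    ("jazz_festival", "a bit crowded but fun"),
    ("tech_expo", "boring talks, awful food"),
    ("food_fair", "love the stalls, fantastic"),
]
print(recommend(mentions))  # events with the highest net sentiment first
```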


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Julián Monsalve-Pulido ◽  
Jose Aguilar ◽  
Edwin Montoya ◽  
Camilo Salazar

This article proposes an architecture for an intelligent and autonomous recommendation system applicable to any virtual learning environment, with the objective of efficiently recommending digital resources. The paper presents the architectural details of the intelligent and autonomous dimensions of the recommendation system, and describes a hybrid recommendation model that orchestrates and manages the available information and the specific recommendation needs in order to determine which recommendation algorithms to use. The hybrid model allows the integration of collaborative-filtering, content-based, and knowledge-based approaches. In the architecture, information is extracted from four sources: the context, the students, the course, and the digital resources, identifying variables such as individual learning styles, socioeconomic information, connection characteristics, and location. Tests were carried out on the creation of an academic course in order to analyse the intelligent and autonomous capabilities of the architecture.
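The article's actual selection logic is not reproduced here, so the following is only a hypothetical sketch of the orchestration idea: a dispatcher routes each request to the approach the available data can support, falling back from collaborative filtering (which needs rating history) to content matching, then to knowledge rules. The class names and thresholds are made up.

```python
# Hypothetical orchestrator sketch for a hybrid recommender: route by the
# information available for a student. Not the paper's architecture.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StudentContext:
    ratings: dict = field(default_factory=dict)       # resource_id -> rating
    learning_style: Optional[str] = None              # e.g. "visual"
    course_topics: list = field(default_factory=list)

def collaborative(ctx):   # placeholder strategies
    return f"CF over {len(ctx.ratings)} ratings"

def content_based(ctx):
    return f"content match on style '{ctx.learning_style}'"

def knowledge_based(ctx):
    return f"curriculum rules for topics {ctx.course_topics}"

def choose_strategy(ctx, min_ratings=5):
    """Pick the approach the available data can support."""
    if len(ctx.ratings) >= min_ratings:
        return collaborative(ctx)
    if ctx.learning_style is not None:
        return content_based(ctx)
    return knowledge_based(ctx)

new_student = StudentContext(course_topics=["algebra"])
print(choose_strategy(new_student))   # cold start -> knowledge rules
```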


2019 ◽  
Vol 26 ◽  
pp. 03002
Author(s):  
Tilei Gao ◽  
Ming Yang ◽  
Rong Jiang ◽  
Yu Li ◽  
Yao Yao

The emergence of big data has had a great impact on traditional computing models; the distributed computing framework represented by MapReduce has become an important solution to this problem. This paper studies the principle and framework of MapReduce programming in depth and, on that basis, compares the time consumption of the distributed MapReduce framework against a traditional computing model through concrete programming experiments. The experiments show that MapReduce has great advantages at large data volumes.
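For readers unfamiliar with the programming model being benchmarked, the canonical word-count example below simulates the three MapReduce phases in a single process: map emits (key, 1) pairs, shuffle groups values by key, and reduce sums each group. A real MapReduce framework distributes these phases across a cluster; this sketch only illustrates the model.

```python
# In-process simulation of the MapReduce flow: map -> shuffle -> reduce.
from collections import defaultdict

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word, 1                    # emit one pair per occurrence

def shuffle_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)            # group emitted values by key
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big volume", "data volume grows"]
print(reduce_phase(shuffle_phase(map_phase(docs))))
# {'big': 2, 'data': 2, 'volume': 2, 'grows': 1}
```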


Author(s):  
Guohua Xiong

To ensure the efficient development of Internet of Vehicles technology, big data compression for vehicular networks was studied. First, RFID technology in vehicular networking, big data technology in vehicular networking, and RFID path data compression in the Internet of Vehicles were introduced. Then, RFID path data compression verification experiments were performed. The results showed that when the data volume was relatively small, there was no obvious difference in compression ratio between a fixed threshold and a changing threshold. However, as the amount of data gradually increased, the compression ratio under a changing threshold became slightly higher than under a fixed threshold. Therefore, RFID path big data processing is feasible, and the compression technology is efficient.
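The abstract does not describe the compression scheme itself, so the following is a purely hypothetical sketch of threshold-based path compression: consecutive readings from the same RFID reader are dropped when they arrive within a time threshold, and a fixed threshold is contrasted with one that widens as the trace grows. The thresholds and trace data are invented.

```python
# Hypothetical sketch only: threshold-based RFID path compression, comparing
# a fixed threshold with a volume-adaptive one. Not the paper's scheme.
def compress(readings, threshold):
    """readings: list of (reader_id, timestamp). Keep a reading only if the
    reader changed or enough time passed since the last kept reading."""
    kept = []
    for reader, ts in readings:
        if not kept or kept[-1][0] != reader or ts - kept[-1][1] >= threshold:
            kept.append((reader, ts))
    return kept

def adaptive_threshold(readings, base=2.0):
    # assumption: widen the threshold as the trace grows, so denser traces
    # compress more aggressively
    return base * max(1, len(readings) // 100)

trace = [("R1", t) for t in range(0, 50)] + [("R2", t) for t in range(50, 300)]
fixed = compress(trace, threshold=2.0)
adapt = compress(trace, threshold=adaptive_threshold(trace))
print(len(trace) / len(fixed), len(trace) / len(adapt))  # compression ratios
```

With this toy trace the adaptive threshold yields the higher ratio at larger volume, qualitatively matching the reported result.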


2015 ◽  
Vol 1 (1) ◽  
Author(s):  
Shaofeng Zhang ◽  
Wei Xiong ◽  
Wancheng Ni ◽  
Xin Li

Abstract Background: This paper presents a case study on 100Credit, an Internet credit service provider in China. 100Credit began as an IT company specializing in e-commerce recommendation before entering the credit rating business. The company makes use of Big Data on multiple aspects of individuals' online activities to infer their potential credit risk. Methods: Based on 100Credit's business practices, this paper summarizes four aspects of the value of Big Data in Internet credit services. Results: 1) value from large data volume, which provides access to more borrowers; 2) value from prediction correctness in reducing lenders' operational cost; 3) value from the variety of services catering to different needs of lenders; and 4) value from information protection to sustain credit service businesses. Conclusion: The paper also discusses the opportunities and challenges of Big Data-based credit risk analysis, which need to be addressed in future research and practice.


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Liangshun Wu ◽  
Hengjin Cai

Big data is a term for very large data sets. Digital equipment produces vast numbers of images every day, so the need for image encryption is increasingly pronounced, for example, to safeguard the privacy of patients' medical imaging data on cloud disks. There is an obvious tension between security and privacy on the one hand and the widespread use of big data on the other. Today, the most important engine for providing confidentiality is encryption. However, block ciphers are not well suited to huge data in real-time environments because of the strong correlation among pixels and high redundancy; stream ciphers are considered a lightweight solution for ciphering high-definition images (i.e., high data volumes). Since a stream cipher's encryption algorithm is deterministic, the only option is to make the keystream "look random." This article proves that the probability that the digit 1 appears in the midsection of a Zeckendorf representation is constant, which can be exploited to generate pseudorandom numbers. A novel stream cipher key generator (ZPKG) is then proposed to encrypt high-definition images that need transferring. The experimental results show that the proposed stream ciphering method, whose keystream satisfies Golomb's randomness postulates, is faster than RC4 and LFSR-based generators with comparable hardware consumption, and that the method is highly key sensitive and shows good resistance against noise attacks and statistical attacks.
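To make the Zeckendorf connection concrete, the sketch below (not the authors' ZPKG) derives one keystream bit per step from the middle digit of the Zeckendorf representation of a keyed counter and XORs it with the data. The key offset and the choice of which digit to sample are assumptions made for illustration.

```python
# Illustrative Zeckendorf-fed stream cipher sketch; not the paper's ZPKG.
def zeckendorf_digits(n):
    """Greedy Zeckendorf expansion of n >= 1: digits over the Fibonacci base,
    most significant first, with no two adjacent 1s."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs[:-1]):
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    return digits

def keystream(key, length):
    """One bit per step: the middle Zeckendorf digit of key + counter."""
    for i in range(length):
        d = zeckendorf_digits(key + i)
        yield d[len(d) // 2]

def xor_cipher(data, key):
    """Stream-cipher both ways: encryption and decryption are the same op."""
    return bytes(b ^ (bit * 0xFF) for b, bit in zip(data, keystream(key, len(data))))

msg = b"pixels"
enc = xor_cipher(msg, key=123456)
print(xor_cipher(enc, key=123456))  # b'pixels'
```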

