Prediction of E-Commerce Product Ratings Based on Similar Users

2021 ◽  
Vol 10 (6) ◽  
pp. 25347-25351
Author(s):  
Shashank Pola ◽  
Venkatesh M ◽  
Ravi Chandra Reddy K ◽  
Indira Priyadarsini P

With the rapid growth of the Internet and the continuing expansion of e-commerce, both the quantity and the variety of products on offer are increasing quickly, and shoppers browsing merchant websites often need a long time to find the products they want. On e-commerce sites, product ratings are one of the key ingredients of a good user experience, and many methods help users find the goods they are looking for; recommending similar items based on product ratings is among the most popular. In general, however, such suggestions are not personalized to a particular user, and browsing a large number of products drives customers away through information overload when no suitable recommendations are offered. Traditional algorithms also suffer from data sparsity and cold-start problems. To overcome these problems, the proposed method records each user's ratings of the products as a vector and uses cosine similarity as the measure to identify the similarity between those vectors; the ratings of the nearest similar vectors are then used to estimate the unknown ratings. This approach overcomes the problems above while achieving high efficiency and accuracy in a simple manner.
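
A minimal sketch of this kind of neighbour-based estimation, assuming a small user-by-item rating matrix with zeros marking unknown ratings (the matrix, the variable names, and the simple weighted average are illustrative, not the authors' exact formulation):

```python
import numpy as np

# Rows are users, columns are products; 0 marks an unknown rating.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two rating vectors (0 if either is all zeros)."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return u @ v / denom if denom else 0.0

def predict(R, user, item, k=2):
    """Estimate R[user, item] from the k most similar users who rated the item."""
    sims = np.array([cosine(R[user], R[other]) for other in range(R.shape[0])])
    rated = np.where(R[:, item] > 0)[0]        # users with a known rating
    rated = rated[rated != user]               # exclude the target user
    neighbours = rated[np.argsort(sims[rated])[::-1]][:k]
    weights = sims[neighbours]
    if weights.sum() <= 0:
        return R[R > 0].mean()                 # fallback: global mean rating
    return weights @ R[neighbours, item] / weights.sum()

print(predict(R, user=1, item=1))  # estimate user 1's rating for product 1
```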

Author(s):  
Rosihan Ari Yuana ◽  
Dewanto Harjunowibowo ◽  
Nugraha Arif Karyanta ◽  
Cucuk Wawan Budiyanto

The Wartegg test is a widely adopted personality evaluation instrument known for its drawing-completion technique. Employee personality data, for instance, can be sorted by closeness to an expected character profile, and the Wartegg test plays a significant role in such data similarity filtering. Despite the potential contribution of this personal-character identification technique, practical guidance is rarely found in the literature. This paper demonstrates the use of the cosine-similarity method for data similarity filtering on the Wartegg personality test. The method used is a case study in which several Wartegg test subjects are selected. Using the value of each character aspect derived from the Wartegg test, the cosine-similarity value is calculated against the expected/ideal character aspects, and the test subjects are then filtered by their similarity to those ideal aspects. A technical procedure for performing the method is also presented. To assess its effectiveness, sample scores for each character aspect from five test subjects, together with the ideal scores of the expected characters, are given. Using FWAT, a graphical representation of the test subjects' characters against the ideal characters is generated, and this graph is compared with the results obtained from the cosine-similarity method. The results show that cosine similarity is effectively applied to Wartegg test data similarity filtering.
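
A short sketch of the filtering step, assuming each subject is reduced to a vector of aspect scores (the subjects, aspect counts, and ideal profile below are made-up illustrations, not the paper's data):

```python
import numpy as np

# Hypothetical aspect scores per subject (e.g. emotion, imagination, intellect, activity).
subjects = {
    "S1": np.array([4.0, 3.0, 5.0, 2.0]),
    "S2": np.array([2.0, 5.0, 3.0, 4.0]),
    "S3": np.array([5.0, 4.0, 4.0, 3.0]),
}
ideal = np.array([5.0, 4.0, 5.0, 3.0])  # expected/ideal character profile

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Rank subjects by similarity to the ideal profile; the closest come first.
ranked = sorted(subjects.items(), key=lambda kv: cosine(kv[1], ideal), reverse=True)
for name, scores in ranked:
    print(name, round(cosine(scores, ideal), 3))
```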


2021 ◽  
Vol 2021 ◽  
pp. 1-24
Author(s):  
Youness Mourtaji ◽  
Mohammed Bouhorma ◽  
Daniyal Alghazzawi ◽  
Ghadah Aldabbagh ◽  
Abdullah Alghamdi

Phishing has become a common threat: many individuals and webpages have been attacked by phishers, whose usual purpose is to obtain users' personal information for illegitimate use. Given the growing intensity of the issue, this study aims to develop a new hybrid rule-based solution incorporating six different algorithm models to detect and control phishing efficiently. The study uses 37 features extracted from methods including the blacklist method, the lexical and host method, the content method, the identity method, the identity similarity method, the visual similarity method, and the behavioral method. Furthermore, a comparative analysis was undertaken between machine learning models, including CART (decision trees), SVM (support vector machines), and KNN (K-nearest neighbors), and deep learning models such as MLP (multilayer perceptron) and CNN (convolutional neural networks). The findings indicate that the method is effective in analysing URLs from different viewpoints, supporting the validity of the model. The highest accuracy was obtained by the deep learning models, at 97.945% for the CNN and 93.216% for the MLP. The study therefore concludes that, given its high efficiency and accuracy, the new hybrid solution should be implemented in practice to reduce phishing activity.
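
A hedged sketch of this kind of model comparison with scikit-learn, where `X` and `y` are random placeholders standing in for the 37 extracted features and their labels (not the paper's dataset or code; the CNN would be built separately in a deep learning framework and is omitted here):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 37))       # placeholder for the 37 URL features
y = rng.integers(0, 2, 1000)     # placeholder labels: 1 = phishing, 0 = legitimate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "CART": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```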


2018 ◽  
Vol 7 (2.3) ◽  
pp. 746
Author(s):  
B Vishnu Priya ◽  
Dr JKR Sastry

The ability to transfer huge amounts of content to a target is a present-day user requirement that is poorly served by Internet-based protocols because of the static nature of the Internet. Software-defined networking (SDN) provides the flexibility to implement any architecture, since the control and data planes are separated, and information-/content-centric networks (ICN/CCN) can be implemented using SDN; the requirement for massive content delivery can be achieved through ICN/CCN. This paper presents a comparative analysis of the methods used for building information-centric networking (ICN/CCN) over software-defined networks, and also identifies the areas in which further research needs to be undertaken.
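
To make the architectural shift concrete, here is a toy sketch of name-based forwarding, the core idea by which ICN replaces IP destination lookup (purely illustrative; real ICN-over-SDN designs map such rules onto OpenFlow-style flow tables in the SDN controller):

```python
# Toy content-name FIB: longest-prefix match on hierarchical content names
# instead of IP addresses (illustrative only; not a real ICN/SDN implementation).
fib = {
    "/videos": "port1",
    "/videos/movies": "port2",
    "/docs": "port3",
}

def forward(content_name):
    """Return the output port for the longest matching name prefix, or None."""
    best_prefix, best_port = "", None
    for prefix, port in fib.items():
        if content_name.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, best_port = prefix, port
    return best_port

print(forward("/videos/movies/intro.mp4"))  # -> port2
```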


2009 ◽  
pp. 152-153
Author(s):  
Rana Tassabehji ◽  
James Wallace ◽  
Anastasios Tsoularis

The Internet has reached a stage of maturity at which its innovative adoption and implementation can be a source of competitive advantage. Supply chains are one of the areas that have reportedly benefited greatly, achieving optimisation through low-cost, high-efficiency use of the Internet that almost seamlessly links global supply chains into e-supply networks. The field is still in its academic and practical infancy, and more empirical research is needed to build a robust theoretical foundation that advances our knowledge and understanding. Here, the main aims and objectives are to highlight the importance of information flows in e-supply chains/networks and the need for their standardisation to facilitate integration, legality, security, and efficiency of operations. This chapter contributes to the field by recommending a three-stage framework that enables this process through the development of standardised Internet technology platforms (e-platforms), integration requirements, and a classification of information flows.


Author(s):  
Jeanne Chen ◽  
Tung-Shou Chen ◽  
Meng-Wen Cheng

Great advancements in Web technology have resulted in increased activity on the Internet. Users from all walks of life, from e-commerce traders and professionals to ordinary users, have become very dependent on the Internet for all sorts of data transfers, whether important data transactions or friendly exchanges. Data security measures on the Internet are therefore essential, and steganography plays a very important role in protecting the huge amount of data that passes through the Internet daily.
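
One classic steganographic technique is least-significant-bit (LSB) embedding, sketched below on a raw byte buffer (a minimal illustration of the general idea, not a scheme described in this chapter):

```python
def embed(cover: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite the lowest bit
    return stego

def extract(stego: bytearray, n_bytes: int) -> bytes:
    """Recover n_bytes by reading the low bit of each stego byte."""
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

cover = bytearray(range(256))     # stand-in for image pixel data
stego = embed(cover, b"hi")
print(extract(stego, 2))          # -> b'hi'
```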


2013 ◽  
Vol 347-350 ◽  
pp. 2993-2997
Author(s):  
Yue Li ◽  
Ran Liu

With the popularity and development of the network, support from high-performance computing technology becomes increasingly important, as the Internet's huge information storage and convenient information retrieval attract ever more people to join the ranks of netizens. This paper therefore proposes an information processing platform based on high-performance data mining, intended to improve the Internet's capacity for intelligent parallel processing of mass information and to integrate the development of information storage, management, integration, intelligent processing, data mining, and utilization within one system. The aim is to provide a reference and guidance for the technical implementation of a high-performance, high-efficiency platform for processing massive network information: on the one hand, the paper analyzes the key technologies needed to implement the platform; on the other, it briefly introduces the implementation of the RDIDC.


2010 ◽  
Vol 439-440 ◽  
pp. 859-864
Author(s):  
Ming Zhang ◽  
Jin Qiu Yang

In the past few years, there has been tremendous interest in peer-to-peer (P2P) content delivery. This communication paradigm dramatically increases the traffic over inter-ISP links; in particular, BitTorrent (BT), the most popular P2P application, generates a huge amount of traffic on the Internet. BitTorrent's performance is limited by the fact that typical Internet users have much lower upload bandwidth than download bandwidth, so the overall average download speed of a BitTorrent-like file distribution system is bottlenecked by the much lower upload capacity. We propose to utilize idle users' spare upload capacity to improve the download speed well beyond what a conventional BitTorrent network can achieve. In this paper, we design a system that is completely compatible with existing clients conforming to the BitTorrent protocol, analyze its steady-state performance, and present simulation results.
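
The upload bottleneck is easy to see in steady state: every byte downloaded must be uploaded by someone, so the average download rate is capped by aggregate upload capacity per leecher. A back-of-the-envelope sketch with made-up capacities:

```python
# Steady state: total download throughput == total upload throughput,
# so average download speed = aggregate upload capacity / number of leechers.
n_leechers = 100
leecher_upload = 1.0    # Mbit/s each (typical asymmetric link; made-up figure)
seed_upload = 50.0      # Mbit/s aggregate from seeds (made-up figure)

avg_download = (n_leechers * leecher_upload + seed_upload) / n_leechers
print(avg_download)     # 1.5 Mbit/s, far below typical download capacity

# Recruiting idle peers' spare upload capacity raises the cap directly:
idle_upload = 200.0     # Mbit/s contributed by idle users (made-up figure)
print((n_leechers * leecher_upload + seed_upload + idle_upload) / n_leechers)  # 3.5
```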


2015 ◽  
Vol 742 ◽  
pp. 721-725
Author(s):  
Xiao Qing Zhou ◽  
Jia Xiu Sun ◽  
Xing Xian Luo

With the fast development and deep application of the Internet, the problem of storing mass image data stands out: traditional storage frameworks suffer from low management efficiency, low storage capacity, and high cost. The appearance of Hadoop provides a new approach; however, Hadoop itself is not suited to handling small files. This paper puts forward a storage framework for mass image files based on Hadoop, and relieves the NameNode's internal memory bottleneck caused by excessive small files through a classification algorithm in a preprocessing module and the introduction of an efficient first-level index mechanism. Testing shows that the system is safe, easy to maintain, and extends well, so it achieves a good effect.
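
The usual cure for HDFS's small-file problem is to pack many small files into one large container and keep a lightweight index, so the NameNode tracks one large file instead of thousands of tiny ones. A minimal local sketch of that idea (illustrative only; the paper's classification and index mechanism are more elaborate):

```python
import os

def pack(paths, container_path):
    """Concatenate small files into one container; return {name: (offset, length)}."""
    index = {}
    with open(container_path, "wb") as out:
        offset = 0
        for path in paths:
            data = open(path, "rb").read()
            index[os.path.basename(path)] = (offset, len(data))
            out.write(data)
            offset += len(data)
    return index

def read(container_path, index, name):
    """Random-access one packed file via its (offset, length) index entry."""
    offset, length = index[name]
    with open(container_path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Usage: pack two tiny files, then read one back.
for name, body in [("a.img", b"AAAA"), ("b.img", b"BB")]:
    open(name, "wb").write(body)
index = pack(["a.img", "b.img"], "packed.bin")
print(read("packed.bin", index, "b.img"))  # -> b'BB'
```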


Nowadays there is a great deal of news on the Internet, leaving readers overloaded with information and unsure which news matters most to them. In the digital era, especially in Indonesia, data in Bahasa Indonesia is generated very fast, amounting to big data; by mining it, we can collect insight into what a reader has already read. This paper proposes a new model that processes Bahasa Indonesia news, using the TF-IDF method to extract article features and cosine similarity between articles to rank new, unseen articles for recommendation according to the reader's preferences. We can thus filter the stream of information and highlight the articles a reader is most likely to read, based on preferences collected implicitly from the articles they have read, including the scroll depth they reached, and so serve news personalized to what they love to read.
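
A compact sketch of this TF-IDF pipeline with scikit-learn, assuming the reader's history and the candidate pool are plain-text articles (the toy documents below are placeholders, not the paper's corpus):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Articles the reader has already read (toy Bahasa Indonesia snippets).
history = ["berita ekonomi pasar saham", "pasar saham naik hari ini"]
# Unseen candidate articles to rank.
candidates = ["saham teknologi turun", "resep masakan ayam", "ekonomi pasar membaik"]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(history + candidates)

# Reader profile: mean TF-IDF vector of the reading history.
profile = np.asarray(tfidf[: len(history)].mean(axis=0))

scores = cosine_similarity(profile, tfidf[len(history):])[0]
for article, score in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(round(float(score), 3), article)
```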


2019 ◽  
Vol 8 (1) ◽  
pp. 27-35
Author(s):  
Jans Hendry ◽  
Aditya Rachman ◽  
Dodi Zulherman

In this study, a system was developed to help detect the accuracy of Koran recitation of Surah Al-Kautsar, based on the correctness of the number and pronunciation of words in one complete surah. The system depends heavily on accurate word segmentation based on the signal envelope. Mel Frequency Cepstrum Coefficients (MFCC) were used for feature extraction, while the cosine similarity method was used to detect reading accuracy. Of 60 recordings, 30 were used for training and the rest for testing; each set of 30 contained 15 correct readings and 15 incorrect ones. Measured word for word, the system achieved 100% recall and 98.96% precision on the training word data, and 100% recall and 99.65% precision on the test word data. For complete readings of the surah, 15 correct readings and 14 incorrect readings were recognized correctly.
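
A minimal sketch of the matching step, assuming librosa for MFCC extraction and a stored reference recording per word (the file names, time-averaged pooling, and threshold are illustrative; the paper's segmentation and scoring are more involved):

```python
import numpy as np
import librosa

def mfcc_vector(path, n_mfcc=13):
    """Load a word recording and pool its MFCC frames into one feature vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)                                # time-averaged vector

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical files: a reference word and a segmented word from a test recitation.
ref = mfcc_vector("reference_word.wav")
test = mfcc_vector("test_word.wav")
print("correct" if cosine(ref, test) > 0.9 else "incorrect")  # illustrative threshold
```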

