Deep Neural Network and Boosting Based Hybrid Quality Ranking for e-Commerce Product Search

2021 ◽  
Vol 5 (3) ◽  
pp. 35
Author(s):  
Mourad Jbene ◽  
Smail Tigani ◽  
Saadane Rachid ◽  
Abdellah Chehri

In the age of information overload, customers are overwhelmed by the number of products available for sale. Search engines try to overcome this issue by filtering items relevant to the users’ queries. Traditional search engines rely on exact matches between terms in the query and product meta-data. Recently, deep learning-based approaches have attracted more attention by outperforming traditional methods in many circumstances. In this work, we leverage embeddings to address the challenging task of optimizing product search engines in e-commerce. We propose an e-commerce product search engine based on a similarity metric that operates on top of query and product embeddings. Two pre-trained word embedding models were tested: the first representing a category of models that generate fixed embeddings, and the second representing a newer category of models that generate context-aware embeddings. Furthermore, a re-ranking step was performed by feeding a list of quality indicators that reflect the utility of the product to the customer as inputs to well-known ranking methods. To demonstrate the reliability of the approach, the Amazon reviews dataset was used for experimentation. The results demonstrated the effectiveness of context-aware embeddings in retrieving relevant products and of the quality indicators in ranking high-quality products.
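The retrieval step described above can be illustrated with a minimal sketch: rank products by cosine similarity between a query embedding and pre-computed product embeddings. The toy 3-dimensional vectors and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_products(query_emb: np.ndarray, product_embs: list) -> list:
    """Return product indices sorted by descending similarity to the query."""
    scores = [cosine_similarity(query_emb, p) for p in product_embs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Toy example: in practice the vectors would come from a pre-trained
# (fixed or context-aware) embedding model, not hand-written values.
query = np.array([1.0, 0.0, 1.0])
products = [np.array([1.0, 0.1, 0.9]),   # nearly parallel to the query
            np.array([0.0, 1.0, 0.0])]   # orthogonal to the query
print(rank_products(query, products))    # [0, 1]
```

In the paper's pipeline this similarity-based candidate list would then be re-ranked using product quality indicators fed into a learned ranking method.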

2017 ◽  
Vol 29 (5) ◽  
pp. 1004-1016 ◽  
Author(s):  
Damir Vandic ◽  
Steven Aanen ◽  
Flavius Frasincar ◽  
Uzay Kaymak

NCICCNDA ◽  
2018 ◽  
Author(s):  
Bhagyashree S ◽  
Bindu S ◽  
Meghana K ◽  
Nisha H N ◽  
Manjunath S

2021 ◽  
Vol 11 (15) ◽  
pp. 7063
Author(s):  
Esmaeel Rezaee ◽  
Ali Mohammad Saghiri ◽  
Agostino Forestiero

With the increasing growth of different types of data, search engines have become an essential tool on the Internet. Every day, billions of queries are run through a handful of search engines, raising privacy violations and monopoly problems. The blockchain, a trending technology applied in various fields including banking, IoT, and education, can be a beneficial alternative. Blockchain-based search engines, unlike monopolistic ones, have no centralized control. With a blockchain-based search system, no company can lay claim to users’ data or access search history and other related information; all these data are encrypted and stored on a blockchain. Valuing users’ searches and paying them in return is another advantage of a blockchain-based search engine. Additionally, in smart environments, a trending research field, blockchain-based search engines can provide context-aware and privacy-preserving search results. According to our research, few efforts have been made to develop blockchain-based search, and those that exist are generally early-stage studies and a few white papers. To the best of our knowledge, no research article has been published in this regard thus far. In this paper, a survey on blockchain-based search engines is provided. Additionally, by describing its advantages, we argue that the blockchain is an essential paradigm for the search ecosystem.


2020 ◽  
Author(s):  
Diandre de Paula ◽  
Daniel Saraiva ◽  
Natália Romeiro ◽  
Nuno Garcia ◽  
Valderi Leithardt

With the growth of ubiquitous computing, context-aware applications are increasingly emerging, and these applications demonstrate the impact that context has on the adaptation process. From the context, it is possible to adapt the application according to the requirements and needs of its users. Therefore, the quality of the context information must be guaranteed so that the application does not undergo an incorrect or unexpected adaptation process. But, like any data, context information is subject to inaccuracy and/or uncertainty, so Quality of Context (QoC) plays a key role in ensuring the quality of context information and optimizing the adaptation process. Guaranteeing the Quality of Context requires designing a quality model whose central function is to evaluate the context information. Thus, it is necessary to ensure that the parameters and quality indicators to be used and evaluated are the most appropriate for a given type of application. This paper proposes a context quality model for the UbiPri middleware, defining its quality indicators to ensure proper functioning of the adaptation process when granting access to ubiquitous environments.
Keywords: QoC, Model, Context-Aware, Data, Privacy


2020 ◽  
Author(s):  
Muhammad Afzal ◽  
Fakhare Alam ◽  
Khalid Mahmood Malik ◽  
Ghaus M Malik

BACKGROUND Automatic text summarization (ATS) enables users to retrieve meaningful evidence from the big data of biomedical repositories to make complex clinical decisions. Deep neural and recurrent networks outperform traditional machine-learning techniques in areas of natural language processing and computer vision; however, they have yet to be explored in the ATS domain, particularly for medical text summarization. OBJECTIVE Traditional approaches to ATS for biomedical text suffer from fundamental issues such as an inability to capture clinical context, quality of evidence, and purpose-driven selection of passages for the summary. We aimed to circumvent these limitations by achieving precise, succinct, and coherent information extraction from credible published biomedical resources, and by constructing a simplified summary containing the most informative content that can offer a review tailored to clinical needs. METHODS In our proposed approach, we introduce a novel framework, termed Biomed-Summarizer, that provides quality-aware Patient/Problem, Intervention, Comparison, and Outcome (PICO)-based intelligent and context-enabled summarization of biomedical text. Biomed-Summarizer integrates a prognosis quality recognition model with a clinical context–aware model to locate text sequences in the body of a biomedical article for use in the final summary. First, we developed a deep neural network binary classifier for quality recognition to acquire scientifically sound studies and filter out others. Second, we developed a bidirectional long short-term memory recurrent neural network as a clinical context–aware classifier, which was trained on semantically enriched features generated using a word-embedding tokenizer for identification of meaningful sentences representing PICO text sequences. 
Third, we calculated the similarity between query and PICO text sequences using Jaccard similarity with semantic enrichments, where the semantic enrichments are obtained using medical ontologies. Last, we generated a representative summary from the high-scoring PICO sequences aggregated by study type, publication credibility, and freshness score. RESULTS Evaluation of the prognosis quality recognition model using a large dataset of biomedical literature related to intracranial aneurysm showed an accuracy of 95.41% (2562/2686) in recognizing quality articles. The clinical context–aware multiclass classifier outperformed the traditional machine-learning algorithms, including support vector machine, gradient boosted tree, linear regression, K-nearest neighbor, and naïve Bayes, by achieving 93% (16127/17341) accuracy for classifying five categories: aim, population, intervention, results, and outcome. The semantic similarity algorithm achieved a significant Pearson correlation coefficient of 0.61 (0-1 scale) on the well-known BIOSSES dataset (with 100 sentence pairs) after semantic enrichment, representing an improvement of 8.9% over baseline Jaccard similarity. Finally, we found a highly positive correlation among the evaluations performed by three domain experts across different metrics, suggesting that the automated summarization is satisfactory. CONCLUSIONS By employing the proposed method Biomed-Summarizer, high accuracy in ATS was achieved, enabling seamless curation of research evidence from the biomedical literature for use in clinical decision-making.
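The third step, Jaccard similarity with semantic enrichment, can be sketched as follows. The synonym map below is a hypothetical stand-in for a medical-ontology lookup, and the function names are illustrative, not the paper's actual code.

```python
def jaccard(a: set, b: set) -> float:
    """Plain Jaccard similarity between two token sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def enrich(tokens: set, synonyms: dict) -> set:
    """Expand a token set with ontology synonyms before comparing."""
    expanded = set(tokens)
    for t in tokens:
        expanded |= set(synonyms.get(t, []))
    return expanded

# Hypothetical synonym map standing in for a medical-ontology lookup
SYNONYMS = {"aneurysm": ["ia", "intracranial aneurysm"],
            "rupture": ["ruptured"]}

query = {"aneurysm", "rupture", "risk"}
pico  = {"ia", "ruptured", "risk", "factors"}

print(jaccard(query, pico))                 # 1/6: only "risk" overlaps
print(jaccard(enrich(query, SYNONYMS),
              enrich(pico, SYNONYMS)))      # 3/7: enrichment adds overlap
```

The enrichment step is what lets lexically different but semantically equivalent terms ("aneurysm" vs. "ia") count toward the overlap, which is the mechanism behind the reported improvement over baseline Jaccard similarity.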


2020 ◽  
pp. 624-650
Author(s):  
Luis Terán

With the introduction of Web 2.0, which turns users into content generators, finding relevant information is even more complex. To tackle this problem of information overload, a number of different techniques have been introduced, including search engines, the Semantic Web, and recommender systems, among others. The use of recommender systems for e-Government is a research topic intended to improve the interaction among public administrations, citizens, and the private sector by reducing information overload on e-Government services. In this chapter, the use of recommender systems for eParticipation is presented, along with a brief description of the eGovernment framework used and the participation levels proposed to enhance engagement. The highest level of participation is known as eEmpowerment, where decision-making is placed in the hands of citizens. Finally, a set of examples for the different eParticipation types is presented to illustrate the use of recommender systems.


Author(s):  
Xin Li ◽  
Guang Rong ◽  
Michelle Carter ◽  
Jason Bennett Thatcher

With the growth of product search engines such as pricegrabber.com, web vendors have many more casual visitors. This research examines how web vendors may foster “swift trust” as a means to convert casual visitors into paying customers. We examine whether perceptions of a website’s appearance features (normality, social presence, and third-party links) and functionality features (security, privacy, effort expectancy, and performance expectancy) positively relate to swift trust in a web vendor. Using a quasi-experimental research design, we empirically test the proposed relationships. Based on an analysis of 224 respondents, we found that appearance and functionality features explained 61% of the variance in swift trust. The paper concludes with a discussion of findings and implications.

