Search Engine-inspired Ranking Algorithm for Trading Networks

Author(s):  
Andri Mirzal

Ranking algorithms based on the link structure of a network are well-known methods used by web search engines to improve search quality. The most famous are PageRank and HITS. PageRank uses the probability that a random surfer visits a page as that page's score. HITS, instead of producing a single score, proposes two: an authority score, which describes the degree of popularity of a page, and a hub score, which describes the quality of the hyperlinks on a page. In this paper, we show the differences between the WWW network and trading networks, and use these differences to create a ranking algorithm for trading networks. We test the proposed method with international trading data from the United Nations. The similarity measures between the vectors of the proposed algorithm and the vector of a standard measure give promising results.
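The two link-analysis scores described above can be sketched with short power-iteration routines. The toy graph, damping factor d = 0.85, and iteration count below are illustrative choices, not values from the paper:

```python
# Minimal power-iteration sketches of PageRank and HITS on a toy
# directed graph given as adjacency lists.

def pagerank(out_links, d=0.85, iters=50):
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}  # teleportation term
        for v, outs in out_links.items():
            if outs:
                share = d * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += d * rank[v] / n
        rank = new
    return rank

def hits(out_links, iters=50):
    nodes = list(out_links)
    auth = {v: 1.0 for v in nodes}
    hub = {v: 1.0 for v in nodes}
    for _ in range(iters):
        # authority score: sum of hub scores of pages linking in
        auth = {v: sum(hub[u] for u in nodes if v in out_links[u]) for v in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5
        auth = {v: a / norm for v, a in auth.items()}
        # hub score: sum of authority scores of pages linked to
        hub = {v: sum(auth[w] for w in out_links[v]) for v in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5
        hub = {v: h / norm for v, h in hub.items()}
    return auth, hub

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pr = pagerank(graph)
auth, hub = hits(graph)
```

On this toy graph, page C receives the most in-links, so it ends up with both the highest PageRank and the highest authority score.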

2018 ◽  
Vol 52 (3) ◽  
pp. 329-350 ◽  
Author(s):  
Abhishek Kumar Singh ◽  
Naresh Kumar Nagwani ◽  
Sudhakar Pandey

Purpose – With the recent high volume of users and user-generated content on Community Question Answering (CQA) sites, the quality of answers provided by users has become a serious concern. Finding expert users is one way to address this problem: it aims to identify suitable users (answerers) who can provide high-quality, relevant answers. The purpose of this paper is to find expert users for newly posted questions on CQA sites.

Design/methodology/approach – A new algorithm, RANKuser, is proposed for identifying the expert users of CQA sites. The proposed RANKuser algorithm consists of three major stages. In the first stage, a folksonomy relation between users, tags, and queries is established; user profile attributes, namely reputation, tags, and badges, are also incorporated into the folksonomy. In the second stage, expertise scores of the users are calculated based on reputation, badges, and tags. Finally, in the third stage, expert users are identified by extracting the top N users by expertise score.

Findings – With the help of the proposed ranking algorithm, expert users are identified for newly posted questions. The proposed user ranking algorithm (RANKuser) is compared with existing ranking algorithms, namely ML-KNN, rankSVM, LDA, STM, CQARank, and an EV-based model, using performance parameters such as hamming loss, accuracy, average precision, one-error, F-measure, and normalized discounted cumulative gain. The proposed ranking method is also compared to the original ranking of CQA sites using the paired t-test. The experimental results demonstrate the effectiveness of the proposed RANKuser algorithm in comparison with the existing ranking algorithms.

Originality/value – This paper proposes and implements a new algorithm for expert user identification in CQA sites. By utilizing the folksonomy of CQA sites and user profile information, the algorithm identifies experts.
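The three-stage idea behind scoring and extracting top-N experts can be sketched as follows. The normalised weighted sum, the weights, and the tag-overlap measure are illustrative assumptions; the paper's exact scoring formula may differ:

```python
# Hedged sketch: score each user from profile attributes
# (reputation, badges, tags) and take the top N as experts.

def expertise_scores(users, question_tags, weights=(0.5, 0.2, 0.3)):
    w_rep, w_badge, w_tag = weights
    max_rep = max(u["reputation"] for u in users.values()) or 1
    max_badge = max(u["badges"] for u in users.values()) or 1
    scores = {}
    for name, u in users.items():
        # fraction of the question's tags the user is active in
        tag_match = len(set(u["tags"]) & set(question_tags)) / max(len(question_tags), 1)
        scores[name] = (w_rep * u["reputation"] / max_rep
                        + w_badge * u["badges"] / max_badge
                        + w_tag * tag_match)
    return scores

def top_n_experts(users, question_tags, n=2):
    scores = expertise_scores(users, question_tags)
    return sorted(scores, key=scores.get, reverse=True)[:n]

# hypothetical user profiles for illustration
users = {
    "alice": {"reputation": 9000, "badges": 40, "tags": ["python", "pandas"]},
    "bob":   {"reputation": 1200, "badges": 5,  "tags": ["java"]},
    "carol": {"reputation": 4000, "badges": 25, "tags": ["python"]},
}
experts = top_n_experts(users, ["python", "pandas"])
```

For a new question tagged `python, pandas`, the two users with matching tags and strong profiles are returned ahead of the off-topic user.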


2021 ◽  
Author(s):  
Xiangyi Chen

Text, link, and usage information are the most commonly used sources in the ranking algorithm of a web search engine. In this thesis, we argue that the quality of a web page, such as the performance of its page delivery (e.g. reliability and response time), should also play an important role in ranking, especially for users with a slow Internet connection or mobile users. Based on this principle, if two pages are equally relevant to a query, the one with higher delivery quality (e.g. a faster response) should be ranked higher. We define several important Quality of Service (QoS) attributes and explain how we rank web pages based on them. In addition, we have tested and compared different algorithms for aggregating these QoS attributes. The experimental results show that the proposed algorithms promote pages with higher delivery quality to higher positions in the result list, which improves users' overall experience with the search engine; the QoS-based re-ranking algorithm consistently performs best.
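The re-ranking principle can be sketched by blending a relevance score with a delivery-quality score. The weighted-average aggregation, the attribute names, and the threshold values are illustrative assumptions; the thesis itself compares several aggregation functions:

```python
# Hedged sketch of QoS-aware re-ranking: combine a page's relevance
# score with delivery-quality attributes (response time, reliability).

def qos_score(response_ms, reliability, worst_ms=2000.0):
    # Faster responses and higher reliability (0..1) score closer to 1.
    speed = max(0.0, 1.0 - response_ms / worst_ms)
    return 0.5 * speed + 0.5 * reliability

def rerank(results, alpha=0.7):
    # alpha weights relevance; (1 - alpha) weights delivery quality.
    def combined(r):
        return alpha * r["relevance"] + (1 - alpha) * qos_score(
            r["response_ms"], r["reliability"])
    return sorted(results, key=combined, reverse=True)

# two equally relevant pages: the faster, more reliable one should win
results = [
    {"url": "a.example", "relevance": 0.80, "response_ms": 1500, "reliability": 0.90},
    {"url": "b.example", "relevance": 0.80, "response_ms": 200,  "reliability": 0.99},
]
ranked = rerank(results)
```

With equal relevance, the tie is broken by delivery quality, which is exactly the behaviour the thesis argues for.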


2020 ◽  
Vol 6 ◽  
pp. e310
Author(s):  
Ivica Slavkov ◽  
Matej Petković ◽  
Pierre Geurts ◽  
Dragi Kocev ◽  
Sašo Džeroski

In this article, we propose a method for evaluating feature ranking algorithms. A feature ranking algorithm estimates the importance of descriptive features when predicting the target variable, and the proposed method evaluates the correctness of these importance values by computing the error measures of two chains of predictive models. The models in the first chain are built on nested sets of top-ranked features, while the models in the other chain are built on nested sets of bottom-ranked features. We investigate which predictive models are appropriate for building these chains, showing empirically that the proposed method gives meaningful results and can detect differences in feature ranking quality. This is first demonstrated on synthetic data, and then on several real-world classification benchmark problems.
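The two-chain evaluation idea can be sketched on synthetic data: build one chain of models on nested top-ranked feature sets and another on nested bottom-ranked sets, then compare their errors. The leave-one-out 1-nearest-neighbour model and the synthetic data below are illustrative stand-ins for the predictive models studied in the article:

```python
# Sketch: if the ranking is good, the top-ranked chain should reach
# low error with few features, while the bottom-ranked chain lags.
import random

def one_nn_error(X, y, features):
    # leave-one-out 1-NN error restricted to the given feature subset
    errors = 0
    for i in range(len(X)):
        best, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in features)
            if d < best_d:
                best, best_d = j, d
        errors += y[best] != y[i]
    return errors / len(X)

def chain_errors(X, y, ranking):
    top = [one_nn_error(X, y, ranking[:k]) for k in range(1, len(ranking) + 1)]
    bottom = [one_nn_error(X, y, ranking[-k:]) for k in range(1, len(ranking) + 1)]
    return top, bottom

random.seed(0)
# feature 0 carries the class signal; features 1 and 2 are pure noise
y = [i % 2 for i in range(40)]
X = [[yi + random.gauss(0, 0.2), random.gauss(0, 1), random.gauss(0, 1)]
     for yi in y]
top, bottom = chain_errors(X, y, ranking=[0, 1, 2])
```

The model on the single best-ranked feature has far lower error than the model on the single worst-ranked feature, and the two chains meet once the full feature set is used.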


2020 ◽  
Vol 4 (2) ◽  
pp. 14-25 ◽  
Author(s):  
Sandeep Suri ◽  
Arushi Gupta ◽  
Kapil Sharma

With the evolution of technology, huge amounts of data are being generated, and extracting the necessary information from such large volumes is a significantly complex process. The web largely contains raw data, and a mining process can convert this data into information. Whenever a user submits a query to a particular web search engine, results are produced in response, based on the relevance of the documents gathered by web information retrieval tools. These results are obtained through the calculations of carefully designed ranking algorithms. Well-known search engines such as Google each have their own way of computing page rank, so different results are obtained on different search engines for the same query, because the method for deciding the importance of sites differs from one algorithm to another. In this research, we attempt to analyze well-known page ranking algorithms on the basis of their strengths and shortcomings. This paper sheds light on some of the most popular ranking algorithms and attempts to find a better arrangement that can reduce the time users spend looking through the list of sites.


Author(s):  
Mark Newman

This chapter discusses search processes on networks. It begins with web search, including crawlers and web ranking algorithms such as PageRank. Search in distributed databases such as peer-to-peer networks is also discussed, including simple breadth-first-search style algorithms and more advanced “supernode” approaches. Finally, network navigation is discussed at some length, motivated by consideration of Milgram's letter-passing experiment. Kleinberg's variant of the small-world model is introduced, and it is shown that efficient navigation is possible only for certain values of the model parameters. Similar results are also derived for the hierarchical model of Watts et al.
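The breadth-first style of peer-to-peer search mentioned above can be sketched as a flood outward from the querying node until some node holding the requested item is reached. The toy network and item placement are illustrative:

```python
# Simple breadth-first network search: visit neighbours level by level
# until a node holding the target item is found.
from collections import deque

def bfs_search(neighbors, items, start, target):
    # Returns (node holding target, hop distance), or (None, -1).
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if target in items.get(node, ()):
            return node, dist
        for nb in neighbors[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, dist + 1))
    return None, -1

network = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
items = {"D": ["song.mp3"]}
holder, hops = bfs_search(network, items, "A", "song.mp3")
```

Because BFS explores by hop count, the item is found at the minimum distance from the querying node; this exhaustive flooding is exactly the cost that the “supernode” approaches discussed in the chapter try to reduce.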


2021 ◽  
pp. 146531252110216
Author(s):  
Annabelle Carter ◽  
Susan Stokes

Objective: To identify the number of companies providing Do-It-Yourself (DIY) orthodontics and to explore the information available on the websites of DIY brace providers operating in the UK. Design: Web search and review of websites providing DIY braces. Setting: Leeds, UK. Methods: A web search was completed in November 2020 and April 2021 covering all companies providing DIY braces for UK consumers. Each website was evaluated and the following data were collected: name; year started operating; costs; process; involvement of a dental professional; average ‘treatment’ length; retention; consent process; information on risks and benefits; aligner material; social media presence; age suitability; and consumer ratings on Trustpilot. The quality of website information was assessed with the DISCERN tool. Results: Seven DIY orthodontic companies were operating in the UK. The websites reviewed revealed the following: product costs were in the range of £799–£1599; quoted ‘treatment’ lengths were in the range of 4–12 months; Trustpilot reviews were in the range of 1.6–4.8 stars; and the age suitability claimed by websites ranged from 12 to 18 years. The quality of content regarding risks varied across websites, and there was limited information regarding the involvement of a dental professional. Website information quality scored ‘poor’ or ‘very poor’ on the DISCERN scale. Conclusions: There has been an increase in the number of DIY orthodontic companies operating in the UK over the last three years. There is a need to determine whether these products constitute dental treatment in their own right. If so, it is crucial to ensure they are regulated appropriately, with adequate information available to satisfy informed consent and greater transparency over dental professional involvement, in order to safeguard the public.


2017 ◽  
Vol 17 (1) ◽  
Author(s):  
Dermot Leahy ◽  
Catia Montagna

We bridge the organisational economics and industrial economics literatures on the vertical boundaries of the firm by contextualising the transaction-cost approach to the make-or-buy decision within an oligopolistic market structure. Firms invest in the quality of the intermediate input, resulting in the endogenous determination of the intermediate's price and of the marginal production cost of the final good. We highlight new strategic incentives to outsource and/or vertically integrate and show how these incentives can result in asymmetric modes of operation, investment, and costs. We apply our model to a number of different international trading setups.


2021 ◽  
Author(s):  
Xuan Thao Nguyen ◽  
Shuo Yan Chou

Intuitionistic fuzzy sets (IFSs), which include membership and non-membership functions, have many applications in managing uncertain information. Similarity measures of IFSs have been proposed to represent the similarity between different types of sensitive fuzzy information. However, some existing similarity measures do not meet the axioms of similarity, and in some cases they cannot be applied appropriately. In this study, we propose some novel similarity measures of IFSs constructed by combining the exponential function of the membership functions with the negative function of the non-membership functions. We also propose a new entropy measure as a stepping stone to calculating the weights of the criteria in the proposed multi-criteria decision-making (MCDM) model. The similarity measures are used to rank alternatives in the model. Finally, we use this MCDM model to evaluate the quality of software projects.
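The construction described above can be illustrated with a measure that combines an exponential of the membership difference with a linear (negative-style) term for the non-membership difference. The exact formula below is an assumption for illustration, not the paper's measure; it merely satisfies the basic similarity axioms (symmetry, s(A, A) = 1, values in (0, 1]):

```python
# Illustrative similarity between two intuitionistic fuzzy sets, given
# as lists of (membership, non-membership) pairs over the same
# universe, with mu + nu <= 1 for each element.
import math

def ifs_similarity(A, B):
    total = 0.0
    for (mu_a, nu_a), (mu_b, nu_b) in zip(A, B):
        # exponential term for membership, negative-function term for
        # non-membership, averaged per element
        total += 0.5 * (math.exp(-abs(mu_a - mu_b)) + (1 - abs(nu_a - nu_b)))
    return total / len(A)

A = [(0.6, 0.3), (0.2, 0.7)]
B = [(0.5, 0.4), (0.3, 0.6)]
identical = ifs_similarity(A, A)
sim = ifs_similarity(A, B)
```

An IFS compared with itself scores exactly 1, and the measure is symmetric in its arguments, which are the minimal axioms the paper requires of a well-behaved similarity.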

