What does Google recommend when you want to compare insurance offerings?

2019 ◽  
Vol 71 (3) ◽  
pp. 310-324
Author(s):  
Dirk Lewandowski ◽  
Sebastian Sünkler

Purpose – The purpose of this paper is to describe a new method that improves the analysis of search engine results by considering the provider level as well as the domain level. This approach is tested by conducting a study using queries on the topic of insurance comparisons.
Design/methodology/approach – The authors conducted an empirical study that analyses the results of search queries aimed at comparing insurance companies. The authors used a self-developed software system that automatically queries commercial search engines and automatically extracts the content of the returned result pages for further data analysis. The data analysis was carried out using the KNIME Analytics Platform.
Findings – Google’s top search results are served by only a few providers that frequently appear in these results. The authors show that some providers operate several domains on the same topic and that these domains appear for the same queries in the result lists.
Research limitations/implications – The authors demonstrate the feasibility of the approach and draw conclusions for further investigations from the empirical study. However, the study is a limited use case based on a limited number of search queries.
Originality/value – The proposed method allows large-scale analysis of the composition of the top results from commercial search engines and enables the use of valid empirical data to determine what users actually see on the search engine result pages.
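The provider-level aggregation described in this abstract can be sketched as follows. The domain-to-provider mapping and the result URLs below are hypothetical stand-ins; in the study such a mapping would come from manual research into who operates each domain:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical mapping from domains to the providers operating them.
# Two of the domains belong to the same provider, illustrating the
# paper's point that one provider can occupy several result slots.
DOMAIN_TO_PROVIDER = {
    "check24.de": "Check24",
    "versicherungscheck24.de": "Check24",  # second domain, same provider
    "verivox.de": "Verivox",
}

def provider_counts(result_urls):
    """Aggregate search results on the provider level rather than the
    domain level, so several domains run by one provider count together."""
    counts = Counter()
    for url in result_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        counts[DOMAIN_TO_PROVIDER.get(domain, domain)] += 1
    return counts

results = [
    "https://www.check24.de/kfz-versicherung/",
    "https://www.versicherungscheck24.de/vergleich/",
    "https://www.verivox.de/kfz-versicherung/",
]
print(provider_counts(results))  # Check24 counted twice at provider level
```

On the domain level these three URLs look like three distinct sources; on the provider level only two remain.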

2021 ◽  
pp. 089443932110068
Author(s):  
Aleksandra Urman ◽  
Mykola Makhortykh ◽  
Roberto Ulloa

We examine how six search engines filter and rank information in relation to queries on the 2020 U.S. presidential primary elections under default (that is, nonpersonalized) conditions. For that, we utilize an algorithmic auditing methodology that uses virtual agents to conduct a large-scale analysis of algorithmic information curation in a controlled environment. Specifically, we look at the text search results for the queries “us elections,” “donald trump,” “joe biden” and “bernie sanders” on Google, Baidu, Bing, DuckDuckGo, Yahoo and Yandex during the 2020 primaries. Our findings indicate substantial differences in the search results between search engines and multiple discrepancies within the results generated for different agents using the same search engine. This highlights that whether users see certain information is partly decided by chance because of the inherent randomization of search results. We also find that some search engines prioritize different categories of information sources with respect to specific candidates. These observations demonstrate that algorithmic curation of political information can create information inequalities between search engine users even under nonpersonalized conditions. Such inequalities are particularly troubling considering that search results are highly trusted by the public and, as demonstrated by previous research, can shift the opinions of undecided voters.
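One simple way to quantify discrepancies between the results returned to different agents, as described in this abstract, is a set-overlap measure such as the Jaccard index over the retrieved domains. The domain lists below are hypothetical, and the study's own analysis may use different measures:

```python
def jaccard(results_a, results_b):
    """Jaccard overlap of two agents' result sets: 1.0 means identical
    sets of domains, 0.0 means no shared results at all."""
    set_a, set_b = set(results_a), set(results_b)
    return len(set_a & set_b) / len(set_a | set_b)

# Top results two virtual agents might see for the same query.
agent_a = ["cnn.com", "nytimes.com", "foxnews.com", "wikipedia.org"]
agent_b = ["cnn.com", "wikipedia.org", "breitbart.com", "nytimes.com"]

print(jaccard(agent_a, agent_b))  # 3 shared of 5 distinct domains -> 0.6
```

An overlap well below 1.0 for agents issuing the same query at the same time is exactly the kind of within-engine discrepancy the abstract reports.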


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yi-Hsi Lee ◽  
Ming-Hua Hsieh ◽  
Weiyu Kuo ◽  
Chenghsien Jason Tsai

Purpose – It is quite possible that financial institutions, including life insurance companies, will encounter turbulent situations such as the COVID-19 pandemic before their policies mature. Constructing models that can generate scenarios for major assets and cover abrupt changes in financial markets is thus essential for financial institutions’ risk management.
Design/methodology/approach – The key issues in such modeling include how to manage the large number of risk factors involved, how to model the dynamics of chosen or derived factors and how to incorporate relations among these factors. The authors propose the orthogonal ARMA–GARCH (autoregressive moving-average–generalized autoregressive conditional heteroskedasticity) approach to tackle these issues. The constructed economic scenario generation (ESG) models pass backtests covering the period from the beginning of 2018 to the end of May 2020, which includes the turbulence caused by COVID-19.
Findings – The backtesting covering the turbulent period of COVID-19, along with fan charts and comparisons of simulated and historical statistics, validates the approach.
Originality/value – This paper is the first that attempts to generate complex long-term economic scenarios for a large-scale portfolio from its large-dimensional covariance matrix estimated by the orthogonal ARMA–GARCH model.
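The core idea of the orthogonal approach, rotating correlated risk-factor returns into uncorrelated principal components so that each component can be fitted with a univariate ARMA-GARCH model, can be sketched on toy data. The factor series below are synthetic, and the paper's actual estimation details are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy correlated return series for five risk factors (rows = days).
returns = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 5))

# Rotate the correlated factor returns into the eigenbasis of their
# sample covariance matrix; the resulting components are uncorrelated.
centered = returns - returns.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
components = centered @ eigvecs  # orthogonal (uncorrelated) series

# The components' covariance matrix is diagonal up to numerical error,
# so each column can be modelled with a *univariate* ARMA-GARCH fit,
# sidestepping a joint multivariate estimation over all factors.
comp_cov = np.cov(components, rowvar=False)
off_diag = comp_cov - np.diag(np.diag(comp_cov))
print(np.max(np.abs(off_diag)))  # numerically zero
```

Simulated component paths can then be rotated back with the same eigenvectors to recover scenarios for the original risk factors.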


2019 ◽  
Vol 71 (1) ◽  
pp. 54-71 ◽  
Author(s):  
Artur Strzelecki

Purpose – The purpose of this paper is to clarify how many removal requests are made, how often, and who makes these requests, as well as which websites are reported to search engines so they can be removed from the search results.
Design/methodology/approach – The paper undertakes a deep analysis of more than 3.2bn pages removed from Google’s search results at the request of reporting organizations from 2011 to 2018 and over 460m pages removed from Bing’s search results at the request of reporting organizations from 2015 to 2017. The paper focuses on pages that belong to the .pl country-code top-level domain (ccTLD).
Findings – Although the number of requests to remove data from search results has been growing year on year, fewer URLs have been reported in recent years. Some of the requests are, however, unjustified and are rejected by the teams representing the search engines. In terms of reporting copyright violations, one company in particular stands out (AudioLock.Net), accounting for 28.1 percent of all reports sent to Google (the top ten companies combined were responsible for 61.3 percent of the total number of reports).
Research limitations/implications – As not every request can be published, the study is based only on what is publicly available. Also, the data assigned to Poland is based only on the ccTLD domain name (.pl); other domain extensions used by Polish internet users were not considered.
Originality/value – This is the first global analysis of data from the transparency reports published by search engine companies, as prior research has been based on specific notices.
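Concentration figures like the 28.1 and 61.3 percent shares reported above can be computed directly from per-reporter request counts. The counts below are hypothetical, chosen only to make the arithmetic visible:

```python
# Hypothetical number of reported URLs per reporting organization.
reports = {
    "AudioLock.Net": 562,
    "RightsCo": 300,
    "MediaGuard": 138,
    "Other A": 600,
    "Other B": 400,
}

def share(names, reports):
    """Share of all reports accounted for by the given reporters."""
    total = sum(reports.values())
    return sum(reports[n] for n in names) / total

print(share(["AudioLock.Net"], reports))  # 562 / 2000 = 0.281
```

The top-ten figure in the abstract is the same computation with the ten largest reporters passed as `names`.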


Author(s):  
Adan Ortiz-Cordova ◽  
Bernard J. Jansen

In this research study, the authors investigate the association between external searching, which is searching on a web search engine, and internal searching, which is searching on a website. They classify 295,571 external-to-internal searches, where each search is composed of a search engine query submitted to a web search engine followed by one or more subsequent queries submitted to a commercial website by the same user. The authors examine 891,453 queries from all searches, of which 295,571 were external search queries and 595,882 were internal search queries. They algorithmically classify all queries into states, cluster the searching episodes into major searching configurations, and identify the most commonly occurring search patterns for external, internal, and external-to-internal searching episodes. The research implications of this study are that external sessions and internal sessions must be considered as part of a continuous search episode and that online businesses can leverage external search information to more effectively target potential consumers.
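Once queries are labelled with states, finding the most common searching configurations reduces to counting episode patterns. The state labels and episodes below are hypothetical simplifications of the paper's algorithmic classification:

```python
from collections import Counter

# Hypothetical state-labelled searching episodes: "E" marks an external
# (search engine) query, "I" an internal (website) query.
episodes = [
    ("E", "I"),
    ("E", "I", "I"),
    ("E", "I"),
    ("I", "I"),
]

def top_patterns(episodes, n=2):
    """Most commonly occurring searching configurations."""
    return Counter(episodes).most_common(n)

print(top_patterns(episodes))  # ('E', 'I') occurs twice
```

Treating the external query and the subsequent internal queries as one tuple is what makes the episode, rather than the single session, the unit of analysis.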


2011 ◽  
Vol 1 (4) ◽  
pp. 64-74
Author(s):  
Anastasios A. Economides ◽  
Antonia Kontaratou

Web 2.0 applications have been increasingly recognized as important information sources for consumers, including in the domain of tourism. At the center of travelers’ interest is the use of these applications to compare and choose hotels for their accommodation at various tourism destinations. It is important to investigate the issues related to the presence of hotels on some of the most dominant tourism search engines and to the prices that they present. This paper compares the search engines and determines whether the cheapest and the most complete one can be identified. The paper focuses on analyzing the hotel prices presented on the hotels’ official websites and on the following eight tourism search engines: Booking.com, Expedia.com, Hotelclub.com, Hotels.com, Orbitz.com, Priceline.com, Travelocity.com, and Venere.com. The data analysis, using descriptive statistics, showed that only 23% of the hotels examined are found on all the search engines. Furthermore, the price analysis showed that there are differences among the search engines. Although some search engines statistically give lower prices, no single search engine always gives the lowest price for every hotel.
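The two comparisons above, coverage across engines and the cheapest engine per hotel, can be sketched with a small price grid. The hotel names, engines and prices below are illustrative, not the paper's data:

```python
# Hypothetical price grid: hotel -> engine -> price (None = not listed).
prices = {
    "Hotel Alpha": {"Booking.com": 95.0, "Expedia.com": 92.0, "Hotels.com": None},
    "Hotel Beta": {"Booking.com": 120.0, "Expedia.com": None, "Hotels.com": 115.0},
}

def cheapest_engine(hotel):
    """Engine offering the lowest listed price for a given hotel."""
    listed = {e: p for e, p in prices[hotel].items() if p is not None}
    return min(listed, key=listed.get)

def found_on_all(hotel):
    """Whether the hotel is listed on every engine examined."""
    return all(p is not None for p in prices[hotel].values())

print(cheapest_engine("Hotel Alpha"))        # Expedia.com
print(sum(found_on_all(h) for h in prices))  # hotels listed on all engines
```

Note how the cheapest engine differs per hotel here, mirroring the finding that no single engine always gives the lowest price.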


2019 ◽  
Vol 1 (1) ◽  
pp. 82-95
Author(s):  
Ning Ma ◽  
Can Li ◽  
Yang Zuo

Purpose – Forest insurance is a popular way to reduce the loss from forest disasters, so it is necessary to actively involve stakeholders. In the multi-agent simulation model, the government, insurance companies and forest farmers participate as the three main stakeholders. The purpose of this paper is mainly to simulate the behavior of forest farmers under different environmental variables in order to find the important factors affecting the coverage of forest insurance, so as to improve the ability of forest farmers to resist risks in the face of disasters.
Design/methodology/approach – In the simulation process, the decision-making rule for a forest farmer’s purchasing behavior is a binary selection chain, which is created at random. Forest farmer agents who adapt to the environment remain; those who do not are eliminated. The eliminated agents renew their behavior selection chains by learning others’ successful behavior based on a genetic algorithm. The multi-agent model is implemented in Java on the Eclipse platform.
Findings – Simulation experiments adjusting the insurance premium, insurance subsidy and forest area were carried out. According to the results, the conclusions and suggestions are as follows: at present, government subsidies are necessary for the implementation of forest insurance; in the future, with the expansion of the insured forest area and the upgrading and large-scale operation of forest farms, forest farmers will be more willing to join the forest insurance program, and the implementation of forest insurance will then no longer require government subsidies for forest insurance premiums.
Originality/value – This paper explores the impact of three important factors on the implementation of forest insurance.
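The genetic-algorithm renewal of a binary selection chain described above can be sketched as follows. The chain length, single-point crossover and mutation rate are illustrative assumptions rather than the paper's exact operators (the paper's own model is written in Java):

```python
import random

random.seed(42)
CHAIN_LEN = 8  # length of a farmer's binary behavior-selection chain

def crossover(parent_a, parent_b):
    """Single-point crossover of two surviving farmers' chains."""
    point = random.randrange(1, CHAIN_LEN)
    return parent_a[:point] + parent_b[point:]

def mutate(chain, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chain]

def renew(survivor_chains):
    """An eliminated farmer learns from two surviving (adapted) farmers
    by recombining and mutating their behavior selection chains."""
    parent_a, parent_b = random.sample(survivor_chains, 2)
    return mutate(crossover(parent_a, parent_b))

survivors = [[1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 1, 0, 0, 1]]
print(renew(survivors))  # a new 8-bit behavior selection chain
```

Repeating this renewal each simulated period is what lets the agent population adapt to changes in premiums, subsidies and forest area.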


2019 ◽  
Vol 37 (1) ◽  
pp. 173-184 ◽  
Author(s):  
Aabid Hussain ◽  
Sumeer Gul ◽  
Tariq Ahmad Shah ◽  
Sheikh Shueb

Purpose – The purpose of this study is to explore the retrieval effectiveness of three image search engines (ISEs) – Google Images, Yahoo Image Search and Picsearch – in terms of their image retrieval capability. It is an effort to carry out a Cranfield experiment to learn how efficient the commercial giants of image search are and how efficient an image-specific search engine is.
Design/methodology/approach – The keyword search feature of the three ISEs – Google Images, Yahoo Image Search and Picsearch – was used to run searches with the keyword captions of photos as query terms. Ten selected images acted as a testbed for the study, and images were searched in accordance with the features of the testbed. The features looked for included size (1200 × 800), image format (JPEG/JPG) and the rank of the original image retrieved by the ISEs under study. To gauge overall retrieval effectiveness against the set standards, only the first 50 result hits were checked. The retrieval efficiency of the selected ISEs was examined with respect to precision and relative recall.
Findings – Yahoo Image Search outscores Google Images and Picsearch in terms of both precision and relative recall. Regarding the other criteria – image size, image format and image rank in search results – Google Images is ahead of the others.
Research limitations/implications – The study only takes into consideration the basic image search feature, i.e. text-based search.
Practical implications – The study implies that image search engines should focus on relevant descriptions. The study evaluated text-based image retrieval facilities and thereby offers users a choice of the best among the available ISEs for their use.
Originality/value – The study provides an insight into the effectiveness of the three ISEs and is one of the few studies to gauge the retrieval effectiveness of ISEs. It also produced key findings that are important for all ISE users and researchers and for the web image search industry. The findings will also prove useful for search engine companies seeking to improve their services.
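Precision and relative recall, the two effectiveness measures used above, can be computed as follows. The hit counts are hypothetical, not the study's results:

```python
def precision(relevant_retrieved, retrieved):
    """Fraction of an engine's retrieved images that are relevant."""
    return relevant_retrieved / retrieved

def relative_recall(relevant_by_engine, engine):
    """One engine's relevant hits relative to the pooled relevant hits
    retrieved by all engines for the same query."""
    return relevant_by_engine[engine] / sum(relevant_by_engine.values())

# Hypothetical relevant hits within the first 50 results per engine.
relevant = {"Google Images": 30, "Yahoo Image Search": 35, "Picsearch": 15}

print(precision(35, 50))                                # 0.7
print(relative_recall(relevant, "Yahoo Image Search"))  # 35 / 80 = 0.4375
```

Relative recall is used because the true set of all relevant web images is unknowable, so the pooled results of the compared engines stand in for it.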


2018 ◽  
Vol 25 (2) ◽  
pp. 374-399 ◽  
Author(s):  
Amber A. Smith-Ditizio ◽  
Alan David Smith ◽  
Walter R. Kendall

Purpose – The purpose of this paper is to provide useful insights into the popularity of search engine technologies within a social media-intensive environment.
Design/methodology/approach – The degree of social interaction with social media platforms that integrate search engine technologies as part of the homepage and related experience is very mixed on the part of their users. Through Barnard’s theory of authority acceptance, social media and its popularity may be examined through the ability of users to create effective messages that can be broadcast to many, yet controlled by the individual. The hypotheses tested the interaction of social media and search engines with gender and technological ease-of-use factors.
Findings – The statistical evidence suggested that the technological and ease-of-use aspects of search engines are not meaningful on the basis of gender alone. Males may be slightly more prone to take advantage of such technologies, but their search and use patterns do not vary much from those of their female counterparts. Social media more fully captured authority in individual search patterns, and a number of interactions among gender status, search engine characteristics and social media were found to be significant and profound. The testing of these hypotheses directly reflects the complexities of the unique needs of users of search engines within a social media environment.
Practical implications – Search engine technologies in a social media context have allowed the development of a modern, user-driven internet experience that is powered by users’ imagination and designed to at least partially satisfy users’ need for self-directed engagement. Organizations are well advised to provide a mindful, less controlled and more interactive presence for potential users, especially through an increasingly mobile presence.
Originality/value – Individuals as well as organizations are rapidly discovering that it is becoming easier to share and distribute their content, especially more creative and innovative content, among all of their users. As businesses continue to focus on the quality of their own content, individuals are increasingly taking advantage of tools to exert more control over their experiences and what they are willing to share, with the result that more user-based partnerships will form. As traditional forms of marketing transition to newer forms of integrated marketing, the future for search engines as marketing tools for social media users appears very promising for adding contextual content within users’ homepages.


2019 ◽  
Vol 202 (4) ◽  
Author(s):  
Irene L. G. Newton ◽  
Danny W. Rice

ABSTRACT The most common intracellular symbiont on the planet—Wolbachia pipientis—is infamous largely for the reproductive manipulations induced in its host. However, more recent evidence suggests that this bacterium may also serve as a nutritional mutualist in certain host backgrounds and for certain metabolites. We performed a large-scale analysis of conserved gene content across all sequenced Wolbachia genomes to infer potential nutrients made by these symbionts. We review and critically evaluate the prior research supporting a beneficial role for Wolbachia and suggest future experiments to test hypotheses of metabolic provisioning.


2014 ◽  
Vol 38 (2) ◽  
pp. 209-231 ◽  
Author(s):  
Darja Groselj

Purpose – This study aims to map the information landscape as it unfolds to users when they search for health topics on general search engines. Website sponsorship, platform type and linking patterns were analysed in order to advance the understanding of the provision of health information online.
Design/methodology/approach – The landscape was sampled with ten very different search queries and crawled with the VOSON software. Drawing on Rogers’ framework of information politics on the web, the landscape is described on two levels. The front end is examined qualitatively by assessing website sponsorship and platform type. On the back end, linking patterns are analysed using hyperlink network analysis.
Findings – A vast majority of the websites have commercial and organisational sponsorship. The analysis of platform type shows that health information is provided mainly on static homepages, informational portals and general news sites. A comparison of ten different health domains revealed substantial differences in their landscapes, related to domain-specific characteristics.
Research limitations/implications – The size and properties of the web crawl were shaped by the use of third-party software, and the generalisability of the results is limited by the selected search queries. Further research exploring how specific characteristics of different health domains shape the provision of information online is suggested.
Practical implications – The demonstrated method can be used by organisations to discern the characteristics of the online information landscape in which they operate and to inform their business strategies.
Originality/value – The study examines health information landscapes on a large scale and makes an original contribution by comparing them across ten different health domains.
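The back-end hyperlink analysis described above amounts to building a directed graph of links between sites and inspecting its structure. A minimal in-degree count, with hypothetical site names standing in for the crawled health websites, looks like this:

```python
from collections import Counter

# Hypothetical directed hyperlinks between health sites (source -> target).
edges = [
    ("blog.example", "health-portal.example"),
    ("forum.example", "health-portal.example"),
    ("news.example", "health-portal.example"),
    ("blog.example", "clinic.example"),
]

# In-degree: how many sites link to a given site, a simple indicator
# of that site's centrality in the hyperlink network.
indegree = Counter(target for _, target in edges)

print(indegree.most_common(1))  # the most-linked-to site in the crawl
```

Comparing such degree distributions across the ten health domains is one way the linking-pattern differences reported in the findings can surface.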

