Comparison Shopping Behaviour in Online Environments

Author(s): Carla Ruiz Mafé, Silvia Sanz Blas

The aim of this chapter is to analyse the antecedents of search engine use as a pre-purchase information tool. First, the literature on the factors influencing search engine use in online purchasing is reviewed. An empirical analysis of a sample of 650 Spanish e-shoppers follows. Logistic regression is used to analyse the influence of demographics, surfing behaviour, and purchase motivations on willingness to use search engines for e-shopping. The data analysis shows that experience as an Internet user and as an Internet shopper are negative key drivers of search engine use. Most of the utilitarian shopping motivations analysed predict comparison shopping behaviour, whereas demographics are not determinant in the use of search engines for online purchases. This research enables companies to identify the factors that potentially affect search engine use in e-shopping decisions and to recognise the importance of search engines in their communication campaigns.

2011, Vol 1 (4), pp. 64-74
Author(s): Anastasios A. Economides, Antonia Kontaratou

Web 2.0 applications have been increasingly recognized as important information sources for consumers, including in the domain of tourism. Travelers are particularly interested in using these applications to compare and choose hotels for their accommodation at various tourism destinations. It is therefore important to investigate the presence of hotels on some of the most dominant tourism search engines and the prices they present. This paper compares these search engines and examines whether the cheapest and the most complete one can be identified. It analyzes the hotel prices presented on the hotels' official websites and on the following eight tourism search engines: Booking.com, Expedia.com, Hotelclub.com, Hotels.com, Orbitz.com, Priceline.com, Travelocity.com, and Venere.com. Descriptive statistical analysis showed that only 23% of the hotels examined are found on all the search engines. Furthermore, the price analysis showed that there are differences among the search engines: although some statistically give lower prices, no single search engine always gives the lowest price for every hotel.
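The cheapest-engine comparison described above can be mimicked with a toy price table. All hotel names and prices below are invented; `None` marks a hotel missing from an engine, mirroring the finding that only a fraction of hotels appear on every engine.

```python
# Toy price table: hotel -> {engine: nightly price}; None marks a hotel
# that is absent from that engine. All figures are invented.
prices = {
    "Hotel Alpha": {"Booking.com": 80, "Expedia.com": 78, "Hotels.com": 82},
    "Hotel Beta":  {"Booking.com": 120, "Expedia.com": None, "Hotels.com": 115},
}

# For each hotel, keep only the engines that actually list it, then
# pick the cheapest quote; note that no engine wins for every hotel.
for hotel, by_engine in prices.items():
    quoted = {e: p for e, p in by_engine.items() if p is not None}
    cheapest = min(quoted, key=quoted.get)
    print(f"{hotel}: cheapest on {cheapest} at {quoted[cheapest]}")
```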


2018, Vol 46 (1), pp. 1-10
Author(s): Yi Li, Zhihui Yuan, Yujie Li, Jing Liu

We analyzed the effect of individual factors, contextual factors, and perception of search engine advertising on users' search engine usage behavior. The sample comprised 404 Chinese respondents who used search engines in the context of their paid employment. Results showed that (a) perceived search skills and perceived search engine reliance significantly and positively impacted users' general search engine usage, (b) perceived advertising clutter reduced the beneficial effect of perceived search skills on users' general search engine usage, (c) users with higher perceived search engine reliance preferred search engines to other online search methods, and (d) prior negative experience weakened the positive link between perceived search engine reliance and users' specific search engine usage. Our findings suggest that search engine designers and operators should focus on the individual and contextual factors influencing search engine usage behavior, and should consider users' perception of advertising on search engine programs.


2019, Vol 71 (3), pp. 310-324
Author(s): Dirk Lewandowski, Sebastian Sünkler

Purpose: The purpose of this paper is to describe a new method to improve the analysis of search engine results by considering the provider level as well as the domain level. This approach is tested by conducting a study using queries on the topic of insurance comparisons.
Design/methodology/approach: The authors conducted an empirical study that analyses the results of search queries aimed at comparing insurance companies. The authors used a self-developed software system that automatically queries commercial search engines and automatically extracts the content of the returned result pages for further data analysis. The data analysis was carried out using the KNIME Analytics Platform.
Findings: Google's top search results are served by only a few providers that frequently appear in these results. The authors show that some providers operate several domains on the same topic and that these domains appear for the same queries in the result lists.
Research limitations/implications: The authors demonstrate the feasibility of this approach and draw conclusions for further investigations from the empirical study. However, the study is a limited use case based on a limited number of search queries.
Originality/value: The proposed method allows large-scale analysis of the composition of the top results from commercial search engines. It allows using valid empirical data to determine what users actually see on the search engine result pages.
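A provider-level aggregation of result URLs, of the general kind the paper describes, might look like the sketch below. The URLs and the domain-to-provider mapping are invented, and a real study would use a public-suffix list rather than the naive registered-domain extraction shown here.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical top result URLs for an insurance-comparison query.
results = [
    "https://www.check24.de/versicherungen/",
    "https://www.verivox.de/versicherungsvergleich/",
    "https://www.check24.net/kfz-versicherung/",
    "https://www.tarifcheck.de/",
    "https://www.check24.de/haftpflicht/",
]

# Several domains can belong to one provider, as the paper observes.
# This mapping is invented for illustration.
provider_of = {
    "check24.de": "Check24", "check24.net": "Check24",
    "verivox.de": "Verivox", "tarifcheck.de": "Tarifcheck",
}

def registered_domain(url):
    # Naive last-two-labels heuristic; real audits use a public-suffix list.
    host = urlparse(url).netloc
    return ".".join(host.split(".")[-2:])

domains = [registered_domain(u) for u in results]
providers = Counter(provider_of.get(d, d) for d in domains)
# One provider can dominate the results even across distinct domains.
print(providers.most_common())
```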


2021, pp. 089443932110068
Author(s): Aleksandra Urman, Mykola Makhortykh, Roberto Ulloa

We examine how six search engines filter and rank information in relation to queries on the U.S. 2020 presidential primary elections under default, that is, nonpersonalized, conditions. For this, we utilize an algorithmic auditing methodology that uses virtual agents to conduct large-scale analysis of algorithmic information curation in a controlled environment. Specifically, we look at the text search results for the queries "us elections," "donald trump," "joe biden," and "bernie sanders" on Google, Baidu, Bing, DuckDuckGo, Yahoo, and Yandex during the 2020 primaries. Our findings indicate substantial differences in the search results between search engines and multiple discrepancies within the results generated for different agents using the same search engine. This highlights that whether users see certain information is decided by chance, owing to the inherent randomization of search results. We also find that some search engines prioritize different categories of information sources with respect to specific candidates. These observations demonstrate that algorithmic curation of political information can create information inequalities between search engine users even under nonpersonalized conditions. Such inequalities are particularly troubling considering that search results are highly trusted by the public and can shift the opinions of undecided voters, as demonstrated by previous research.
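One simple way to quantify the between-agent discrepancies described above is set overlap between the result lists two agents receive for the same query. The sketch below uses invented placeholder URLs and a plain Jaccard measure; an actual audit could also use rank-aware measures.

```python
# Toy result lists returned to two virtual agents issuing the same
# query; the URLs are invented placeholders.
agent_a = ["site1.com/a", "site2.com/b", "site3.com/c", "site4.com/d"]
agent_b = ["site1.com/a", "site3.com/c", "site5.com/e", "site6.com/f"]

def jaccard(xs, ys):
    """Share of results common to both lists (order ignored)."""
    x, y = set(xs), set(ys)
    return len(x & y) / len(x | y)

# 2 results shared out of 6 distinct ones across both lists.
overlap = jaccard(agent_a, agent_b)
print(f"overlap: {overlap:.2f}")
```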


2020, Vol 19 (10), pp. 1602-1618
Author(s): Thibault Robin, Julien Mariethoz, Frédérique Lisacek

A key point in achieving accurate intact glycopeptide identification is the definition of the glycan composition file that is used to match experimental with theoretical masses by a glycoproteomics search engine. At present, these files are mainly built by searching the literature and/or querying data sources focused on post-translational modifications. Most glycoproteomics search engines include a default composition file that is readily used when processing MS data. We introduce here a glycan composition visualization and comparison tool associated with the GlyConnect database, called GlyConnect Compozitor. It offers a web interface through which the database can be queried to bring out contextual information relative to a set of glycan compositions. The tool takes advantage of compositions being related to one another through shared monosaccharide counts and outputs interactive graphs summarizing information searched in the database. These results provide a guide for selecting or deselecting compositions in a file in order to reflect the context of a study as closely as possible. They also confirm the consistency of a set of compositions based on the content of the GlyConnect database. As part of the tool collection of the Glycomics@ExPASy initiative, Compozitor is hosted at https://glyconnect.expasy.org/compozitor/ where it can be run as a web application. It is also directly accessible from the GlyConnect database.
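The idea of relating compositions through their monosaccharide counts can be illustrated with a minimal sketch: link any two compositions that differ by exactly one monosaccharide. The compositions below are hypothetical, and Compozitor's actual data model and graph construction are not reproduced here.

```python
from itertools import combinations

# Hypothetical glycan compositions as monosaccharide counts.
compositions = {
    "A": {"Hex": 5, "HexNAc": 2},
    "B": {"Hex": 5, "HexNAc": 2, "NeuAc": 1},
    "C": {"Hex": 6, "HexNAc": 2},
    "D": {"Hex": 9, "HexNAc": 2},
}

def distance(a, b):
    """Total monosaccharide count difference between two compositions."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)

# Link compositions that differ by a single monosaccharide, the kind of
# relation a comparative composition graph could surface.
edges = [(x, y) for x, y in combinations(compositions, 2)
         if distance(compositions[x], compositions[y]) == 1]
print(edges)
```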


2001, Vol 1 (3), pp. 28-31
Author(s): Valerie Stevenson

Looking back to 1999, there were a number of search engines which performed equally well. I recommended defining the search strategy very carefully, using Boolean logic and field search techniques, and always running the search in more than one search engine. Numerous articles and Web columns comparing the performance of different search engines came to different conclusions on the ‘best’ search engines. Over the last year, however, all the speakers at conferences and seminars I have attended have recommended Google as their preferred tool for locating all kinds of information on the Web. I confess that I have now abandoned most of my carefully worked out search strategies and comparison tests, and use Google for most of my own Web searches.


2010, Vol 44-47, pp. 4041-4049
Author(s): Hong Zhao, Chen Sheng Bai, Song Zhu

Search engines can bring substantial benefit to a website, so each page's search engine ranking is very important. Search engine optimization (SEO) aims to improve that ranking, and one prerequisite is setting appropriate keywords for each page. This paper focuses on the content of a given web page and extracts its keywords by calculating word frequencies. The algorithm is implemented in the C# language. Setting webpage keywords well is of great importance for surfacing information and products.
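A word-frequency keyword extractor in the spirit described above can be sketched in a few lines. The paper's own implementation is in C# and is not reproduced here; the stopword list and sample text below are invented for illustration.

```python
import re
from collections import Counter

# Minimal stopword list; a real extractor would use a fuller one.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for", "on", "so"}

def extract_keywords(text, top_n=3):
    """Return the top_n most frequent non-stopword words in text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

page = ("Search engines rank pages. Page keywords help search engines "
        "understand the page, so keywords matter for ranking.")
print(extract_keywords(page))
```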


2019, Vol 71 (1), pp. 54-71
Author(s): Artur Strzelecki

Purpose: The purpose of this paper is to clarify how many removal requests are made, how often, and who makes these requests, as well as which websites are reported to search engines so they can be removed from the search results.
Design/methodology/approach: The paper undertakes a deep analysis of more than 3.2bn pages removed from Google's search results at the request of reporting organizations from 2011 to 2018, and over 460m pages removed from Bing's search results at the request of reporting organizations from 2015 to 2017. The paper focuses on pages that belong to the .pl country-code top-level domain (ccTLD).
Findings: Although the number of requests to remove data from search results has been growing year on year, fewer URLs have been reported in recent years. Some of the requests are, however, unjustified and are rejected by teams representing the search engines. In terms of reporting copyright violations, one company in particular stands out (AudioLock.Net), accounting for 28.1 percent of all reports sent to Google (the top ten companies combined were responsible for 61.3 percent of the total number of reports).
Research limitations/implications: As not every request can be published, the study is based only on what is publicly available. Also, the data assigned to Poland is based only on the ccTLD domain name (.pl); other domain extensions used by Polish internet users were not considered.
Originality/value: This is the first global analysis of data from transparency reports published by search engine companies, as prior research has been based on specific notices.


Author(s): Novario Jaya Perdana

The accuracy of a search engine's results depends on the keywords used: a lack of information in the keywords can reduce the accuracy of the results, which makes searching for information on the Internet hard work. In this research, software was built to create document keyword sequences. The software uses Google Latent Semantic Distance, which can extract relevant information from a document. The information is expressed in the form of specific word sequences that can be used as keyword recommendations in search engines. The results show that the implementation of the method for creating document keyword recommendations achieved high accuracy and finds the most relevant information in the top search results.

