Assessing the Level of Popularity of European Stag Tourism Destinations

2016 ◽  
Vol 35 (3) ◽  
pp. 15-29 ◽  
Author(s):  
Grzegorz Iwanicki ◽  
Anna Dłużewska ◽  
Melanie Kay Smith

The primary objective of this article is to determine the degree of popularity of stag tourism destinations in Europe. The research was based on the search engine method, involving an analysis of the highest-positioned offers of travel agencies in the search engines most commonly used in Europe (Google, Bing, Yahoo). The analysis divided the studied cities into four categories of popularity. Such an analysis is strongly justified because academic publications have so far not determined the degree of popularity of stag destinations on a continental scale.
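
A minimal sketch of the "search engine method" described above: given counts of highly ranked travel-agency offers per destination (collected separately from Google, Bing and Yahoo result pages), cities are bucketed into four popularity categories. The thresholds, quartile-based bucketing and input data here are illustrative assumptions, not the authors' actual procedure or values.

```python
from typing import Dict


def categorise(offer_counts: Dict[str, int]) -> Dict[str, str]:
    """Assign each city to one of four popularity categories by rank quartile."""
    ranked = sorted(offer_counts, key=offer_counts.get, reverse=True)
    quartile = max(1, len(ranked) // 4)
    labels = ["very popular", "popular", "moderately popular", "marginal"]
    return {city: labels[min(i // quartile, 3)] for i, city in enumerate(ranked)}


if __name__ == "__main__":
    # Hypothetical counts of top-positioned stag-party offers per destination.
    sample = {"Krakow": 42, "Prague": 38, "Budapest": 30, "Riga": 21,
              "Tallinn": 12, "Wroclaw": 7, "Vilnius": 5, "Gdansk": 3}
    for city, label in categorise(sample).items():
        print(f"{city}: {label}")
```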

2011 ◽  
Vol 1 (4) ◽  
pp. 64-74
Author(s):  
Anastasios A. Economides ◽  
Antonia Kontaratou

Web 2.0 applications have been increasingly recognized as important information sources for consumers, including in the domain of tourism. At the center of travelers’ interest is the use of these applications to compare and choose hotels for their accommodation at various tourism destinations. It is important to investigate issues related to the presence of hotels on some of the most dominant tourism search engines and to the prices they present. This paper compares the search engines and determines whether the cheapest and the most complete one can be identified. It focuses on analyzing the hotel prices presented on the hotels’ official websites and on the following eight tourism search engines: Booking.com, Expedia.com, Hotelclub.com, Hotels.com, Orbitz.com, Priceline.com, Travelocity.com, and Venere.com. The data analysis, using descriptive statistics, showed that only 23% of the hotels examined are found on all the search engines. Furthermore, the price analysis showed that there are differences among the search engines. Although some search engines statistically give lower prices, no single search engine always gives the lowest price for every hotel.
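
A small sketch of the kind of descriptive analysis reported above: for a set of hotels, check which appear on all eight engines and find the cheapest listing for each. The data structure and prices are invented placeholders, not the study's dataset.

```python
ENGINES = ["Booking.com", "Expedia.com", "Hotelclub.com", "Hotels.com",
           "Orbitz.com", "Priceline.com", "Travelocity.com", "Venere.com"]

# hotel -> {engine: nightly price in EUR}; a missing engine means the hotel is not listed there.
prices = {
    "Hotel Alpha": {"Booking.com": 95.0, "Expedia.com": 92.0, "Hotels.com": 97.5},
    "Hotel Beta": {engine: 120.0 + i for i, engine in enumerate(ENGINES)},  # listed everywhere
}

# Share of hotels present on every engine (the 23% figure in the abstract is this kind of ratio).
covered_everywhere = [h for h, p in prices.items() if set(p) == set(ENGINES)]
coverage = 100 * len(covered_everywhere) / len(prices)
print(f"Hotels found on all engines: {coverage:.0f}%")

# Cheapest engine per hotel; note it need not be the same engine every time.
for hotel, p in prices.items():
    cheapest = min(p, key=p.get)
    print(f"{hotel}: cheapest on {cheapest} at {p[cheapest]:.2f} EUR")
```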



Author(s):  
Asim Shahzad ◽  
Deden Witarsyah Jacob ◽  
Nazri Mohd Nawi ◽  
Hairulnizam Mahdin ◽  
Marheni Eka Saputri

Search engines are used to find information on the internet. The primary objective of any website owner is to have their website listed at the top of the Search Engine Results Pages (SERPs). Search engine optimization (SEO) is the art of increasing the visibility of a website in search engine result pages, and it requires dedicated tools and techniques. This paper is a comprehensive survey of how a search engine (SE) works, the types and parts of search engines, and the different techniques and tools used for SEO. In this paper, we discuss the current tools and techniques in practice for search engine optimization.
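
As a hedged illustration of the kind of on-page checks the surveyed SEO tools perform (page title, meta description, heading structure), here is a minimal audit script. It assumes the requests and beautifulsoup4 packages are installed; the thresholds in the comments are common rules of thumb, not values taken from the paper.

```python
import requests
from bs4 import BeautifulSoup


def audit(url: str) -> None:
    """Print a few basic on-page SEO signals for the given URL."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    description = soup.find("meta", attrs={"name": "description"})
    h1_count = len(soup.find_all("h1"))

    print(f"Title ({len(title)} chars): {title!r}")  # ~50-60 characters is typical advice
    print("Meta description present:", bool(description and description.get("content")))
    print("Number of <h1> tags:", h1_count)          # usually exactly one is recommended


if __name__ == "__main__":
    audit("https://example.com")
```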


2021 ◽  
pp. 089443932110068
Author(s):  
Aleksandra Urman ◽  
Mykola Makhortykh ◽  
Roberto Ulloa

We examine how six search engines filter and rank information in relation to queries on the U.S. 2020 presidential primary elections under default, that is, nonpersonalized, conditions. For that, we utilize an algorithmic auditing methodology that uses virtual agents to conduct large-scale analysis of algorithmic information curation in a controlled environment. Specifically, we look at the text search results for the queries “us elections,” “donald trump,” “joe biden,” and “bernie sanders” on Google, Baidu, Bing, DuckDuckGo, Yahoo, and Yandex during the 2020 primaries. Our findings indicate substantial differences in the search results between search engines and multiple discrepancies within the results generated for different agents using the same search engine. This highlights that whether users see certain information is decided by chance due to the inherent randomization of search results. We also find that some search engines prioritize different categories of information sources with respect to specific candidates. These observations demonstrate that algorithmic curation of political information can create information inequalities between search engine users even under nonpersonalized conditions. Such inequalities are particularly troubling considering that search results are highly trusted by the public and, as previous research has demonstrated, can shift the opinions of undecided voters.
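
One simple way to quantify the discrepancies described above is the Jaccard similarity between the sets of domains returned to two virtual agents (or by two search engines) for the same query. This is an illustrative metric and hypothetical data, not necessarily the measure or results used in the audit.

```python
def jaccard(results_a: list[str], results_b: list[str]) -> float:
    """Jaccard similarity of two result lists: 1.0 means identical sets, 0.0 means disjoint."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if a | b else 1.0


# Hypothetical top result domains for the query "us elections" seen by two agents.
agent_1 = ["nytimes.com", "cnn.com", "wikipedia.org", "foxnews.com"]
agent_2 = ["cnn.com", "wikipedia.org", "apnews.com", "nytimes.com"]

print(f"Overlap between agents: {jaccard(agent_1, agent_2):.2f}")
```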


2020 ◽  
Vol 19 (10) ◽  
pp. 1602-1618 ◽  
Author(s):  
Thibault Robin ◽  
Julien Mariethoz ◽  
Frédérique Lisacek

A key point in achieving accurate intact glycopeptide identification is the definition of the glycan composition file that is used to match experimental with theoretical masses by a glycoproteomics search engine. At present, these files are mainly built from searching the literature and/or querying data sources focused on posttranslational modifications. Most glycoproteomics search engines include a default composition file that is readily used when processing MS data. We introduce here a glycan composition visualizing and comparative tool associated with the GlyConnect database and called GlyConnect Compozitor. It offers a web interface through which the database can be queried to bring out contextual information relative to a set of glycan compositions. The tool takes advantage of compositions being related to one another through shared monosaccharide counts and outputs interactive graphs summarizing information searched in the database. These results provide a guide for selecting or deselecting compositions in a file in order to reflect the context of a study as closely as possible. They also confirm the consistency of a set of compositions based on the content of the GlyConnect database. As part of the tool collection of the Glycomics@ExPASy initiative, Compozitor is hosted at https://glyconnect.expasy.org/compozitor/ where it can be run as a web application. It is also directly accessible from the GlyConnect database.
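
A sketch of the idea behind Compozitor's graphs: glycan compositions are nodes, and two compositions are linked when their monosaccharide counts differ by a single residue. The compositions and the linking rule below are simplified assumptions for illustration; they do not reproduce GlyConnect's actual data model or logic.

```python
from itertools import combinations

# Toy compositions expressed as monosaccharide counts.
compositions = {
    "H5N4":   {"Hex": 5, "HexNAc": 4},
    "H5N4S1": {"Hex": 5, "HexNAc": 4, "NeuAc": 1},
    "H5N4S2": {"Hex": 5, "HexNAc": 4, "NeuAc": 2},
    "H5N4F1": {"Hex": 5, "HexNAc": 4, "dHex": 1},
}


def distance(a: dict, b: dict) -> int:
    """Total absolute difference in monosaccharide counts between two compositions."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)


# Link compositions that differ by exactly one monosaccharide residue.
edges = [(x, y) for x, y in combinations(compositions, 2)
         if distance(compositions[x], compositions[y]) == 1]

for x, y in edges:
    print(f"{x} -- {y}")
```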


2001 ◽  
Vol 1 (3) ◽  
pp. 28-31 ◽  
Author(s):  
Valerie Stevenson

Looking back to 1999, there were a number of search engines which performed equally well. I recommended defining the search strategy very carefully, using Boolean logic and field search techniques, and always running the search in more than one search engine. Numerous articles and Web columns comparing the performance of different search engines came to different conclusions on the ‘best’ search engines. Over the last year, however, all the speakers at conferences and seminars I have attended have recommended Google as their preferred tool for locating all kinds of information on the Web. I confess that I have now abandoned most of my carefully worked out search strategies and comparison tests, and use Google for most of my own Web searches.


2010 ◽  
Vol 44-47 ◽  
pp. 4041-4049 ◽  
Author(s):  
Hong Zhao ◽  
Chen Sheng Bai ◽  
Song Zhu

Search engines can bring a great deal of benefit to a website, so the search engine ranking of each of its pages is very important. Search engine optimization (SEO) improves this ranking, and to apply SEO a web page needs to have its keywords set appropriately. This paper focuses on the textual content of a page and extracts the keywords of each page by calculating word frequency. The algorithm is implemented in the C# language. The keyword settings of a web page are of great importance for presenting its information and products.
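
The paper implements its word-frequency extraction in C#; below is a sketch of the same idea in Python: count word occurrences, drop common stop words, and propose the most frequent terms as page keywords. The stop-word list and thresholds are illustrative assumptions.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on", "from"}


def extract_keywords(text: str, top_n: int = 5) -> list[str]:
    """Return the top_n most frequent non-stop-words as keyword candidates."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]


page_text = """Search engine optimization improves the ranking of a web page.
               Choosing page keywords from frequent words helps search engines
               understand the page content."""
print(extract_keywords(page_text))  # e.g. ['page', 'search', ...]
```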


2019 ◽  
Vol 71 (1) ◽  
pp. 54-71 ◽  
Author(s):  
Artur Strzelecki

Purpose The purpose of this paper is to clarify how many removal requests are made, how often, and who makes these requests, as well as which websites are reported to search engines so they can be removed from the search results. Design/methodology/approach Undertakes a deep analysis of more than 3.2bn pages removed from Google’s search results at the request of reporting organizations from 2011 to 2018 and over 460m pages removed from Bing’s search results at the request of reporting organizations from 2015 to 2017. The paper focuses on pages that belong to the .pl country-code top-level domain (ccTLD). Findings Although the number of requests to remove data from search results has been growing year on year, fewer URLs have been reported in recent years. Some of the requests are, however, unjustified and are rejected by the teams representing the search engines. In terms of reporting copyright violations, one company in particular stands out (AudioLock.Net), accounting for 28.1 percent of all reports sent to Google (the top ten companies combined were responsible for 61.3 percent of the total number of reports). Research limitations/implications As not every request can be published, the study is based only on what is publicly available. Also, the data assigned to Poland is based only on the ccTLD domain name (.pl); other domain extensions used by Polish internet users were not considered. Originality/value This is the first global analysis of data from transparency reports published by search engine companies, as prior research has been based on specific notices.
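
A sketch of the aggregation behind the findings above: given per-organization counts of reported URLs, compute each reporter's share and the combined share of the top ten. The organization names (other than AudioLock.Net, cited in the abstract) and the counts below are placeholders, not the transparency-report data.

```python
from collections import Counter

# Hypothetical per-organization counts of reported URLs.
reports = Counter({
    "AudioLock.Net": 2_810_000,
    "Org B": 1_200_000,
    "Org C": 950_000,
    # ... remaining reporting organizations would be loaded from the transparency reports
})

total = sum(reports.values())
top_ten = reports.most_common(10)

for org, count in top_ten:
    print(f"{org}: {100 * count / total:.1f}% of reported URLs")

print(f"Top ten combined: {100 * sum(c for _, c in top_ten) / total:.1f}%")
```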


Author(s):  
Novario Jaya Perdana

The accuracy of a search result using a search engine depends on the keywords that are used. A lack of information in the keywords can reduce the accuracy of the search result, which makes searching for information on the internet hard work. In this research, software has been built to create document keyword sequences. The software uses Google Latent Semantic Distance, which can extract relevant information from a document. The information is expressed in the form of specific word sequences that can be used as keyword recommendations in search engines. The results show that the implementation of the method for creating document keyword recommendations achieved high accuracy and could find the most relevant information in the top search results.
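
The paper builds keyword sequences with Google Latent Semantic Distance; as a hedged illustration of the general idea of measuring how related two terms are via search-result counts, here is the classic Normalized Google Distance of Cilibrasi and Vitanyi, which is a related but not necessarily identical measure. The hit counts are hypothetical; a real implementation would obtain them from a search API.

```python
import math


def ngd(hits_x: int, hits_y: int, hits_xy: int, total_pages: float = 1e10) -> float:
    """Normalized Google Distance: 0 means the terms always co-occur, larger means less related."""
    log_x, log_y, log_xy = math.log(hits_x), math.log(hits_y), math.log(hits_xy)
    return (max(log_x, log_y) - log_xy) / (math.log(total_pages) - min(log_x, log_y))


# Hypothetical result counts for "search engine", "optimization", and both terms together.
print(round(ngd(hits_x=5_000_000, hits_y=9_000_000, hits_xy=1_200_000), 3))
```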


2016 ◽  
Author(s):  
Paolo Corti ◽  
Benjamin G Lewis ◽  
Tom Kralidis ◽  
Jude Mwenda

A Spatial Data Infrastructure (SDI) is a framework of geospatial data, metadata, users and tools intended to provide the most efficient and flexible way to use spatial information. One of the key software components of an SDI is the catalogue service, needed to discover, query and manage the metadata. Catalogue services in an SDI are typically based on the Open Geospatial Consortium (OGC) Catalogue Service for the Web (CSW) standard, which defines common interfaces for accessing the metadata information. A search engine is a software system able to perform very fast and reliable searches, with features such as full-text search, natural language processing, weighted results, fuzzy-tolerant matching, faceting, hit highlighting and many others. The Center for Geographic Analysis (CGA) at Harvard University is trying to integrate the benefits of both worlds (OGC catalogues and search engines) within its public-domain SDI, WorldMap. Harvard Hypermap (HHypermap) is a component that will be part of WorldMap, built entirely on an open-source stack; it implements an OGC catalogue, based on pycsw, to provide access to metadata in a standard way, and a search engine, based on Solr/Lucene, to provide the advanced search features typically found in search engines.
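
A minimal sketch of the HHypermap idea under stated assumptions: harvest metadata records from an OGC CSW endpoint (such as one exposed by pycsw) and index them into Solr for full-text and faceted search. The endpoint URL, Solr core name and field names are hypothetical, and the owslib and requests packages are assumed to be installed; this is not the project's actual harvesting code.

```python
import requests
from owslib.csw import CatalogueServiceWeb

CSW_URL = "https://example.org/csw"  # hypothetical pycsw endpoint
SOLR_UPDATE = "http://localhost:8983/solr/hypermap/update?commit=true"  # hypothetical Solr core

# Fetch a batch of full metadata records from the catalogue.
csw = CatalogueServiceWeb(CSW_URL)
csw.getrecords2(esn="full", maxrecords=25)

# Map each CSW record to a flat document for Solr indexing.
docs = [
    {"id": identifier, "title": record.title or "", "abstract": record.abstract or ""}
    for identifier, record in csw.records.items()
]

# Push the documents to Solr and commit.
response = requests.post(SOLR_UPDATE, json=docs, timeout=30)
response.raise_for_status()
print(f"Indexed {len(docs)} metadata records into Solr")
```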

