Google Web and Image Search Visibility Data for Online Store

Data ◽  
2019 ◽  
Vol 4 (3) ◽  
pp. 125 ◽  
Author(s):  
Artur Strzelecki

This data descriptor describes Google search engine visibility data. The visibility of a domain name in a search engine results from search engine optimization and can be evaluated using four data metrics and five data dimensions. The data metrics are: clicks volume (1), impressions volume (2), click-through rate (3), and ranking position (4). The data dimensions are: queries entered into the search engine that trigger results containing the researched domain name (1), page URLs from the researched domain that appear on the search engine results page (2), country of origin of search engine visitors (3), type of device used for the search (4), and date of the search (5). The search engine visibility data were obtained from Google Search Console for an international online store visible in 240 countries and territories, over a period of 15 months. The data contain 123 K clicks and 4.86 M impressions for the web search and 22 K clicks and 9.07 M impressions for the image search. The proposed method for obtaining the data can be applied in other areas as well, not only in the e-commerce industry.
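The metrics and dimensions above map directly onto the Google Search Console Search Analytics API. Below is a minimal sketch of such a retrieval, assuming OAuth credentials are already in hand; the property URL and date range are illustrative placeholders, not the dataset's actual store or window:

```python
from googleapiclient.discovery import build

def fetch_visibility(creds, site_url="https://www.example-store.com/",
                     search_type="web"):
    # build the Search Console (webmasters) API client
    service = build("webmasters", "v3", credentials=creds)
    request = {
        "startDate": "2018-01-01",   # illustrative 15-month window
        "endDate": "2019-03-31",
        # the five data dimensions described in the abstract
        "dimensions": ["query", "page", "country", "device", "date"],
        "searchType": search_type,   # "web" or "image"; newer API versions use "type"
        "rowLimit": 25000,
    }
    response = service.searchanalytics().query(
        siteUrl=site_url, body=request).execute()
    # each returned row carries the four metrics:
    # clicks, impressions, ctr, position
    return response.get("rows", [])
```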

2019 ◽  
Vol 16 (8) ◽  
pp. 3216-3218
Author(s):  
A. Viji Amutha Mary ◽  
Konduru Sandeep Kumar ◽  
Kesa Pavan Sri Sai

An automatic approach to extracting geographic information, especially information represented as Points of Interest (POIs), is critical for identifying locations and provides the basis for various location-based services. Currently, geospatial POI data are available through open map services (e.g., Google Maps, OpenStreetMap). However, the data behind these services are either collected through expensive commercial purchasing and company investment or gathered through volunteered contributions of high uncertainty. With geospatial data growing rapidly on the Web, we propose an automatic approach that builds POI resources from web search engine results, mitigating the drawbacks of the traditional means. In the approach, we first submit the POI types extracted from Google Maps and the street names obtained from OpenStreetMap to the Google search engine, and retrieve potential POI addresses by parsing the search results. Second, the Google search engine is queried again with the retrieved POI addresses to extract potential place names. Finally, the search engine is queried a third time with both the place names and the corresponding addresses to verify that the place names are correct. The output of the work is a place-name dataset. We select 20 blocks each in Chicago and Houston in the U.S. to evaluate the approach. In the experiments, we use Google Maps, which is of high data quality, as the reference and compare the results with those from OpenStreetMap and Wikimapia. The final results indicate that the proposed approach can produce place-name datasets on a par with Google Maps and outperform OpenStreetMap and Wikimapia.
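A condensed sketch of the three-stage loop follows. The google_search(query) helper (returning snippet strings) and the naive address/name regexes are hypothetical placeholders; the paper's actual SERP retrieval and parsing are more elaborate:

```python
import re

# crude US-style street address pattern, for illustration only
ADDRESS_RE = re.compile(
    r"\d{1,5}\s+[NSEW]?\.?\s*\w+(?:\s\w+)*\s(?:St|Ave|Blvd|Dr|Rd)\b")

def extract_pois(poi_type, street_name, google_search):
    # Stage 1: POI type + street name -> candidate addresses
    snippets = google_search(f"{poi_type} {street_name}")
    addresses = {m.group(0) for s in snippets for m in ADDRESS_RE.finditer(s)}

    pois = []
    for addr in addresses:
        # Stage 2: address -> candidate place names
        # (naive: take a leading capitalized phrase from each snippet)
        names = set()
        for s in google_search(addr):
            m = re.match(r"([A-Z][\w'&-]+(?:\s[A-Z][\w'&-]+)*)", s)
            if m:
                names.add(m.group(1))
        # Stage 3: name + address queried together -> keep confirmed names
        for name in names:
            if any(addr in s for s in google_search(f'"{name}" "{addr}"')):
                pois.append((name, addr))
    return pois
```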


2013 ◽  
pp. 1325-1345
Author(s):  
Andrew Boulton ◽  
Lomme Devriendt ◽  
Stanley D. Brunn ◽  
Ben Derudder ◽  
Frank Witlox

Geographers and social scientists have long been interested in ranking and classifying the cities of the world. The cutting edge of this research is characterized by a recognition of the crucial importance of information, and specifically of ICTs, to cities' positions in the current Knowledge Economy. This chapter builds on recent "cyberspace" analyses of the global urban system by arguing for, and demonstrating empirically, the value of Web search engine data as a means of understanding cities as situated within, and constituted by, flows of digital information. To this end, the authors show how the Google search engine can be used to specify a dynamic, informational classification of North American cities based on both the production and the consumption of Web information about two prominent issues of global scope: the global financial crisis and global climate change.
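As a toy illustration of this kind of informational classification: for each city, compare how much Web content about an issue the city "produces" against how much it "consumes". The counts below are hypothetical inputs; the chapter derives its measures from Google search engine data:

```python
def classify(cities):
    """cities: {name: (produced_pages, consumed_searches)} -> label per city."""
    out = {}
    for name, (produced, consumed) in cities.items():
        # production/consumption ratio; a city producing more than it
        # consumes is labeled an information "producer"
        ratio = produced / consumed if consumed else float("inf")
        out[name] = "producer" if ratio > 1 else "consumer"
    return out

sample = {"New York": (120_000, 80_000), "Lexington": (3_000, 9_000)}
print(classify(sample))   # {'New York': 'producer', 'Lexington': 'consumer'}
```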


Author(s):  
Aboubakr Aqle ◽  
Dena Al-Thani ◽  
Ali Jaoua

There are limited studies addressing the challenges that visually impaired (VI) users face when viewing search results on a search engine interface with a screen reader. This study investigates the effect of providing an overview of search results to VI users. We present a novel interactive search engine interface called InteractSE to support VI users during the results-exploration stage, in order to improve their interactive experience and web search efficiency. An overview of the search results is generated using an unsupervised machine learning approach that presents the discovered concepts via formal concept analysis, which is domain-independent. These concepts are arranged in a multi-level tree in hierarchical order, covering all retrieved documents that share maximal features. The InteractSE interface was evaluated by 16 legally blind users and compared with the Google search engine interface on complex search tasks. The evaluation results were obtained from both quantitative measures (e.g., task completion time) and qualitative ones (participants' feedback). These results are promising and indicate that InteractSE enhances search efficiency and consequently advances the user experience. Our observations and analysis of user interactions and feedback yielded design suggestions to support VI users when exploring and interacting with search results.
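For illustration, here is a minimal, domain-independent sketch of the formal-concept step on a toy corpus; the feature sets are hypothetical, whereas InteractSE extracts them from retrieved documents. The sketch enumerates object concepts only; a full lattice construction would also close over attribute sets:

```python
def intent(docs, context):
    """Features shared by every document in `docs`."""
    sets = [context[d] for d in docs]
    return set.intersection(*sets) if sets else set()

def extent(feats, context):
    """Documents that carry every feature in `feats`."""
    return {d for d, fs in context.items() if feats <= fs}

def concepts(context):
    seen, out = set(), []
    for d in context:
        b = frozenset(intent({d}, context))   # close upward from one doc
        a = frozenset(extent(b, context))     # ...then back down to its extent
        if (a, b) not in seen:
            seen.add((a, b))
            out.append((a, b))
    # broader extents first: these become the higher levels of the tree
    return sorted(out, key=lambda c: -len(c[0]))

corpus = {"doc1": {"python", "tutorial"},
          "doc2": {"python", "library"},
          "doc3": {"python", "tutorial", "video"}}
for a, b in concepts(corpus):
    print(sorted(a), "->", sorted(b))
```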


Author(s):  
Ji-Rong Wen

A Web query log is a file that keeps track of the activities of the users of a search engine. Compared to the traditional information retrieval setting, in which documents are the only available information source, query logs are an additional information source in the Web search setting. Based on query logs, a set of Web mining techniques, such as log-based query clustering, log-based query expansion, collaborative filtering, and personalized search, can be employed to improve the performance of Web search.
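As a concrete illustration of one technique named above, a small sketch of log-based query clustering that groups queries by the overlap of their clicked URLs; the (query, clicked_url) log format and the Jaccard threshold are simplifying assumptions:

```python
from collections import defaultdict

def cluster_queries(log, threshold=0.5):
    # aggregate the set of clicked URLs per query
    clicks = defaultdict(set)
    for query, url in log:
        clicks[query].add(url)

    clusters = []
    for q, urls in clicks.items():
        for c in clusters:
            rep = clicks[c[0]]   # compare against the cluster's first query
            jaccard = len(urls & rep) / len(urls | rep)
            if jaccard >= threshold:   # similar click targets => same intent
                c.append(q)
                break
        else:
            clusters.append([q])
    return clusters

log = [("cheap flights", "kayak.com"), ("cheap flights", "expedia.com"),
       ("low cost airfare", "kayak.com"), ("python docs", "docs.python.org")]
print(cluster_queries(log))
# [['cheap flights', 'low cost airfare'], ['python docs']]
```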


Author(s):  
Hengki Tamando Sihotang

Online information needs have evolved toward real-time demands. These needs include the latest information, government services, and commercial products. The research question is how to describe and optimize keyword research with the allintitle technique on the Google search engine. The development method used in this research is the prototype method, because prototypes can be evaluated directly with users. System testing was carried out for three months by placing keywords on several websites indexed by Google. The conclusion is that with the allintitle technique, relevant web pages are easier to find, and that this web-based allintitle tool can overcome the captcha verification challenge posed by the Google search engine.

Keywords: allintitle, Google search engine, keyword competition.
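A minimal sketch of how an allintitle query is formed; the URL pattern is standard Google search syntax, while automated fetching of the result, as the abstract notes, runs into captcha verification, which the author's web-based tool addresses:

```python
import urllib.parse

def allintitle_url(keyword):
    # allintitle: restricts results to pages whose <title> contains every term
    q = urllib.parse.quote_plus(f"allintitle:{keyword}")
    return f"https://www.google.com/search?q={q}"

print(allintitle_url("keyword research tool"))
# https://www.google.com/search?q=allintitle%3Akeyword+research+tool
```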


Author(s):  
Pavel Šimek ◽  
Jiří Vaněk ◽  
Jan Jarolímek

The majority of Internet users use the global network to search for information with full-text search engines such as Google, Yahoo!, or Seznam. Website operators try, with the help of different optimization techniques, to reach the top places in full-text search engine results. This is where Search Engine Optimization and Search Engine Marketing matter greatly, because typical users usually follow only the links on the first few pages of results for given keywords, and in catalogs they primarily use the hierarchically higher-placed links in each category. The key to success is the application of optimization methods that address keywords, the structure and quality of content, domain names, individual pages, and the quantity and reliability of backlinks. The process is demanding, long-lasting, and without a guaranteed outcome. A website operator without advanced analytical tools cannot identify the contribution of the individual documents of which the entire website consists. If website operators want an overview of their documents and of the site as a whole, it is appropriate to quantify these positions in a specific way, depending on specific keywords. This is the purpose of quantifying the competitive value of documents, which in turn yields the global competitive value of a website. The quantification of competitive values is performed on a specific full-text search engine, and the results can, and often do, differ between engines. According to reports published by the ClickZ agency and Market Share, Google is, by the number of searches by English-speaking users, the most widely used search engine, with a market share of more than 80%. The overall procedure for quantifying competitive values is general; however, the initial step, the analysis of keywords, depends on the choice of full-text search engine.
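The abstract does not give the quantification formula, so the sketch below assumes a simple illustrative one: each document earns a position-based score per keyword, and the site's global competitive value is the sum over its documents:

```python
def document_value(positions, max_rank=100):
    """positions: {keyword: rank of this document in the SERP (1 = top)}."""
    # illustrative 1/position weighting: top positions dominate the score
    return sum(1.0 / rank for rank in positions.values() if rank <= max_rank)

def site_value(site):
    """site: {url: {keyword: rank}} -> global competitive value."""
    return sum(document_value(p) for p in site.values())

site = {"/articles/seo-basics": {"seo": 4, "search engine optimization": 12},
        "/articles/keywords":   {"keyword research": 2}}
print(round(site_value(site), 3))   # 0.25 + 0.083 + 0.5 = 0.833
```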


Compiler ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 71
Author(s):  
Aris Wahyu Murdiyanto ◽  
Adri Priadana

Keyword research is one of the essential activities in Search Engine Optimization (SEO). One technique in keyword research is to find out how many article titles indexed by the Google search engine contain a particular keyword, the so-called "allintitle" count. Moreover, search engines can also provide keyword suggestions. Gathering keyword suggestions and allintitle counts manually is not effective, efficient, or economical for relatively extensive keyword research, since deciding whether a keyword is worth optimizing takes a long time. Based on these problems, this study analyzes the implementation of a web scraping technique to automatically obtain relevant keyword suggestions from the Google search engine along with their allintitle counts. The experimental data consist of ten seed keywords, each generating a maximum of ten keyword suggestions; the ten seeds therefore yield at most 100 keyword suggestions with their allintitle counts. Based on the evaluation, we obtained an accuracy of 100%, indicating that the technique can be applied to obtain keyword suggestions and allintitle counts from the Google search engine with outstanding accuracy.
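A sketch of the suggestion half of such a scraper. It uses the well-known but unofficial Google suggest endpoint, which returns up to ten completions per seed as JSON and may change without notice; scraping the allintitle count from the results page is omitted here:

```python
import requests

def keyword_suggestions(seed):
    # unofficial suggest endpoint; client=firefox yields plain JSON
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": seed},
        timeout=10,
    )
    resp.raise_for_status()
    # response shape: [seed, [suggestion, ...]]
    return resp.json()[1]

for kw in keyword_suggestions("search engine optimization"):
    print(kw)
```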


2019 ◽  
Author(s):  
Muhammad Ilham Verardi Pradana

Thanks to the existence of search engines, all kinds of information and data can easily be found on the Internet, and the search engine users use most is Google. Google remains the most popular search engine for finding information available on the Internet. The search results Google provides, however, do not always give the results we want: Google displays results based on the keywords we type, so it sometimes shows negative content, such as pornography and porn sites, that appears related to the keyword, whether through the title or other elements. In this paper, we implement "DNS SEHAT" to pass along clients' request queries so that the Google search engine on the client's side provides more relevant search results without negative content.
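The abstract does not detail DNS SEHAT's mechanism; a common DNS-level approach is Google's documented forced-SafeSearch mapping, in which a resolver answers queries for www.google.com with the address of forcesafesearch.google.com so that the engine serves filtered results. A small sketch, assuming that approach, which only checks whether the local resolver applies such a mapping:

```python
import socket

def safesearch_forced():
    # address of Google's SafeSearch virtual IP
    safe_ip = socket.gethostbyname("forcesafesearch.google.com")
    # address the local resolver currently returns for www.google.com
    current_ip = socket.gethostbyname("www.google.com")
    return current_ip == safe_ip

print("SafeSearch enforced by resolver:", safesearch_forced())
```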


Author(s):  
GAURAV AGARWAL ◽  
SACHI GUPTA ◽  
SAURABH MUKHERJEE

Today, web servers are the key repositories of information, and the Internet is the means of accessing it. There is a mammoth amount of data on the Internet, and finding the relevant data is a difficult job; search engines play a vital role in doing so. A search engine follows these steps: web crawling by a crawler, indexing by an indexer, and searching by a searcher. The web crawler retrieves information from web pages by following every link on a site; the search engine stores this content, which the indexer then indexes. The main task of the indexer is to make data quickly retrievable according to user requirements. When a client issues a query, the search engine finds the results corresponding to that query to provide the best output. The ambition here is to develop a search engine algorithm that returns the most desirable results for the user's requirements, where a ranking method is used by the search engine to rank the web pages. Various ranking approaches are discussed in the literature, but in this paper a ranking algorithm is proposed that is based on parent-child relationships. The proposed algorithm is based on the priority-assignment phase of the Heterogeneous Earliest Finish Time (HEFT) algorithm, which was designed for multiprocessor task scheduling. The proposed algorithm works on three variables: the density of keywords, the number of successors of a node, and the age of the web page. Density is the number of occurrences of the keyword on a particular page; the number of successors is the number of outgoing links from a page; and age is the freshness value of the page, so a recently modified page is the freshest page, with the smallest age and the largest freshness value. The proposed technique sets the priority of each page from its downward rank value, and pages are arranged in ascending or descending order of their rank values. Experiments show that the algorithm is valuable: after comparison with Google, we find that our algorithm performs better, giving better results than Google on 70% of the problems.
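A sketch of the priority phase as described: each page gets a weight from keyword density, freshness (the inverse of age), and successor count; a HEFT-style downward rank is accumulated along parent-child links; and pages are sorted by rank. The exact weighting is an illustrative assumption, and the single-parent graph simplifies HEFT's maximum over predecessors:

```python
from functools import lru_cache

pages = {   # hypothetical site graph: page -> (keyword density, age, children)
    "A": (0.04, 2, ["B", "C"]),
    "B": (0.07, 9, ["D"]),
    "C": (0.02, 1, []),
    "D": (0.05, 4, []),
}
parents = {c: p for p, (_, _, ch) in pages.items() for c in ch}

def weight(page):
    density, age, children = pages[page]
    freshness = 1.0 / (1 + age)   # newer page -> larger freshness value
    # combine the three range variables (weights are illustrative)
    return density + freshness + 0.01 * len(children)

@lru_cache(maxsize=None)
def downward_rank(page):
    # HEFT-style downward rank: accumulate weight along the parent chain
    parent = parents.get(page)
    return downward_rank(parent) + weight(parent) if parent else 0.0

ranking = sorted(pages, key=lambda p: downward_rank(p) + weight(p),
                 reverse=True)
print(ranking)
```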


