Increasing the travel agency’s leading positions by optimizing its website

2020 ◽  
Vol 28 (3) ◽  
pp. 81-91
Author(s):  
Tetyana S. Dronova ◽  
Yana Y. Trygub

Purpose – to study the operation and content of a travel agency's website, using the "Laspi" travel agency as an example, identify its technical properties, and propose methods for raising the web resource's position in the Yandex and Google search engines by performing an SEO analysis. Design/Method/Research approach. SEO analysis of Internet resources. Findings. Promotion of a travel product depends directly on the effectiveness of the advertising tools used by travel market participants, primarily travel agents. It is determined that SEO technology is one of the newer technologies that increase advertising effectiveness, in particular via travel agencies' web resources. The authors identified technical shortcomings in the site's operation, mainly related to search query statistics, visits to the subject site, the functioning of the semantic core, site improvement, site citation, and the number of persistent references to it in the network. It is shown that updating the site, changing its environment, and analyzing user behavior (namely adding Og Properties micro-markup, updating HTML tags, placing analytical programs, and selecting iframe objects, among other activities) increase the uniqueness of the content. As a result, search engines re-crawled the site and it reached first place in the search results for the positions essential to the web resource. Originality/Value. Applying the proposed mechanism for raising the site's position and optimizing the website's operation allows search engines to bring it to the top of the most popular travel sites. Theoretical implications. To optimize the web resource's operation, a three-step mechanism for improving its position is proposed: a general marketing characterization of the website, SEO analysis, and provision of recommendations. Practical implications. The research has practical value for improving the site's technical operation and raising its position in the Yandex and Google search engines. Research limitations/Future research. Further research will analyze the site again after the proposed changes to its operation have been made. Paper type – empirical.
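
As an illustration of the kind of on-page check such an SEO analysis relies on, the sketch below fetches a page and reports which common Open Graph ("Og Properties") meta tags are present or missing. It is a minimal example assuming the requests and BeautifulSoup libraries; the URL and the list of required properties are placeholders, not details from the original study.

```python
import requests
from bs4 import BeautifulSoup

def audit_og_tags(url):
    """Fetch a page and report which common Open Graph properties are present."""
    # Properties commonly checked in an SEO audit (placeholder list).
    required = {"og:title", "og:description", "og:image", "og:url"}
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = {tag.get("property")
             for tag in soup.find_all("meta")
             if tag.get("property", "").startswith("og:")}
    return {"present": sorted(found), "missing": sorted(required - found)}

# Example (placeholder URL):
# print(audit_og_tags("https://example.com"))
```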

10.28945/4176 ◽  
2019 ◽  
Vol 14 ◽  
pp. 027-044 ◽  
Author(s):  
Da Thon Nguyen ◽  
Hanh T Tan ◽  
Duy Hoang Pham

Aim/Purpose: In this article, we provide a better solution to webpage access prediction. In particular, our core proposed approach increases accuracy and efficiency by reducing the sequence space through the integration of PageRank into CPT+. Background: The problem of predicting the next page on a website has become significant because of the continuous growth of the Internet in both the volume of content and the number of users. Webpage prediction is complex because multiple kinds of information must be considered, such as the webpage name, the contents of the webpage, the user profile, the time between webpage visits, differences among users, and the time spent on a page or on each part of the page. Webpage access prediction therefore draws substantial effort from the web mining research community, both to obtain valuable information and to improve user experience. Methodology: CPT+ is a complex prediction algorithm that offers dramatically more accurate predictions than other state-of-the-art models. Integrating the importance of each page on a website (i.e., its PageRank) with regard to its associations with other pages into the CPT+ model can improve the performance of the existing model. Contribution: In this paper, we propose an approach that reduces the prediction space while improving accuracy by combining the CPT+ and PageRank algorithms. Experimental results on several real datasets indicate that the space is reduced by roughly 15% to 30%; as a result, the run-time is shorter and the prediction accuracy is improved, so researchers can conveniently continue using CPT+ to predict webpage access. Findings: Our experimental results indicate that the PageRank algorithm is a good way to improve CPT+ prediction: approximately 15% to 30% of redundant data is removed from the datasets while accuracy improves. Recommendations for Practitioners: The results of the article could be used in developing relevant applications such as webpage and product recommendation systems. Recommendation for Researchers: The paper provides a prediction model that integrates the CPT+ and PageRank algorithms to tackle the problems of complexity and accuracy. The model has been evaluated against several real datasets to show its performance. Impact on Society: Given an improved model for predicting webpage access, usable in fields such as e-learning, product recommendation, link prediction, and user behavior prediction, society can enjoy a better experience and a more efficient environment while surfing the Web. Future Research: We intend to further improve the accuracy of webpage access prediction by combining CPT+ with other algorithms.
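
The paper combines PageRank scores with CPT+; the CPT+ side is not reproduced here, but the minimal sketch below shows the iterative PageRank computation on a small link graph, which is the page-importance signal the authors integrate into the prediction model. Function and parameter names are illustrative.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively compute PageRank for a dict {page: [pages it links to]}."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            if not targets:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# Toy link graph:
# print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```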


Author(s):  
Pavel Šimek ◽  
Jiří Vaněk ◽  
Jan Jarolímek

The majority of Internet users use the global network to search for information with full-text search engines such as Google, Yahoo!, or Seznam. Web presentation operators try, with the help of various optimization techniques, to reach the top places in full-text search engine results. This is where Search Engine Optimization and Search Engine Marketing matter greatly, because ordinary users usually try only the links on the first few pages of full-text search results for given keywords, and in catalogs they primarily use links placed higher in the hierarchy of each category. The key to success is applying optimization methods that address keywords, the structure and quality of content, domain names, individual pages, and the quantity and reliability of backlinks. The process is demanding, long-lasting, and without a guaranteed outcome. Without advanced analytical tools, a website operator cannot identify the contribution of the individual documents that make up the entire website. If web presentation operators want an overview of their documents and of the website as a whole, it is appropriate to quantify these positions in a specific way, depending on specific keywords. The quantification of the competitive value of documents serves this purpose and, in turn, yields the global competitive value of the website. Quantification of competitive values is performed against a specific full-text search engine, and the results can be, and often are, different for each engine. According to published reports by the ClickZ agency and Market Share, Google is the most widely used search engine by number of searches among English-speaking users, with a market share of more than 80%. The overall procedure for quantifying competitive values is general; however, the initial step, the analysis of keywords, depends on the choice of full-text search engine.
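
The abstract does not give the exact scoring formula, so the sketch below is only a hypothetical illustration of how per-keyword positions in a chosen full-text search engine could be aggregated into a document's competitive value and then into a global site value; the weighting scheme and function names are assumptions, not the authors' method.

```python
def document_competitive_value(positions, max_position=100):
    """Hypothetical score. `positions` maps keyword -> rank of the document
    in the chosen search engine; rank 1 contributes the most, ranks beyond
    max_position contribute nothing."""
    score = 0.0
    for keyword, rank in positions.items():
        if 1 <= rank <= max_position:
            score += (max_position - rank + 1) / max_position
    return score

def site_competitive_value(documents):
    """Global site value as the sum of its documents' values (one possible aggregation)."""
    return sum(document_competitive_value(pos) for pos in documents.values())

# Illustrative input: two documents ranked for a few keywords.
# site_competitive_value({"/tours": {"cheap tours": 3, "last minute": 12},
#                         "/blog":  {"travel tips": 45}})
```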


Compiler ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 71
Author(s):  
Aris Wahyu Murdiyanto ◽  
Adri Priadana

Keyword research is one of the essential activities in Search Engine Optimization (SEO). One technique in keyword research is to find out how many article titles on websites indexed by the Google search engine contain a particular keyword, the so-called "allintitle" count. Moreover, search engines can also provide keyword suggestions. Getting keyword suggestions and allintitle counts manually is not effective, efficient, or economical for relatively extensive keyword research; it takes a long time to decide whether a keyword needs to be optimized. Based on these problems, this study aimed to analyze the implementation of a web scraping technique that automatically obtains relevant keyword suggestions from the Google search engine together with their allintitle counts. The data used in this experiment consist of ten keywords, each of which generates a maximum of ten keyword suggestions; ten keywords therefore yield at most 100 keyword suggestions and the corresponding allintitle counts. Based on the evaluation results, we obtained an accuracy of 100%, indicating that the technique can be applied to obtain keyword suggestions and allintitle counts from the Google search engine with outstanding accuracy.
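
A minimal sketch of the suggestion-gathering half of such a scraper is shown below. It uses the widely known but unofficial Google suggest endpoint; the endpoint, parameters, and response layout are assumptions about current behavior rather than a documented API, and the allintitle count (which requires scraping the result page itself) is only described in a comment because that part is brittle and rate-limited.

```python
import requests

def google_suggestions(keyword, limit=10):
    """Query the (unofficial) Google suggest endpoint for keyword ideas."""
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": keyword},
        timeout=10,
    )
    resp.raise_for_status()
    # Response shape assumed: [query, [suggestion1, suggestion2, ...]]
    return resp.json()[1][:limit]

# print(google_suggestions("seo tools"))
#
# Counting "allintitle" results would mean requesting
# https://www.google.com/search?q=allintitle:"<keyword>" and parsing the
# result-count element, which changes layout frequently and is rate-limited.
```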


2017 ◽  
Vol 33 (06) ◽  
pp. 665-669 ◽  
Author(s):  
Amar Gupta ◽  
Michael Nissan ◽  
Michael Carron ◽  
Giancarlo Zuliani ◽  
Hani Rayess

The Internet is the primary source of information for facial plastic surgery patients. Most patients analyze information only from the first 10 websites retrieved. The aim of this study was to determine factors critical for improving website traffic and search engine optimization. A Google search for "rhinoplasty" was performed in Michigan. The first 20 distinct websites originating from private sources were included; private was defined as a personal website for a private practice physician. The websites were evaluated using SEOquake and WooRANK, publicly available programs that analyze websites. Factors examined included the presence of social media, the number of distinct pages on the website, traffic to the website, use of keywords such as rhinoplasty in the heading and meta description, average visit duration, traffic coming from search, bounce rate, and the number of advertisements. Readability and website quality were also analyzed using the DISCERN instrument and the Health on the Net Foundation code principles. The first 10 websites were compared with the latter 10 using Student's t-tests. The first 10 websites received a significantly lower portion of their traffic from search engines than the second 10. The first 10 websites also had significantly fewer occurrences of the keyword "nose" in the meta description. The first 10 websites were also significantly more reliable according to the DISCERN instrument, scoring an average of 2.42 compared with 2.05 for the second 10 (p = 0.029). Search engine optimization is critical for facial plastic surgeons, as it improves online presence. This may result in increased traffic and an increase in patient visits. However, websites that rely too heavily on search engines for traffic are less likely to be in the top 10 search results. Website curators should maintain a broad strategy for obtaining website traffic, possibly including advertising and publishing information in third-party sources such as "RealSelf."
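
For readers unfamiliar with the statistical comparison used, the snippet below shows how a two-sample Student's t-test of DISCERN scores could be run with SciPy. The score lists are placeholder values for illustration only, not the study's data.

```python
from scipy import stats

# Placeholder DISCERN reliability scores (illustrative values, not the study's data).
first_ten = [2.6, 2.4, 2.5, 2.3, 2.4, 2.5, 2.3, 2.4, 2.4, 2.4]
second_ten = [2.1, 2.0, 2.1, 2.0, 2.1, 2.0, 2.1, 2.0, 2.1, 2.0]

t_stat, p_value = stats.ttest_ind(first_ten, second_ten)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```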


Author(s):  
Sunny Sharma ◽  
Vijay Rana

Existing studies have already revealed that information on the web is increasing rapidly. Ambiguous queries and users' limited ability to express their intention through queries have been among the key challenges in retrieving accurate search results from a search engine. In response, this paper explores different methodologies proposed by eminent researchers during 2005-2019 for recommending better search results. Some of these methodologies are based on the user's geographical location, while others rely on re-ranking the web results or refining the user's query. Fellow researchers can use this survey to define the fundamental literature for their own work. A brief case study of major search engines such as Google, Yahoo, and Bing, along with the techniques these engines use for personalization, is also presented. Finally, the paper discusses some current issues and challenges related to personalization, which in turn lay out future research directions.


2018 ◽  
Vol 7 (2.7) ◽  
pp. 359
Author(s):  
Dr Jkr Sastry ◽  
M Sri Harsha Vamsi ◽  
R Srinivas ◽  
G Yeshwanth

Web clients use the web to search for content by supplying keywords or snippets as input to search engines. A search engine follows a process to collect content and return it as output in the form of URL links. One can observe that only about 20% of the returned URLs are of use to the end user; the remaining 80% are surfed unnecessarily, wasting money and time. Users have surfing characteristics that can be collected as they keep surfing, and the search process can be made more efficient by including these user characteristics and behaviors as part of it. This paper aims to improve the search process by integrating user behavior into the indexing and ranking of web pages.
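
The paper does not spell out its ranking formula, but as a hedged illustration of folding user behavior into ranking, the sketch below re-orders results by blending each engine score with the user's past click frequency on the result's domain. The field names, the click-history structure, and the blending weight are all assumptions.

```python
from urllib.parse import urlparse

def rerank(results, click_history, weight=0.5):
    """Blend each result's engine score with the user's past click frequency
    on the result's domain (an assumed behavioural signal)."""
    max_clicks = max(click_history.values(), default=0) or 1

    def blended(result):
        domain = urlparse(result["url"]).netloc
        behaviour = click_history.get(domain, 0) / max_clicks
        return (1 - weight) * result["score"] + weight * behaviour

    return sorted(results, key=blended, reverse=True)

# Illustrative inputs:
# results = [{"url": "https://a.example/p1", "score": 0.9},
#            {"url": "https://b.example/p2", "score": 0.8}]
# click_history = {"b.example": 12}
# rerank(results, click_history)
```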


2020 ◽  
pp. 144-151
Author(s):  
Ngo Le Huy Hien ◽  
Thai Quang Tien ◽  
Nguyen Van Hieu

The World Wide Web is a large, rich, and accessible information system whose number of users is increasing rapidly. To retrieve information from the web in response to users' requests, search engines are built to access web pages. As search engine systems play a significant role in cybernetics, telecommunication, and physics, many efforts have been made to enhance their capacity. However, most of the data contained on the web is unmanaged, making it impossible for current search engine mechanisms to access the entire network at once. The web crawler is therefore a critical part of a search engine, navigating the web and downloading the full text of web pages. Web crawlers may also be applied to detect missing links and to perform community detection in complex networks and cybernetic systems. However, template-based crawling techniques cannot handle the layout diversity of objects on web pages. In this paper, a web crawler module was designed and implemented that attempts to extract article-like contents from 495 websites. It uses a machine learning approach with visual cues, trivial HTML, and text-based features to filter out clutter. The outcomes are promising for extracting article-like contents from websites, contributing to the development of search engine systems, with future research geared towards proposing higher-performance systems.
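
The paper's filter is learned from visual and text-based features; as a much simpler stand-in, the sketch below crawls one page and keeps blocks that are long enough and have low link density, a common heuristic for article-like content. Thresholds and tag choices are illustrative assumptions, not the authors' model.

```python
import requests
from bs4 import BeautifulSoup

def extract_article_blocks(url, min_chars=200, max_link_density=0.3):
    """Keep text blocks that are long enough and not dominated by anchor text.
    A crude heuristic stand-in for the paper's learned filter; nested blocks
    are not deduplicated."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    blocks = []
    for node in soup.find_all(["article", "p", "div"]):
        text = node.get_text(" ", strip=True)
        if len(text) < min_chars:
            continue
        link_text = sum(len(a.get_text(" ", strip=True)) for a in node.find_all("a"))
        if link_text / len(text) <= max_link_density:
            blocks.append(text)
    return blocks

# blocks = extract_article_blocks("https://example.com/some-article")
```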


2020 ◽  
Vol 17 (2) ◽  
pp. 263-291
Author(s):  
Rania Mousa ◽  
Robert Pinsker

Purpose The purpose of this paper is to examine the implementation and development of eXtensible Business Reporting Language (XBRL) at the Federal Deposit Insurance Corporation (FDIC). The investigation seeks to gauge the roles and experiences of the FDIC and its main stakeholders to determine their engagement in XBRL diffusion within their organizations. Design/methodology/approach This is a qualitative research approach driven by an in-depth case study and supported by semi-structured interviews. Findings The findings showcase the role played by the FDIC as the first US regulatory authority to implement and develop Inline XBRL. In addition, the use of diffusion of innovation theory provides a better understanding of each stakeholder's issues, benefits and challenges based on their experience. Research limitations/implications The research does not examine the institutionalization of XBRL at the FDIC or its stakeholders. Future research could therefore adopt a different research design to capture the impact of the pressure resulting from the regulatory mandate. Practical implications The research offers practical insights for public information technology managers and policymakers at government agencies worldwide that are either non-adopters of XBRL technology or current adopters considering a transition to Inline XBRL. Global stakeholders could learn from the US experience and develop a better understanding of Inline XBRL applications and functionalities. Originality/value The originality of this research is driven by the FDIC's experience as the first regulatory developer of Inline XBRL. As such, the case study offers best practice to future and current adopters, who must often navigate the nuances of implementing new technologies and/or developing existing ones.


2017 ◽  
Author(s):  
Jorge Martinez-Gil ◽  
José F. Aldana-Montes

The problem of matching schemas or ontologies consists of finding corresponding entities in two or more knowledge models that belong to the same domain but have been developed separately. There are now many techniques and tools for addressing this problem; however, the complex nature of the matching problem means that existing solutions are not fully satisfactory in real situations. The Google Similarity Distance appeared recently; its purpose is to mine knowledge from the Web using the Google search engine in order to compare text expressions semantically. Our work consists of developing a software application for validating results discovered by schema and ontology matching tools using the philosophy behind this distance. Moreover, we are interested in using not only Google but also other popular search engines with this similarity distance. The results reveal three main facts. First, some web search engines can help us validate semantic correspondences satisfactorily. Second, there are significant differences among the web search engines. Third, the best results are obtained when using combinations of the web search engines we studied.
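
For reference, the (Normalized) Google Similarity Distance the validation builds on can be computed directly from search-engine hit counts, as in the sketch below; the example hit counts in the comment are made up.

```python
import math

def normalized_google_distance(hits_x, hits_y, hits_xy, total_pages):
    """Normalized Google Distance from search-engine hit counts:
    values near 0 mean the terms almost always co-occur; larger values
    mean a weaker semantic association."""
    fx, fy, fxy = math.log(hits_x), math.log(hits_y), math.log(hits_xy)
    n = math.log(total_pages)
    return (max(fx, fy) - fxy) / (n - min(fx, fy))

# Example with made-up hit counts for "ontology" and "schema":
# normalized_google_distance(2.1e7, 5.4e8, 6.3e6, 5e10)
```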


2017 ◽  
pp. 030-050
Author(s):  
J.V. Rogushina

Problems associated with improving information retrieval for an open environment are considered, and the need for its semantization is substantiated. The current state and development prospects of semantic search engines focused on processing Web information resources are analyzed, and criteria for the classification of such systems are reviewed. In this analysis, significant attention is paid to the use in semantic search of ontologies that contain knowledge about the subject area and the search users. The sources of ontological knowledge and the methods of processing them to improve search procedures are considered. Examples are given of semantic search systems that use structured query languages (e.g., SPARQL), lists of keywords, and queries in natural language. Criteria for the classification of semantic search engines, such as architecture, coupling, transparency, user context, query modification, ontology structure, etc., are considered. Different ways of supporting semantic, ontology-based modification of user queries that improve the completeness and accuracy of search are analyzed. Based on an analysis of the properties of existing semantic search engines against these criteria, areas for further improvement of these systems are identified: the development of metasearch systems, semantic modification of user requests, determination of a user-acceptable level of transparency of the search procedures, flexibility of domain knowledge management tools, and increased productivity and scalability. In addition, the development of semantic Web search tools requires the use of an external knowledge base that contains knowledge about the domain of the user's information needs, and requires giving users the ability to independently select the knowledge used in the search process. It is also necessary to take into account the history of user interaction with the retrieval system and the search context in order to personalize query results and order them in accordance with the user's information needs. All these aspects were taken into account in the design and implementation of the semantic search engine "MAIPS", which is based on an ontological model of the cooperation of users and resources on the Web.
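
As a small illustration of the structured-query style of semantic search mentioned above (not of MAIPS itself), the sketch below sends a SPARQL query to the public DBpedia endpoint using the SPARQLWrapper library; the endpoint, prefixes, and query are illustrative choices.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Public DBpedia endpoint, used here purely for illustration.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?abstract WHERE {
        dbr:Semantic_search dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"])
```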

