Quantification of competitive value of documents

Author(s):  
Pavel Šimek ◽  
Jiří Vaněk ◽  
Jan Jarolímek

The majority of Internet users use the global network to search for information with full-text search engines such as Google, Yahoo!, or Seznam. Website operators therefore try, with the help of various optimization techniques, to reach the top places in full-text search engine results. This is where Search Engine Optimization and Search Engine Marketing matter greatly, because ordinary users usually follow links only on the first few pages of results for given keywords, and in catalogs they primarily use the links placed higher in each category's hierarchy. The key to success is the application of optimization methods dealing with keywords, the structure and quality of content, domain names, individual pages, and the quantity and reliability of backlinks. The process is demanding, long-lasting, and without a guaranteed outcome. Without advanced analytical tools, a website operator cannot identify the contribution of the individual documents of which the entire website consists. If website operators want an overview of their documents and of the website as a whole, it is appropriate to quantify these positions in a specific way, depending on particular keywords. This is the purpose of quantifying the competitive value of documents, which in turn yields the global competitive value of a website. The quantification of competitive values is performed on a specific full-text search engine; results can, and often do, differ between engines. According to published reports by the ClickZ agency and Market Share, Google is the most widely used search engine by number of searches among English-speaking users, with a market share of more than 80%. The overall quantification procedure is common to all engines; however, the initial step, the analysis of keywords, depends on the choice of full-text search engine.
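The abstract does not give the authors' actual formula, but the idea of turning per-keyword search positions into a per-document score and aggregating those into a site-wide value can be sketched as follows. The 1/position weighting and all data here are illustrative assumptions, not the method from the paper:

```python
# Hypothetical sketch: aggregate per-keyword search positions of a
# site's documents into a single "competitive value". The 1/position
# weighting is an illustrative assumption, not the authors' formula.

def document_value(positions):
    """positions: {keyword: 1-based rank of the document in the results}."""
    # A document ranked 1st for a keyword contributes 1.0, 2nd 0.5, etc.
    return sum(1.0 / rank for rank in positions.values())

def site_value(documents):
    """documents: {url: {keyword: rank}} -> global competitive value."""
    return sum(document_value(p) for p in documents.values())

site = {
    "example.com/a": {"agrarian portal": 1, "farm data": 4},
    "example.com/b": {"farm data": 2},
}
print(round(site_value(site), 2))  # 1.0 + 0.25 + 0.5 = 1.75
```

A higher value simply means the site's documents occupy better positions for more of the analyzed keywords; any monotonically decreasing position weight would serve the same illustrative purpose.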

Author(s):  
Cornelia Gyorodi ◽  
Robert Gyorodi ◽  
George Pecherle ◽  
George Mihai Cornea

In this article we explain how to create a search engine using the powerful MySQL full-text search. The ever-increasing demands of the web require cheap yet elaborate search options. One of the most important requirements for a search engine is the capacity to order its result set by relevance and to provide the user with suggestions in the case of a spelling mistake or a small result set. To fulfill this requirement we use MySQL full-text search, an option suitable for small- to medium-scale websites. To provide sounds-like capabilities, a second table is created containing a bag of words from the main table together with each word's metaphone code. When a suggestion is needed, this table is queried for the metaphone code of the searched word, and a suggestion is computed from the result set.
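The suggestion mechanism described above can be sketched in a few lines. The article uses metaphone codes; as a simpler stand-in, this sketch uses Soundex, another phonetic code, and an in-memory dict mimics the secondary MySQL table of (word, phonetic code) pairs:

```python
# Sketch of the suggestion mechanism: a bag-of-words table keyed by a
# phonetic code. Soundex is used here as a simpler stand-in for the
# metaphone codes the article describes; the table is an in-memory dict.

def soundex(word):
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":           # h/w do not reset the previous code
            prev = code
    return (out + "000")[:4]

# Secondary table: phonetic code -> known dictionary words.
bag_of_words = {}
for w in ["search", "engine", "relevance", "suggestion"]:
    bag_of_words.setdefault(soundex(w), []).append(w)

def suggest(misspelled):
    """Return dictionary words sharing the misspelled word's phonetic code."""
    return bag_of_words.get(soundex(misspelled), [])

print(suggest("serch"))   # ['search'] - same phonetic code as "search"
```

In the article's setup the same lookup would be a SELECT on the metaphone column of the secondary table rather than a dict access.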


2017 ◽  
Vol 33 (06) ◽  
pp. 665-669 ◽  
Author(s):  
Amar Gupta ◽  
Michael Nissan ◽  
Michael Carron ◽  
Giancarlo Zuliani ◽  
Hani Rayess

The Internet is the primary source of information for facial plastic surgery patients. Most patients analyze information only in the first 10 Web sites retrieved. The aim of this study was to determine factors critical for improving Web site traffic and search engine optimization. A Google search for "rhinoplasty" was performed in Michigan. The first 20 distinct Web sites originating from private sources were included; "private" was defined as the personal Web sites of private-practice physicians. The Web sites were evaluated using SEOquake and WooRANK, publicly available programs that analyze Web sites. Factors examined included the presence of social media, the number of distinct pages on the Web site, traffic to the Web site, the use of keywords such as "rhinoplasty" in the heading and meta description, average visit duration, traffic coming from search, bounce rate, and the number of advertisements. Readability and Web site quality were also analyzed using the DISCERN instrument and the Health on the Net Foundation code principles. The first 10 Web sites were compared with the latter 10 using Student's t-tests. The first 10 Web sites received a significantly lower proportion of their traffic from search engines than the second 10. The first 10 Web sites also had significantly fewer occurrences of the keyword "nose" in their meta descriptions. The first 10 Web sites were significantly more reliable according to the DISCERN instrument, scoring an average of 2.42 compared with 2.05 for the second 10 (p = 0.029). Search engine optimization is critical for facial plastic surgeons as it improves online presence, potentially resulting in increased traffic and more patient visits. However, Web sites that rely too heavily on search engines for traffic are less likely to be in the top 10 search results. Web site curators should maintain a wide focus for obtaining Web site traffic, possibly including advertising and publishing information in third-party sources such as RealSelf.


2012 ◽  
Vol 532-533 ◽  
pp. 1282-1286
Author(s):  
Zhi Chao Lin ◽  
Lei Sun ◽  
Xiao Liu

The World Wide Web contains a vast amount of information, and obtaining the required resources from it quickly and accurately through content-based search engines has become a research focus. Most current full-text web search tools, such as Lucene, a widely used open-source retrieval library in the information retrieval field, are purely keyword-based. This may not be sufficient for users searching the web. In this paper, we employ a method to overcome the limitations of current full-text search engines, represented by Lucene. We propose a Query Expansion and Information Retrieval approach which can help users acquire more accurate content from the web. The Query Expansion component finds expanded candidate words for the query word through WordNet, which contains synonyms in several different senses; in the Information Retrieval component, the query word and its candidate words are used together as the input of the search module to obtain the result items. Furthermore, we can group the result items into different classes based on the expansion. Experiments and their results are described in the later part of this paper.
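The expansion-then-retrieval flow described in the abstract can be sketched compactly. A tiny hand-made synonym map stands in for WordNet, and a keyword scan over a toy corpus stands in for Lucene; grouping the hits by the expansion term that matched illustrates the classification step:

```python
# Sketch of query expansion + retrieval. A hand-made synonym map stands
# in for WordNet, and a keyword scan over a toy corpus stands in for
# Lucene. Hits are grouped by the expansion term that matched them.

SYNONYMS = {"car": ["automobile", "auto"]}   # stand-in for WordNet senses

corpus = {
    1: "used automobile dealers in town",
    2: "car insurance rates",
    3: "auto repair manual",
    4: "train schedules",
}

def expand(query):
    """The query word plus its expanded candidate words."""
    return [query] + SYNONYMS.get(query, [])

def search(query):
    """Return {expansion term: [matching document ids]}."""
    grouped = {}
    for term in expand(query):
        hits = [doc_id for doc_id, text in corpus.items()
                if term in text.split()]
        if hits:
            grouped[term] = hits
    return grouped

print(search("car"))   # {'car': [2], 'automobile': [1], 'auto': [3]}
```

Document 4 is never returned, while documents 1 and 3, which a purely keyword-based search for "car" would miss, are retrieved and labeled by the synonym that found them.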


Author(s):  
Pat Case

The Web changed the paradigm for full-text search. Searching Google for search engines returns 57,300,000 results at this writing, an impressive result set. Web search engines favor simple searches, speed, and relevance ranking, and the end user most often finds a wanted result or two within the first page of search results. This new paradigm is less useful for searching collections of homogeneous data and documents than it is for searching the web. When searching collections, end users may need to review everything in the collection on a topic, may want a clean result set of only the 6 high-quality results, or may need to confirm that there are no wanted results, because finding no results within a collection sometimes answers a question about a topic or collection. To accomplish these tasks, end users may need more functionality to return small, manageable result sets. The W3C XQuery and XPath Full Text Recommendation (XQFT) offers extensive end-user functionality, restoring the control that librarians and expert searchers enjoyed before the Web. XQFT offers more end-user functionality and control than any other full-text search standard: more match options, more logical operators, more proximity operators, more ways to return a manageable result set. XQFT searches are also completely composable with XQuery string, number, date, and node queries, bringing the power of full-text search and database querying together for the first time. XQFT searches run directly against XML, enabling searches on any elements or attributes. XQFT implementations are standards-driven, based on shared semantics and syntax: a search written for one implementation is portable and may be used in others.
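To make the proximity operators mentioned above concrete, here is a minimal Python sketch of a "distance at most N words" check, the kind of constraint XQFT expresses directly in query syntax. The Python version only illustrates the semantics; it is not the standard itself:

```python
# Minimal sketch of a "distance at most N words" proximity check, the
# kind of operator XQFT provides in query syntax (roughly:
#   . contains text "usability" ftand "testing" distance at most 2 words
# ). This Python function is only an illustration of the semantics.

def within_distance(text, word_a, word_b, max_words):
    words = text.lower().split()
    pos_a = [i for i, w in enumerate(words) if w == word_a]
    pos_b = [i for i, w in enumerate(words) if w == word_b]
    # Distance = number of words strictly between the two matches.
    return any(abs(a - b) - 1 <= max_words for a in pos_a for b in pos_b)

doc = "usability and performance testing of search interfaces"
print(within_distance(doc, "usability", "testing", 2))  # True
print(within_distance(doc, "usability", "testing", 1))  # False
```

An end user composing such constraints with logical operators and match options is exactly the kind of control the Recommendation restores for collection searching.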


2012 ◽  
Vol 02 (04) ◽  
pp. 106-109 ◽  
Author(s):  
Rujia Gao ◽  
Danying Li ◽  
Wanlong Li ◽  
Yaze Dong

Author(s):  
Mary Holstege

To a search engine, indexes are specified by the content: the words, phrases, and characters that are actually present tell the search engine what inverted indexes to create. Other external knowledge can be applied to add to this inventory of indexes. For example, knowledge of the document language can lead to indexes for word stems or decompounding. These can unify different content into the same index or split the same content into multiple indexes. That is, different words manifest in the content can be unified under a single search key, and the same word can have multiple manifestations under different search keys. Turning this around, the indexes represent the retrievable information content of the document. Full-text search is not an either/or, yes/no system, but one of relative fit (scoring); precision balances against recall, mediated by scoring. The search engine perspective offers a different way to think about markup: as a specification of the retrievable information content of the document; as something that can, with additional information, unify different markup or provide multiple distinct views of the same markup; as something that can be present to greater or lesser degrees, with a goodness of match (scoring); and as a specification that can be adjusted to balance precision and recall. What does this search engine perspective on markup mean, concretely? Can we use it to reframe some persistent conundrums, such as vocabulary resolution and overlap? Let's see.
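The unification idea above, different surface words collected under a single search key, can be sketched with an inverted index built over stems. The suffix-stripping "stemmer" here is a crude stand-in for a real language-aware stemmer:

```python
# Sketch of unification under a single search key: an inverted index
# built over word stems. The suffix-stripping "stemmer" is a crude
# stand-in for a real language-aware stemmer or decompounder.

def stem(word):
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_index(docs):
    """docs: {doc_id: text} -> inverted index {stem: set of doc ids}."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index.setdefault(stem(word), set()).add(doc_id)
    return index

docs = {1: "searching documents", 2: "she searched the index", 3: "a search key"}
index = build_index(docs)
print(sorted(index["search"]))  # [1, 2, 3] - three surface forms, one key
```

The reverse direction, one surface form indexed under several keys, would simply mean `stem` returning multiple keys per word, e.g. the parts of a compound alongside the whole.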


Compiler ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 71
Author(s):  
Aris Wahyu Murdiyanto ◽  
Adri Priadana

Keyword research is one of the essential activities in Search Engine Optimization (SEO). One technique in keyword research is to find out how many article titles on websites indexed by the Google search engine contain a particular keyword, the so-called "allintitle" count. Moreover, search engines are also able to provide keyword suggestions. Getting keyword suggestions and allintitle counts manually is neither effective, efficient, nor economical for relatively extensive keyword research; it would take a long time to decide whether a keyword needs to be optimized. Based on these problems, this study aimed to analyze the implementation of the web scraping technique to automatically obtain relevant keyword suggestions from the Google search engine together with the number of allintitle results for each. The data used in this test consist of ten keywords, each of which generates a maximum of ten keyword suggestions; ten keywords therefore produce at most 100 keyword suggestions with their allintitle counts. Based on the evaluation, we obtained an accuracy of 100%, indicating that the technique can be applied to get keyword suggestions and allintitle counts from the Google search engine with outstanding accuracy.
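The paper does not specify its endpoints or parsing rules, so the two scraping inputs can only be sketched under assumptions: building the search URL for an `allintitle:` query, and parsing a suggest-style JSON response of the shape `["query", ["sugg1", ...]]`. Both the suggest response layout and the idea that no further parameters are needed are assumptions, and no network call is made here:

```python
# Sketch of the two scraping inputs described above. The suggest JSON
# layout is an assumption about a typical suggest response, not a
# detail from the paper; no network request is made.

import json
from urllib.parse import urlencode

def allintitle_url(keyword):
    """URL of a Google search restricted to titles containing the keyword."""
    return "https://www.google.com/search?" + urlencode(
        {"q": f"allintitle:{keyword}"})

def parse_suggestions(raw_json):
    """Assumed suggest layout: ["query", ["sugg1", "sugg2", ...]]."""
    _query, suggestions = json.loads(raw_json)[:2]
    return suggestions

print(allintitle_url("keyword research"))
# https://www.google.com/search?q=allintitle%3Akeyword+research
sample = '["seo", ["seo tools", "seo keyword research"]]'
print(parse_suggestions(sample))  # ['seo tools', 'seo keyword research']
```

In the study's pipeline, each suggestion returned by the first step would be fed through `allintitle_url` and the result page scraped for the reported result count.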


2011 ◽  
Vol 21 (2) ◽  
pp. 191-196
Author(s):  
Tatsuma KAWANAKA ◽  
Yukiharu WATAGAMI ◽  
Takehiko MURAKAWA ◽  
Masaru NAKAGAWA
