KEYWORD ANALYSIS USING STATISTICAL METHODS

Author(s):  
Dominika Krasňanská ◽  
Mária Vojtková

There are currently more than a billion websites worldwide. Among so many sites, every owner wants to be visible in search engines for the keywords that people actually search for. The article deals with the process of creating keywords, through which the intention of the searcher can be identified. The process consists of several steps: collecting the keywords, cleaning them, categorizing them, and finally interpreting them. The paper focuses mainly on the categorization of keywords using statistical methods, including a method for visualizing relationships between keywords by determining the strength of the association between words, known as concept linking or a term map.
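To make the idea of concept linking concrete, the following minimal sketch (not taken from the paper) scores the association strength between keyword pairs across a small set of search queries with a Jaccard-style measure; the query list and the 0.3 cut-off are invented purely for illustration.

```python
# Illustrative sketch: association strength between keywords across search queries,
# the basic idea behind a concept-linking / term-map visualization.
from collections import Counter
from itertools import combinations

queries = [
    "cheap flight tickets",
    "cheap flight to london",
    "flight tickets london",
    "hotel london cheap",
]

term_counts = Counter()
pair_counts = Counter()
for q in queries:
    terms = set(q.split())
    term_counts.update(terms)
    pair_counts.update(combinations(sorted(terms), 2))

# Jaccard-style strength: co-occurrences / occurrences of either term
for (a, b), co in pair_counts.items():
    strength = co / (term_counts[a] + term_counts[b] - co)
    if strength >= 0.3:  # arbitrary cut-off for the illustration
        print(f"{a} -- {b}: {strength:.2f}")
```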

Author(s):  
B. J. Jansen ◽  
A. Spink

People are now confronted with the task of locating electronic information needed to address the issues of their daily lives. The Web is presently the major information source for many people in the U.S. (Cole, Suman, Schramm, Lunn, & Aquino, 2003), used more than newspapers, magazines, and television as a source of information. Americans are expanding their use of the Web for all sorts of informational and commercial purposes (Horrigan, 2004; Horrigan & Rainie, 2002; National Telecommunications and Information Administration, 2002). Searching for information is one of the most popular Web activities, second only to the use of e-mail (Nielsen Media, 1997). However, successfully locating needed information remains a difficult and challenging task (Eastman & Jansen, 2003). Locating relevant information affects not only individuals but also commercial, educational, and governmental organizations. This is especially true with regard to people interacting with their governmental agencies. Executive Order 13011 (Clinton, 1996) directed the U.S. federal government to move aggressively on strategies to utilize the Internet. Birdsell and Muzzio (1999) describe the growing presence of governmental Web sites, classifying them into three general categories: (1) provision of information, (2) delivery of forms, and (3) transactions. In 2004, 29% of Americans said they had visited a government Web site to contact some governmental entity, 18% had sent an e-mail, and 22% had used multiple means (Horrigan, 2004). It seems clear that the Web is a major conduit for accessing governmental information and perhaps services. Search engines are the primary means for people to locate Web sites (Nielsen Media, 1997). Given the Web’s importance, we need to understand how Web search engines perform (Lawrence & Giles, 1998) and how people use and interact with Web search engines to locate governmental information. Examining Web searching for governmental information is an important area of research with the potential to increase our understanding of users of Web-based governmental information, advance our knowledge of Web searchers’ governmental information needs, and positively impact the design of Web search engines and sites that specialize in governmental information.


TEM Journal ◽  
2021 ◽  
pp. 1377-1384
Author(s):  
Dominika Krasňanská ◽  
Silvia Komara ◽  
Mária Vojtková

Keyword analysis is a way to gain insight into market behaviour. It is a detailed analysis of words and phrases that are relevant to the selected area. Keyword analysis should be the first step in any search engine optimization, as it reveals which keywords users enter into search engines when searching the Internet. The keyword categorization step takes up almost half of the total analysis time, as it is not automated, and there is currently no known tool in the online advertising market that facilitates it. The main goal of this paper is to streamline the keyword analysis process by applying selected statistical machine learning methods to keyword categorization on a specific example.
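As a purely illustrative sketch of how such a categorization step might be semi-automated (the paper does not necessarily use these exact techniques), the snippet below clusters a small, invented keyword list using TF-IDF features and k-means from scikit-learn; the keyword list and the number of clusters are assumptions made for the example.

```python
# Illustrative sketch: unsupervised keyword categorization via TF-IDF + k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

keywords = [
    "running shoes women", "running shoes men", "trail running shoes",
    "yoga mat thick", "yoga mat non slip", "cheap yoga mat",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(keywords)

# Two clusters chosen by hand for this toy example
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label, kw in sorted(zip(model.labels_, keywords)):
    print(label, kw)
```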


2019 ◽  
Vol 2019 (54) ◽  
pp. 191-229
Author(s):  
Daniel Mider

The text consists of two parts. The first part analyzes internet search techniques – both in a general heuristic sense and in detail, i.e. specific techniques belonging to the family of query languages, the so-called operators (logical operators, localization operators, operators for communication channels in social media, chronometric operators, operators for searching within the content of the web, and operators for specific types of content). Their main function is to make search queries more precise. The second part of the text contains a review and analysis of selected internet exploration tools – search engines (global search engines, search engines focused on user privacy, metasearch engines and multisearch engines, regional search engines and catalogues, people search engines, search engines for gray literature, and internet archives). The review is neither exhaustive nor in-depth; rather, it serves as an initial orientation for those interested in the search engine universe.
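For a concrete sense of the operator families mentioned above, the short sketch below prints a few example queries; the operator syntax shown follows widely used Google-style conventions and is an assumption for illustration, not taken from the article.

```python
# Illustrative examples of search-query operators (syntax varies between engines).
queries = [
    '"gray literature" AND (report OR thesis)',  # logical operators
    'election results site:gov.pl',              # localization operator (domain)
    'annual report filetype:pdf',                # specific content type
    'intitle:privacy intext:"search engine"',    # search within page elements
]
for q in queries:
    print(q)
```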


Author(s):  
Francesca Carmagnola ◽  
Francesco Osborne ◽  
Ilaria Torre

In this chapter, the authors analyze the privacy risks arising from the distribution of user data over several social networks. Specifically, they focus on risks concerning the possibility of aggregating user data discovered on different sources into a single, more complete profile, which makes it possible to infer other data that the user has likely set as private. To show how human users, as well as software agents crawling social networks, can identify users, link their profiles, and aggregate their data, the authors describe the prototype of a search engine they developed. The authors also present a simulation analysis showing the retrievability of user data using a combination of people search engines, and they provide statistics on users' perception of this issue.
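The following minimal sketch is not the authors' prototype; it only illustrates the underlying risk: two profiles from different networks are linked by a naive name-similarity check and then merged, so that attributes hidden on one network can be filled in from the other. The profile data, field names, and threshold are all invented.

```python
# Illustrative sketch: naive cross-network profile linkage and aggregation.
from difflib import SequenceMatcher

profiles = [
    {"source": "network_a", "name": "John A. Smith", "city": "Boston", "employer": None},
    {"source": "network_b", "name": "John Smith",    "city": None,     "employer": "Acme Corp"},
]

def name_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_and_aggregate(p, q, threshold=0.8):
    if name_similarity(p["name"], q["name"]) < threshold:
        return None
    # Merge: fields missing or private on one network may be revealed by the other
    return {k: p.get(k) or q.get(k) for k in set(p) | set(q)}

print(link_and_aggregate(profiles[0], profiles[1]))
```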


1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate systematic and random errors associated with trigonometric parallaxes. First, systematic errors which arise when using trigonometric parallaxes to calibrate luminosity systems are discussed. Next, the determination of the external errors of parallax measurement is reviewed, and observatory corrections are discussed. The paper emphasizes Schilt’s point that, because the causes of these systematic differences between observatories are not known, the computed corrections cannot be applied appropriately. However, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that an experimental design be established in advance, since past experience has shown that accidental overlap of observing programs will not suffice to determine meaningful observatory corrections.
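As a rough, invented illustration of the kind of calculation involved (not taken from the review), the sketch below estimates a systematic offset between two observatories from parallaxes of the same stars and derives an external error from the scatter of the differences; all numbers and the equal-error assumption are made up.

```python
# Illustrative sketch: observatory correction and external error from paired parallaxes.
import statistics

# Parallaxes (arcseconds) of the same stars measured at observatories A and B
obs_a = [0.045, 0.102, 0.031, 0.078, 0.012]
obs_b = [0.049, 0.110, 0.028, 0.083, 0.018]

diffs = [a - b for a, b in zip(obs_a, obs_b)]

offset = statistics.mean(diffs)    # candidate systematic correction A - B
scatter = statistics.stdev(diffs)  # reflects the combined random errors

# External error per observatory, assuming both contribute equally to the scatter
external_error = scatter / 2 ** 0.5

print(f'systematic offset: {offset:+.4f}"')
print(f'external error per observatory: {external_error:.4f}"')
```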


1973 ◽  
Vol 18 (11) ◽  
pp. 562-562
Author(s):  
B. J. WINER
Keyword(s):  

1996 ◽  
Vol 41 (12) ◽  
pp. 1224-1224
Author(s):  
Terri Gullickson
Keyword(s):  
