"DNS SEHAT" Implementation on Automatic Replicated Recursive Domain Name System at PT. X

2019 ◽  
Author(s):  
Muhammad Ilham Verardi Pradana

Thanks to search engines, information and data can be found easily on the internet, and Google remains the most popular of these, providing access to whatever information is available online. However, the results Google returns do not always match what the user actually wants: Google displays results based only on the typed keywords, so queries can surface negative content such as pornography and porn sites whenever a page title or other element appears related to the keyword. In this paper, we implement "DNS SEHAT" to pass clients' request queries along, so that the Google search engine on the client side returns more relevant search results without any negative content.
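The abstract does not spell out the resolver's mechanics, but a common way to achieve this at the DNS layer is Google's documented `forcesafesearch.google.com` redirect, combined with a domain blocklist. The sketch below is a toy illustration under that assumption; the blocklist entries and block-page name are purely hypothetical.

```python
# Toy sketch of a DNS-level safe-search rewrite. Assumption: a resolver
# like "DNS SEHAT" answers queries for Google's domains with the address
# of forcesafesearch.google.com (a real Google mechanism) and sends
# blocklisted domains to a block page. Names below are illustrative.

SAFE_SEARCH_CNAMES = {
    "www.google.com": "forcesafesearch.google.com",
    "google.com": "forcesafesearch.google.com",
}

BLOCKLIST = {"example-negative-site.test"}  # hypothetical blocked domain


def resolve(qname: str) -> str:
    """Return the name the recursive resolver should actually look up."""
    qname = qname.lower().rstrip(".")
    if qname in BLOCKLIST:
        return "blockpage.dns-sehat.test"  # hypothetical block page
    return SAFE_SEARCH_CNAMES.get(qname, qname)


print(resolve("www.google.com"))  # forcesafesearch.google.com
print(resolve("example.org"))     # example.org (passed through unchanged)
```

Clients pointed at such a resolver transparently reach Google's safe-search endpoint, so the filtering requires no browser-side configuration.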

Author(s):  
Novario Jaya Perdana

The accuracy of a search result produced by a search engine depends on the keywords used; keywords that carry too little information reduce that accuracy, which makes searching for information on the internet hard work. In this research, software has been built to create document keyword sequences. The software uses Google Latent Semantic Distance, which can extract relevant information from a document and express it as specific word sequences that can serve as keyword recommendations in search engines. The results show that this method of creating document keyword recommendations achieved high accuracy and could find the most relevant information in the top search results.
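The Google Latent Semantic Distance computation itself is not reproduced in the abstract; as a stand-in, the sketch below ranks a document's words with a TF-IDF-style score against a reference corpus, which is one conventional way to surface candidate keywords.

```python
# A stand-in keyword ranker (assumption: TF-IDF scoring substituted for
# the paper's Google Latent Semantic Distance, which is not shown here).
import math
from collections import Counter


def keyword_scores(doc, corpus):
    """Rank the words of `doc` by term frequency weighted by how rare
    each word is across the reference `corpus` (list of documents)."""
    tf = Counter(doc.lower().split())
    n = len(corpus)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in corpus if word in d.lower().split())
        idf = math.log((1 + n) / (1 + df)) + 1  # smoothed inverse doc freq
        scores[word] = count * idf
    return sorted(scores, key=scores.get, reverse=True)


corpus = ["the cat sat", "the dog ran", "the cat ran"]
print(keyword_scores("the cat sat sat", corpus))  # "sat" ranks first
```

Words frequent in the document but rare in the corpus float to the top, which is the property a keyword-recommendation list needs.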


2019 ◽  
Vol 71 (1) ◽  
pp. 54-71 ◽  
Author(s):  
Artur Strzelecki

Purpose The purpose of this paper is to clarify how many removal requests are made, how often, and who makes these requests, as well as which websites are reported to search engines so they can be removed from the search results. Design/methodology/approach Undertakes a deep analysis of more than 3.2bn pages removed from Google’s search results at the request of reporting organizations from 2011 to 2018 and over 460m pages removed from Bing’s search results at the request of reporting organizations from 2015 to 2017. The paper focuses on pages that belong to the .pl country-coded top-level domain (ccTLD). Findings Although the number of requests to remove data from search results has been growing year on year, fewer URLs have been reported in recent years. Some of the requests are, however, unjustified and are rejected by teams representing the search engines. In terms of reporting copyright violations, one company in particular stands out (AudioLock.Net), accounting for 28.1 percent of all reports sent to Google (the top ten companies combined were responsible for 61.3 percent of the total number of reports). Research limitations/implications As not every request can be published, the study is based only on what is publicly available. Also, the data assigned to Poland is based only on the ccTLD domain name (.pl); other domain extensions used by Polish internet users were not considered. Originality/value This is the first global analysis of data from transparency reports published by search engine companies, as prior research has been based on specific notices.
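The concentration figures in the abstract (28.1 percent for the top reporter, 61.3 percent for the top ten) come from aggregating removal requests by reporting organization. A minimal sketch of that aggregation, assuming the raw data is a flat list of (organization, URL) pairs:

```python
# Aggregate removal requests by reporting organization and compute the
# share of the single largest reporter and of the top-k combined.
from collections import Counter


def top_reporter_share(reports, k=10):
    """`reports` is a list of (organization, url) pairs; returns
    (share of top reporter, combined share of top-k reporters)."""
    counts = Counter(org for org, _url in reports)
    total = sum(counts.values())
    ranked = counts.most_common(k)
    top1 = ranked[0][1] / total
    topk = sum(c for _, c in ranked) / total
    return top1, topk


sample = [("A", "u1"), ("A", "u2"), ("A", "u3"), ("B", "u4"), ("C", "u5")]
print(top_reporter_share(sample))  # (0.6, 1.0)
```

Run over the full transparency-report dumps, the same computation yields the paper's 28.1 / 61.3 percent figures.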


Author(s):  
Chandran M ◽  
Ramani A. V

<p>This research work tests the quality of a website and improves it by analyzing hit counts, impressions, clicks, click-through rates (CTR) and average positions. This is accomplished using the WRPA and SEO techniques. The quality of a website lies mainly in the keywords present in it. The keywords come from the search queries typed by users into search engines, and based on these keywords the websites are displayed in the search results. This research work concentrates on bringing a particular website to the top of the search results; the website chosen for the research is SRKV. The work is carried out by creating an index array of meta tags, which holds all the site's meta tags. All the users' search keywords for the website are stored in another array. The index array is matched and compared against the search keywords array, and from this the hit count is calculated for analysis. The calculated hit count and the searched keywords are then analyzed to improve the performance of the website: the special keywords matched in the comparison above are added to the meta tags. Next, all the meta tags and the newly specified keywords in the index array are matched with the SEO keywords; if they match, the matched keywords are stored for improving the quality of the website. Metrics such as impressions, clicks, CTR and average positions are also measured along with the hit counts. The research covers different types of browsers and different platforms, and queries about the website from different countries are also measured. In conclusion, if the number of clicks for the website is higher than the average number of clicks, the quality of the website is good. This research helps in improving the keywords using WRPA and SEO, and thereby improves the quality of the website easily.</p>


Author(s):  
Pratik C. Jambhale

Search engine optimization (SEO) is a technique for bringing a web document into the top search results of a search engine. A web presence is not only an easy way for companies to reach their target users; it can also be profitable, because users usually search with keywords related to their needs rather than with the company name, so a company page that appears in the top positions can win business. This work describes the tweaks for taking a page to a top position in Google by increasing its PageRank, which can result in improved visibility and profitable deals for a business. Google is the most user-friendly search engine, giving user-oriented results, and since most other search engines follow Google's search patterns, we concentrate on it: a page registered with Google is displayed on most other search engines as well.
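PageRank, the score this abstract aims to increase, is well defined: a page's rank is the damped sum of the ranks of pages linking to it, each divided by that page's outlink count. A minimal power-iteration sketch (the link graph is a toy example):

```python
# Minimal PageRank by power iteration over a dict {page: [outlinks]}.
# The damping factor 0.85 is the conventional choice.


def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}  # teleport share
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank


graph = {"a": ["b"], "b": ["a"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "a" — it has the most inlinks
```

In SEO terms, earning links from well-ranked pages raises the `rank[p] / len(outs)` contributions flowing into your page, which is why link building moves a page up.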


2019 ◽  
Author(s):  
Jingchun Fan ◽  
Jean Craig ◽  
Na Zhao ◽  
Fujian Song

BACKGROUND Increasingly, people seek health information from the Internet, in particular information on diseases that require intensive self-management, such as diabetes. However, the Internet is largely unregulated and the quality of online health information may not be credible. OBJECTIVE To assess the quality of online information on diabetes identified from the Internet. METHODS We used the single term “diabetes”, or the equivalent Chinese characters, to search Google and Baidu respectively. The first 50 websites retrieved from each of the two search engines were screened for eligibility using pre-determined inclusion and exclusion criteria. Included websites were assessed on four domains: accessibility, content coverage, validity and readability. RESULTS We included 26 websites from the Google search engine and 34 from the Baidu search engine. There were significant differences in website provider (P<0.0001), but not in targeted population (P=0.832) or publication type (P=0.378), between the two search engines. Website accessibility did not differ significantly between the two search engines, although there were significant differences in items regarding website content coverage. There was no statistically significant difference in website validity between the Google and Baidu search engines (mean DISCERN score 3.3 vs 2.9, P=0.156). In appraising the readability of the English-language websites, Flesch Reading Ease scores ranged from 23.1 to 73.0 and Flesch-Kincaid Grade Level scores ranged from 5.7 to 19.6. CONCLUSIONS The content coverage of health information for patients with diabetes on the English-language search engine tended to be more comprehensive than that on the Chinese search engine. There was a lack of websites provided by health organisations in China. The quality of online health information for people with diabetes needs to be improved to bridge the knowledge gap between website services and public demand.
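The two readability measures used above are standard closed-form formulas over word, sentence, and syllable counts (syllable counting itself is the hard part and is omitted here):

```python
# The standard Flesch readability formulas, as applied to the English
# websites in the study. Inputs are pre-computed counts.


def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher means easier (roughly 0-100)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: approximate US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Example: 100 words, 5 sentences, 150 syllables
print(round(flesch_reading_ease(100, 5, 150), 3))   # 59.635
print(round(flesch_kincaid_grade(100, 5, 150), 2))  # 9.91
```

A Reading Ease near 23 (the study's low end) corresponds to very difficult, college-graduate-level text, while 73 is plain English, which is the spread the RESULTS section reports.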


2017 ◽  
Vol 4 (6) ◽  
Author(s):  
Surya Eka Priyatna

In lectures we are already familiar with internet network facilities, and certainly with the Google search engine. However, we rarely use the other facilities Google provides, which turn out to be very helpful in learning activities. Google intelligently integrates most administrative data-processing functions into its applications, such as Mail, Drive, Docs, Sheets, Slides and Forms. We can make the most of these products for the efficiency and effectiveness of lectures.


This paper aims to provide an intelligent way to query and rank the results of a meta search engine. A meta search engine takes input from the user and produces results gathered from other search engines. The main advantage of a meta search engine over a conventional search engine is its ability to extend the search space and make more resources available to the user. The semantic intelligent queries fetch results from different search engines, and the responses are fed into our ranking algorithm. Ranking of the search results is the other important aspect of meta search engines. When a user issues a query, a number of results are retrieved from different search engines, but only some of them are relevant to the user's interest. Hence, it is important to rank results according to their relevancy to the user's query. The proposed paper uses intelligent query and ranking algorithms in order to provide an intelligent meta search engine with semantic understanding.
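The paper's own ranking algorithm is not given in the abstract; a common baseline for merging ranked lists from several engines is reciprocal rank fusion (RRF), sketched here as a stand-in:

```python
# Reciprocal rank fusion: a standard way to merge the ranked result
# lists a meta search engine collects from its underlying engines.
# (Assumption: RRF substituted for the paper's unspecified algorithm.)


def reciprocal_rank_fusion(result_lists, k=60):
    """Each list in `result_lists` is one engine's ranked URLs; a URL's
    fused score sums 1/(k + rank) over every list it appears in."""
    scores = {}
    for results in result_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


engine_a = ["a", "b", "c"]
engine_b = ["a", "c", "d"]
print(reciprocal_rank_fusion([engine_a, engine_b]))  # "a" fuses to the top
```

The constant `k` dampens the influence of any single engine's top slot, so a URL ranked well by several engines beats one ranked first by only one.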


2018 ◽  
Vol 1 (1) ◽  
pp. 15-22
Author(s):  
Diki Arisandi ◽  
Sukri Sukri ◽  
Salamun Salamun ◽  
Roni Salambue

Access to information from the internet has become something needed by almost all of society, including the students of SMK N II Taluk Kuantan. In this community service, the material given was about how to make search results more effective by using power searching. The power searching delivered to the students of SMK N II Taluk Kuantan works by inserting mathematical symbols into the keywords entered into an internet search engine. In addition, the material also covered more specific searches that combine these symbols with host names and file types. The method in this community service was to present the material and a demo of how to apply power searching in search engines. After the activity was carried out, the community service team evaluated the material that had been given. The result was that the students of SMK N II Taluk Kuantan could apply power searching well.
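The operator combinations taught above (exact phrases, exclusions, `site:`, `filetype:`) can be composed mechanically. A small sketch of such a query builder; the example keywords are illustrative:

```python
# Build a "power search" query string from keywords plus the standard
# search operators: quotes for phrases, "-" to exclude a term, and the
# site:/filetype: restrictions covered in the training material.


def build_query(keywords, site=None, filetype=None, exclude=()):
    parts = [f'"{kw}"' if " " in kw else kw for kw in keywords]
    parts += [f"-{term}" for term in exclude]
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    return " ".join(parts)


print(build_query(["jaringan komputer"], site="ac.id", filetype="pdf"))
# "jaringan komputer" site:ac.id filetype:pdf
```

Pasting the resulting string into a search engine narrows the results to PDF files on Indonesian academic hosts, the kind of targeted search the workshop demonstrated.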


2019 ◽  
Vol 2019 ◽  
Author(s):  
Jan Rensinghoff ◽  
Florian Marius Farke ◽  
Markus Dürmuth ◽  
Tobias Gostomzyk

The new European right to be forgotten (Art. 17 of the European General Data Protection Regulation (GDPR)) grants EU citizens the right to demand the erasure of their personal data from anyone who processes it. Enforcing this right to erasure may be a problem for many of those data processors. On the one hand, they need to examine every claim to remove search results. On the other hand, they have to balance conflicting rights in order to prevent over-blocking and accusations of censorship. The paper examines the criteria potentially involved in the decision-making process of search engines when it comes to the right to erasure. We present an approach that helps search engine operators and individuals assess and decide whether search results may have to be deleted. Our goal is to make this process more transparent and consistent, providing more legal certainty for both the search engine operator and the person concerned by the search result in question. As a result, we develop a model to estimate the chances of success of deleting a particular search result for a given person. This is a work in progress.
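A model of this kind can be imagined as a weighted checklist over legal criteria. The sketch below is purely illustrative: the criteria names and weights are invented for the example and are not the model the paper develops.

```python
# Toy weighted-criteria estimate for a delisting request's chance of
# success. ALL criteria names and weights here are hypothetical
# assumptions for illustration, not the paper's actual model.


def erasure_success_estimate(criteria):
    """`criteria` maps criterion name -> bool; returns a score in [0, 1]."""
    weights = {
        "data_inaccurate": 0.30,          # contested data is wrong
        "subject_is_private_person": 0.25,  # not a public figure
        "information_outdated": 0.25,     # no longer current
        "no_public_interest": 0.20,       # no countervailing interest
    }
    return sum(w for name, w in weights.items() if criteria.get(name))


print(erasure_success_estimate({"data_inaccurate": True}))  # 0.3
```

The real balancing test is a legal judgment, not arithmetic; the point of such a model is only to make the factors and their relative weight explicit and auditable.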


Author(s):  
Rung Ching Chen ◽  
Ming Yung Tsai ◽  
Chung Hsun Hsieh

In recent years, due to the fast growth of the Internet, the services and information it provides have constantly expanded. Madria and Bhowmick (1999) and Baeza-Yates (2003) indicated that most large search engines need to handle, on average, at least millions of hits daily in order to satisfy users' needs for information. Each search engine has its own sorting policy and keyword format for the query term, but there are some critical problems: a search may return too much or too little information. In the former case, the user gets buried in information; needing only a little of it, they usually select a few of the first items from the large amount returned. In the latter case, the user has to re-query with other search keywords, and the re-query operation in turn retrieves information in a great amount, much of it useless. That is a bad cycle of information retrieval. Similarity Web page retrieval can help users avoid browsing useless information: the user indicates a Web page, which is then compared with the other Web pages in the search engine's results. Similarity Web page retrieval saves users time by not browsing unrelated Web pages; it rejects non-similar Web pages, ranks Web pages in order of similarity, and clusters similar Web pages into the same classification.
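The comparison step this describes (one indicated page against each result page) is typically a vector-space similarity. A minimal bag-of-words cosine similarity sketch:

```python
# Cosine similarity between two pages treated as bags of words — one
# simple way to score the "indicated page vs. result page" comparison.
import math
from collections import Counter


def cosine_similarity(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


print(cosine_similarity("search engine ranking", "ranking search engine"))
# word order is ignored, so these score as identical (1.0)
```

Result pages scoring below a threshold against the indicated page are rejected, and the rest are ranked or clustered by this score, exactly the three uses the abstract lists.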

