Digital hajj: the pilgrimage to Mecca in Muslim cyberspace and the issue of religious online authority

2013 ◽  
Vol 25 ◽  
pp. 189-203 ◽  
Author(s):  
Dominik Schlosser

This paper attempts to give an overview of the different representations of the pilgrimage to Mecca found in the ‘liminal space’ of the internet. For that purpose, it examines a handful of emblematic examples of how the hajj is presented and discussed in cyberspace. Special attention is paid to the question of how far issues of religious authority are manifest on these websites: whether the content providers of web pages appoint themselves as authorities by scrutinizing established views of the fifth pillar of Islam, whether they upload already printed texts onto their sites in order to reiterate normative notions of the pilgrimage to Mecca, or whether they make use of search engine optimisation techniques, thus heightening the visibility of their online presence and increasing the possibility of becoming authoritative in shaping internet surfers’ perceptions of the hajj.

2016 ◽  
Vol 6 (2) ◽  
pp. 41-65 ◽  
Author(s):  
Sheetal A. Takale ◽  
Prakash J. Kulkarni ◽  
Sahil K. Shah

Information available on the internet is huge, diverse and dynamic. Current search engines provide intelligent help to internet users: for a query, they return a listing of the best-matching or most relevant web pages. However, the information answering a query is often spread across multiple pages in the returned list, which degrades the quality of search results; the search engines are drowning in information but starving for knowledge. Here, we present query-focused extractive summarization of search engine results. We propose a two-level summarization process: identification of relevant theme clusters, and selection of top-ranking sentences to form a summarized result for the user query. A new approach to semantic similarity computation using semantic roles and semantic meaning is proposed. Document clustering is achieved by applying the MDL (minimum description length) principle, and sentence clustering and ranking are done using SNMF. Experiments demonstrate the effectiveness of the system in semantic text understanding, document clustering and summarization.
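The abstract does not include implementation details, but the two-level idea (cluster candidate sentences into themes, then rank them against the query) can be illustrated with a much simpler stand-in. The sketch below uses TF-IDF cosine similarity and KMeans instead of the paper's semantic-role similarity, MDL clustering and SNMF; the function name and the top_k parameter are illustrative assumptions.

```python
# Simplified sketch of query-focused extractive summarization.
# Stand-in for the paper's pipeline: TF-IDF + KMeans replaces the
# MDL/SNMF clustering and semantic-role similarity described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity


def summarize_results(query, sentences, n_clusters=3, top_k=5):
    """Cluster candidate sentences into themes, then pick the
    sentences most similar to the query."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(sentences)          # sentence vectors
    q = vectorizer.transform([query])                # query vector

    # Level 1: group sentences into theme clusters.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)

    # Level 2: rank every sentence by similarity to the query,
    # keeping its theme label so a summary can mix clusters.
    sims = cosine_similarity(q, X).ravel()
    ranked = sorted(zip(sims, labels, sentences), reverse=True)
    return [(sent, int(lab)) for _, lab, sent in ranked[:top_k]]


if __name__ == "__main__":
    snippets = [
        "Search engines index billions of web pages.",
        "Extractive summarization selects existing sentences.",
        "Query-focused summaries answer a specific information need.",
        "Clustering groups sentences that discuss the same theme.",
    ]
    for sent, cluster in summarize_results("query focused summarization",
                                           snippets, n_clusters=2, top_k=2):
        print(cluster, sent)
```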


A web crawler, also called a spider, automatically traverses the WWW for the purpose of web indexing. As the Web grows day by day, the number of web pages worldwide has grown massively. To make search usable, search engines are indispensable: they are the means by which particular data is discovered on the WWW. Without search engines it would be almost impossible for a person to find anything on the web unless he or she already knew a specific URL address. Every search engine maintains a central repository of HTML documents in indexed form. Each time a user submits a query, the search is performed against this database of indexed web pages. The size of each search engine's database depends on the pages existing on the internet. To increase the efficiency of search engines, only the most relevant and significant pages should be stored in the database.
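As a rough illustration of the crawl-then-index loop described above (not drawn from this abstract), the sketch below fetches pages breadth-first and records their text in an in-memory index; the seed URL, page limit and helper names are assumptions, and the requests and beautifulsoup4 packages are assumed to be installed.

```python
# Minimal breadth-first crawler sketch: fetch pages, extract links and
# text, and store an in-memory index of the visited pages.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def crawl(seed_url, max_pages=20):
    """Return a {url: page_text} index built from a small crawl."""
    index, seen, queue = {}, {seed_url}, deque([seed_url])
    while queue and len(index) < max_pages:
        url = queue.popleft()
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue                      # skip unreachable pages
        soup = BeautifulSoup(resp.text, "html.parser")
        index[url] = soup.get_text(" ", strip=True)
        for link in soup.find_all("a", href=True):
            target = urljoin(url, link["href"])
            if target.startswith("http") and target not in seen:
                seen.add(target)
                queue.append(target)
    return index


if __name__ == "__main__":
    pages = crawl("https://example.com", max_pages=5)
    print(len(pages), "pages indexed")
```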


Author(s):  
Jos van Iwaarden ◽  
Ton van der Wiele ◽  
Roger Williams ◽  
Steve Eldridge

The Internet has come of age as a global source of information about every topic imaginable. A company like Google has become a household name in Western countries, and making use of its internet search engine is so popular that “Googling” has even become a verb in many Western languages. Whether it is for business or private purposes, people worldwide rely on Google to present them with relevant information. Even the scientific community is increasingly employing Google’s search engine to find academic articles and other sources of information about the topics they are studying. Yet the vast amount of information that is available on the internet is gradually changing in nature. Initially, information would be uploaded by the administrators of a web site and would then be visible to all visitors of the site. This approach meant that web sites tended to be limited in the amount of content they provided, and that such content was strictly controlled by the administrators. Over time, web sites have granted their users the authority to add information to web pages, and sometimes even to alter existing information. Current examples of such web sites are eBay (auction), Wikipedia (encyclopedia), YouTube (video sharing), LinkedIn (social networking), Blogger (weblogs) and Delicious (social bookmarking).


Author(s):  
Vijay Kasi ◽  
Radhika Jain

In the context of the Internet, a search engine can be defined as a software program designed to help one access information, documents, and other content on the World Wide Web. The adoption and growth of the Internet in the last decade has been unprecedented. The World Wide Web has always been applauded for its simplicity and ease of use, which is evident from how little knowledge one requires to build a Web page. The flexible nature of the Internet has enabled its rapid growth and adoption, but it has also made it hard to search for relevant information on the Web. The number of Web pages has been increasing at an astronomical pace, from around 2 million registered domains in 1995 to 233 million registered domains in 2004 (Consortium, 2004). The Internet, considered a distributed database of information, has the CRUD (create, retrieve, update, and delete) rule applied to it. While the Internet has been effective at creating, updating, and deleting content, it has considerably lacked in enabling the retrieval of relevant information. After all, there is no point in having a Web page that has little or no visibility on the Web. Since the 1990s, when the first search program was released, we have come a long way in terms of searching for information. Although we are currently witnessing tremendous growth in search engine technology, the growth of the Internet has overtaken it, leading to a state in which existing search engine technology falls short. When we apply the metrics of relevance, rigor, efficiency, and effectiveness to the search domain, it becomes very clear that we have progressed on the rigor and efficiency metrics by utilizing abundant computing power to produce faster searches over a lot of information. Rigor and efficiency are evident in the large number of pages indexed by the leading search engines (Barroso, Dean, & Holzle, 2003). However, more research needs to be done to address the relevance and effectiveness metrics. Users typically type in two to three keywords when searching, only to end up with a search result containing thousands of Web pages! This has made it increasingly hard to effectively find useful, relevant information. Search engines face a number of challenges today requiring them to perform rigorous searches with relevant results efficiently so that they are effective. These challenges include the following (“Search Engines,” 2004).
1. The Web is growing at a much faster rate than any present search engine technology can index.
2. Web pages are updated frequently, forcing search engines to revisit them periodically.
3. Dynamically generated Web sites may be slow or difficult to index, or may result in excessive results from a single Web site.
4. Many dynamically generated Web sites are not able to be indexed by search engines.
5. The commercial interests of a search engine can interfere with the order of relevant results the search engine shows.
6. Content that is behind a firewall or that is password protected is not accessible to search engines (such as the content found in several digital libraries).
7. Some Web sites have started using tricks such as spamdexing and cloaking to manipulate search engines into displaying them as the top results for a set of keywords. This can pollute the search results, with more relevant links being pushed down in the result list. It is a consequence of the popularity of Web searches and the business potential search engines can generate today.
8. Search engines index all the content of the Web without any bounds on the sensitivity of information. This has raised a few security and privacy flags.
With the above background and challenges in mind, we lay out the article as follows. In the next section, we begin with a discussion of search engine evolution. To facilitate the examination and discussion of the progress of search engine development, we break this discussion down into the three generations of search engines. Figure 1 depicts this evolution pictorially and highlights the need for better search engine technologies. Next, we present a brief discussion of the contemporary state of search engine technology and the various types of content searches available today. With this background, the following section documents various concerns about existing search engines, setting the stage for better search engine technology. These concerns include information overload, relevance, representation, and categorization. Finally, we briefly address the research efforts under way to alleviate these concerns and then present our conclusion.
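The retrieval side of the CRUD gap described above rests on the inverted index every engine maintains. As a rough, generic illustration (not drawn from the article), the sketch below builds a tiny inverted index and answers a two-keyword query by intersecting posting sets; the documents and identifiers are invented.

```python
# Tiny inverted index: map each term to the set of documents containing
# it, then answer a keyword query by intersecting the posting sets.
from collections import defaultdict

docs = {
    "d1": "search engines crawl and index web pages",
    "d2": "dynamically generated pages are hard to index",
    "d3": "cloaking manipulates search engine rankings",
}

# Build the index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)


def search(query):
    """Return ids of documents containing every query term."""
    postings = [index[t] for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()


print(search("index pages"))   # documents matching both keywords
```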


2018 ◽  
Vol 8 (3) ◽  
pp. 52-70
Author(s):  
Edwin Mwosa Kivuti

This article describes how search engine optimization (SEO) is becoming an increasingly useful technique in online marketing as more people look to the internet to search for information. Search engine optimization enables web developers to develop web pages which achieve high SERP (search engine results page) rankings. A key technique in SEO is improving keyword prominence. In this research, keywords within web pages are extracted, and the correlation between the frequency of these words and their keyword prominence is evaluated. The findings of this research will provide a guideline to SEO practitioners, giving them a better understanding of the ratio of keywords they need to add to web pages in relation to the rest of the content.
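The abstract does not give its extraction procedure, but the kind of measurement it relies on can be sketched simply: parse a page, count keyword occurrences, and report keyword density relative to the total word count. The helper name and the use of BeautifulSoup below are assumptions, not the author's tooling.

```python
# Sketch: extract visible text from an HTML page and report how often
# each keyword appears, plus its density (share of all words).
import re
from collections import Counter

from bs4 import BeautifulSoup


def keyword_density(html, keywords):
    """Return {keyword: (count, density)} for the page's visible text."""
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    words = re.findall(r"[a-z0-9']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return {kw: (counts[kw], counts[kw] / total) for kw in keywords}


if __name__ == "__main__":
    page = ("<html><body><h1>Cheap flights</h1>"
            "<p>Book cheap flights online today.</p></body></html>")
    print(keyword_density(page, ["cheap", "flights", "hotels"]))
```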


2014 ◽  
Vol 2 (2) ◽  
pp. 103-112 ◽  
Author(s):  
Taposh Kumar Neogy ◽  
Harish Paruchuri

The essence of a web page is an inherently subjective matter, one shaped by behaviors, interests, and intelligence. There are many reasons web pages are critical to the new world, and the matter cannot be overemphasized. The meteoric growth of the internet is one of the most potent factors making it hard for search engines to provide actionable results. Search engines store web pages in classified directories. To classify these pages, some engines rely on the expertise of real people; most are classified using automated means, but the human factor remains dominant in their success. From experimental results, we deduce that the most effective way to automate the classification of web pages for search engines is through the integration of machine learning.
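The abstract does not specify a model, but the idea of automatically assigning pages to directory categories can be illustrated with a simple supervised text classifier. The categories, training snippets and the Naive Bayes choice below are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch: classify page text into directory categories
# with a bag-of-words Naive Bayes model (stand-in for the paper's
# unspecified machine-learning approach).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_pages = [
    "latest football scores and match reports",
    "stock markets close higher as tech shares rally",
    "new smartphone review battery life and camera",
    "league standings after the weekend fixtures",
]
train_labels = ["sports", "finance", "technology", "sports"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_pages, train_labels)

# Predict the directory category of an unseen page.
print(classifier.predict(["quarterly earnings beat analyst estimates"]))
```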


Author(s):  
Rung Ching Chen ◽  
Ming Yung Tsai ◽  
Chung Hsun Hsieh

In recent years, due to the fast growth of the Internet, the services and information it provides have constantly expanded. Madria and Bhowmick (1999) and Baeza-Yates (2003) indicated that most large search engines need to handle, on average, at least millions of hits daily in order to satisfy users' needs for information. Each search engine has its own sorting policy and keyword format for the query term, but there are some critical problems: a search may return either too much or too little information. In the former case, the user is buried in information; needing only a little of it, they typically select just the first few items from the large amount returned. In the latter case, the user re-queries with another keyword, and the re-query again retrieves a great amount of information, much of it useless. That is a bad cycle of information retrieval. Similarity Web page retrieval can help users avoid browsing useless information: a reference Web page is indicated and then compared with the other Web pages in the search engine's results. This allows users to save time by not browsing unrelated pages, to reject non-similar pages, to rank pages in order of similarity, and to cluster similar pages into the same classification.
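As a generic illustration of the idea (the authors' similarity measure is not given in this abstract), the sketch below compares a reference page against other result pages using Jaccard similarity over their word sets, then ranks and filters them; the page texts and the threshold are invented for the example.

```python
# Sketch of similarity Web page retrieval: compare a reference page's
# word set with each result page using Jaccard similarity, then rank
# and filter the results.  Threshold and page texts are illustrative.
def jaccard(text_a, text_b):
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0


def rank_similar_pages(reference, candidates, threshold=0.2):
    """Return (similarity, page) pairs above the threshold, best first."""
    scored = [(jaccard(reference, page), page) for page in candidates]
    return sorted((s, p) for s, p in scored if s >= threshold)[::-1]


reference_page = "hotel booking prices and reviews for paris"
results = [
    "compare hotel prices and read reviews for paris hotels",
    "paris metro map and transport tickets",
    "cheap car insurance quotes online",
]
for score, page in rank_similar_pages(reference_page, results):
    print(f"{score:.2f}  {page}")
```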


2012 ◽  
Vol 532-533 ◽  
pp. 1752-1756 ◽  
Author(s):  
Jun Ya Yan ◽  
Xiao Hui Ma ◽  
Wen Juan Zhao

The development of the internet and the exponential growth of network information have produced a large number of duplicated pages on the network, reducing retrieval recall and precision and affecting retrieval efficiency. The accuracy of the web page collection therefore influences the quality of a search engine. On the basis of structural text description, this paper proposes an improved duplicate-elimination algorithm based on MD5 fingerprints of near-replicas. Experiments show that the method is effective in improving both recall and precision.
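The paper's exact fingerprinting scheme is not described in this abstract; as a rough illustration of the general idea, the sketch below normalizes a page's text, hashes fixed-size word blocks with MD5, and treats two pages that share most block fingerprints as near-replicas. The block size and the overlap threshold are assumptions.

```python
# Sketch of MD5-based near-replica detection: hash fixed-size blocks of
# normalized text and compare the resulting fingerprint sets.
import hashlib
import re


def fingerprints(text, block_size=5):
    """Return MD5 digests of consecutive word blocks of the text."""
    words = re.findall(r"\w+", text.lower())
    blocks = [" ".join(words[i:i + block_size])
              for i in range(0, len(words), block_size)]
    return {hashlib.md5(b.encode("utf-8")).hexdigest() for b in blocks}


def near_replicas(page_a, page_b, threshold=0.8):
    """Treat pages as near-replicas if most block fingerprints match."""
    fa, fb = fingerprints(page_a), fingerprints(page_b)
    if not fa or not fb:
        return False
    overlap = len(fa & fb) / min(len(fa), len(fb))
    return overlap >= threshold


a = "breaking news the market rose sharply on tuesday amid strong earnings reports"
b = "breaking news the market rose sharply on tuesday amid strong earnings results"
print(near_replicas(a, b, threshold=0.6))
```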


Author(s):  
Jose Triny K. et al.

Web pages have increasingly been used as the user interface of many software systems. The simplicity of interaction with web pages is an ideal benefit of using them. However, the user interface can also become more complicated when more complex web pages are used to build it. Understanding the complexity of web pages as perceived subjectively by users is therefore crucial to better design of this kind of user interface. Searching is one of the most common tasks performed on the Internet. Search engines are the essential tool of the web, through which one can collect related information retrieved according to the keyword given by the user. The information on the internet is growing dramatically, and users have to spend more time online to find the exact information they are interested in. Existing search engines do not take into account the specific needs of individuals and serve every user alike. For an ambiguous query, a number of documents on different topics are returned, so it becomes difficult for the user to get the required content, and it also takes more time to find pertinent content. In this paper, we survey the various algorithms for reducing complexity in web page navigation.


2019 ◽  
Vol 1 (2) ◽  
Author(s):  
Yu Hou ◽  
Lixin Tao

As the tsunami of data has emerged, search engines have become the most powerful tool for obtaining scattered information on the internet. Traditional search engines return organized results by using ranking algorithms such as term frequency and link analysis (the PageRank and HITS algorithms). However, these algorithms rely on keyword frequency to determine the relevance between a user's query and the data in the computer system or on the internet. Moreover, we would like search engines to understand a user's search by the meaning of its content rather than by literal strings. The Semantic Web is an intelligent network that can understand human language more semantically and make communication between humans and computers easier. However, current semantic search technology is hard to apply, because metadata must be annotated on each web page before the search engine can understand the user's intent, and annotating every web page is very time-consuming and inefficient. This study therefore designs an ontology-based approach to improve traditional keyword-based search and emulate the effects of semantic search, letting the search engine understand users more semantically once it is given this knowledge.
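The abstract does not spell out how the ontology is consulted; as one plausible, simplified reading, the sketch below expands a keyword query with related concepts drawn from a tiny hand-written ontology before running an ordinary keyword match. The ontology contents, documents and function names are illustrative assumptions, not the paper's design.

```python
# Sketch of ontology-assisted keyword search: expand the query with
# narrower/related concepts from a tiny hand-written ontology, then run
# a plain keyword match over the documents.
ONTOLOGY = {                      # concept -> related/narrower concepts
    "vehicle": {"car", "truck", "bicycle"},
    "car": {"sedan", "suv"},
}

docs = [
    "new suv models announced at the auto show",
    "city council expands bicycle lanes downtown",
    "quarterly earnings report for the retail sector",
]


def expand(term, ontology, depth=2):
    """Collect the term plus concepts reachable within `depth` hops."""
    terms = {term}
    for _ in range(depth):
        terms |= {n for t in terms for n in ontology.get(t, set())}
    return terms


def semantic_search(query, documents):
    keywords = {t for word in query.lower().split()
                for t in expand(word, ONTOLOGY)}
    return [d for d in documents if keywords & set(d.lower().split())]


# A query for "vehicle" now also matches pages about SUVs and bicycles.
print(semantic_search("vehicle", docs))
```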

