Dark Web

Author(s):  
Punam Bedi ◽  
Neha Gupta ◽  
Vinita Jindal

The World Wide Web is the part of the Internet that provides a data-dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed in response to users' search queries. The contents that can be easily retrieved using Web browsers and search engines comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines. Though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.
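The Surface/Deep Web split described above hinges on what crawlers are permitted or able to index. A minimal sketch of one such mechanism, using Python's standard `urllib.robotparser` (the site and paths are hypothetical): pages disallowed in `robots.txt`, like pages behind forms or logins, stay out of search indexes and thus fall into the Deep Web.

```python
from urllib.robotparser import RobotFileParser

# A site's robots.txt tells crawlers which paths they may fetch and index.
# Disallowed paths never enter the search index, so their content belongs
# to the Deep Web even though a browser could still reach it directly.
robots_txt = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

# Crawlable page: part of the Surface Web.
print(rp.can_fetch("*", "https://example.org/articles/dark-web"))  # True

# Blocked page: invisible to search engines, i.e. Deep Web.
print(rp.can_fetch("*", "https://example.org/private/records"))    # False
```

This illustrates only one of several reasons content stays unindexed; dynamically generated pages, paywalled content, and Dark Web services unreachable without special software are excluded by other mechanisms.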

2018 ◽  
pp. 742-748
Author(s):  
Viveka Vardhan Jumpala

The Internet, an information superhighway, has practically compressed the world into a cyber colony through its many interconnected networks. The development of the Internet and the emergence of the World Wide Web (WWW) provided a common vehicle for communication and instantaneous access to search engines and databases. A search engine is designed to facilitate the search for information on the WWW. Search engines are essentially tools that help users find the required information on the Web quickly and in an organized manner. Different search engines do the same job in different ways, thus giving different results for the same query. Search strategies are the new trend on the Web.


Author(s):  
Prabhjyot Kaur ◽  
Puneet Kumar Kaushal

In an increasingly intrusive era, the deep web is considered by some to be a bastion of privacy, while for others it is one of the most evil places in existence. The deep web is the part of the world wide web that cannot be readily accessed by conventional search engines, and a small part of it forms the dark web. Its enshrouded nature and the complex methodology required for access have made the dark web a platform for carrying out numerous illicit activities, one of them being drug trafficking. This article explores how the deep web and dark web operate, the trade of illegal drugs online in so-called cryptomarkets or darknet markets, the foundation and shutdown of Silk Road, the new cryptomarkets that emerged to take its place, the use of The Onion Router (Tor) browser in the anonymous sale and purchase of illegal drugs, the role of encryption and cryptocurrencies, existing and suggested law-enforcement tactics to prevent internet-facilitated drug trafficking, and future research areas.


Author(s):  
Olfa Nasraoui

The Web information age has brought a dramatic increase in the sheer amount of information (Web content), in the access to this information (Web usage), and in the intricate complexities governing the relationships within this information (Web structure). Hence, not surprisingly, information overload when searching and browsing the World Wide Web (WWW) has become the plague du jour. One of the most promising and potent remedies against this plague comes in the form of personalization. Personalization aims to customize the interactions on a Web site, depending on the user’s explicit and/or implicit interests and desires.


Author(s):  
G. Sreedhar

In the present-day scenario, the World Wide Web (WWW) is an important and popular information search tool. It provides convenient access to almost all kinds of information, from education to entertainment. The main objective of the chapter is to retrieve information from websites and then use that information for website quality analysis. In this chapter, information about a website is retrieved through the web mining process. Web mining is the integration of three knowledge domains: web content mining, web structure mining, and web usage mining. Web content mining is the process of extracting knowledge from the content of web documents. Web structure mining is the process of inferring knowledge from the organization of the World Wide Web and the links between references and referents on the Web. The web content elements are used to derive the functionality and usability of the website, the web component elements are used to assess its performance, and the structural elements are used to determine its complexity and usability. Quality assurance techniques for web applications generally focus on preventing web failures or reducing the chances of such failures, where web failures are defined as the inability to obtain or deliver information, such as documents or computational results, requested by web users. A high-quality website is one that provides relevant, useful content and a good user experience. Thus, in this chapter, all areas of a website are thoroughly studied for analysing the quality of website design.



Author(s):  
Leo Tan Wee Hin

The World Wide Web represents one of the most profound developments that has accompanied the evolution of the Internet. It is truly a global library. Information on the Web is increasing exponentially, and mechanisms to extract information from it have become an engaging field of research. While search engines have been doing an admirable job in finding information, the emergence of Web portals has also been a useful development—their distinct advantage lies in their positioning as a one-stop destination for information and services of a particular nature.


2020 ◽  
Vol 25 (2) ◽  
pp. 1-16
Author(s):  
Rasha Hany Salman ◽  
Mahmood Zaki ◽  
Nadia A. Shiltag

The web today has become an archive of information in many forms, including text, audio, video, graphics, and multimedia. Over time, the World Wide Web has become crowded with data, making the extraction of relevant information a burdensome process; web mining applies various data mining techniques to extract useful information from page content and web hyperlinks. The fundamental uses of web content mining are to collect, organize, and classify data, providing the best information available on the web to the user who needs it. Web content mining (WCM) tools are needed to examine HTML documents, text, and images, and their output is then used by search engines. This paper presents an overview of web mining categorization and web content mining techniques, together with a critical review and study of web content mining tools from 2011 to 2019, tabulating a comparison of these tools against several important criteria.
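The first step the abstract describes, extracting useful information from page content, can be sketched with Python's standard `html.parser`: strip the markup and keep only the visible text, the raw material that WCM tools then classify and organize. The HTML document below is illustrative, not from the paper.

```python
from html.parser import HTMLParser

# Minimal web content mining sketch: collect the visible text of an HTML
# document while skipping non-content elements such as <script> and <style>.
class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_skip = False   # currently inside <script> or <style>?
        self.chunks = []       # extracted text fragments

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.in_skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.in_skip = False

    def handle_data(self, data):
        if not self.in_skip and data.strip():
            self.chunks.append(data.strip())

html_doc = ("<html><head><style>p{color:red}</style></head>"
            "<body><h1>Web Mining</h1><p>Content, structure, usage.</p></body></html>")

parser = TextExtractor()
parser.feed(html_doc)
print(parser.chunks)  # ['Web Mining', 'Content, structure, usage.']
```

Real WCM tools layer much more on top of this (language detection, classification, deduplication), but the markup-stripping step shown here is where they all begin.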


Author(s):  
Ramanujam Elangovan

The deep web (also called the deepnet, the invisible web, or the hidden web) refers to world wide web content that is not part of the surface web, which is indexed by standard search engines. The more familiar “surface” web contains only a small fraction of the information available on the internet. The deep web contains much of the valuable data on the web but is largely invisible to standard web crawling techniques. Besides being a huge source of information, it also provides a platform for cybercrime, such as offering download links for movies, music, and games without copyright authorization. This article aims to provide context and policy recommendations pertaining to the dark web. The dark web's history, from its creation to the latest incidents, along with the means of accessing it and its sub-forums, is briefly discussed from the user's perspective.


Author(s):  
Reinaldo Padilha França ◽  
Ana Carolina Borges Monteiro ◽  
Rangel Arthur ◽  
Yuzo Iano

Web 2.0 is the evolution of the web. Seen as a new, second movement of access to information through the world wide web, Web 2.0 brings interactivity and collaboration as the main keys to its functioning. It is now simpler and faster for any user connected to the internet to send information at any time. The ease of uploading information, images, and videos on Web 2.0 is due to the expansion of resources and code, allowing anyone to act naturally and bring their own content to the internet. As the data and information shared daily are almost infinite, search engines act ever more intuitively and return only results tailored to each user. Therefore, this chapter aims to provide an updated review and overview of Web 2.0, addressing its evolution and fundamental concepts, showing its relationships, and discussing its success, with a concise bibliographic background, categorizing and synthesizing the potential of the technology.


10.28945/2972 ◽  
2006 ◽  
Author(s):  
Martin Eayrs

The World Wide Web provides a wealth of information - indeed, perhaps more than can comfortably be processed. But how does all that Web content get there? And how can users assess the accuracy and authenticity of what they find? This paper will look at some of the problems of using the Internet as a resource and suggest criteria both for researching and for systematic and critical evaluation of what users find there.

