The Dark Web
Latest Publications


TOTAL DOCUMENTS: 16 (FIVE YEARS: 0)
H-INDEX: 1 (FIVE YEARS: 0)
Published By: IGI Global
ISBN: 9781522531630, 9781522531647

The Dark Web ◽ 2018 ◽ pp. 359-374
Author(s): Dilip Kumar Sharma, A. K. Sharma

ICT plays a vital role in human development through information extraction, and it encompasses both computer networks and telecommunication networks. Computer networks, one of the key modules of ICT, form the backbone of the World Wide Web (WWW). Search engines are computer programs that browse and extract information from the WWW in a systematic, automatic manner. This paper examines the three main components of a search engine: the Extractor, a web crawler that starts from a seed URL; the Analyzer, an indexer that processes the words on each web page and stores the resulting index in a database; and the Interface Generator, a query handler that interprets the needs and preferences of the user. The paper considers both the information available on the surface web through general web pages and the hidden information behind query interfaces, called the deep web. It emphasizes extracting relevant information so that the preferred content appears as the first result of the user's search query, and it analyzes several existing deep web search engines.
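
As an illustration of the three components described above, here is a minimal Python sketch (hypothetical, not the authors' implementation): an Extractor that fetches a page, an Analyzer that builds an inverted index, and an Interface Generator that answers keyword queries. The seed URL is a placeholder.

import re
import urllib.request
from collections import defaultdict

def extract(url):
    """Extractor: fetch the raw HTML for a URL taken from the crawl frontier."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def analyze(url, html, index):
    """Analyzer: strip tags, tokenize the words, and record them in an inverted index."""
    text = re.sub(r"<[^>]+>", " ", html)
    for word in re.findall(r"[a-z]+", text.lower()):
        index[word].add(url)

def interface(query, index):
    """Interface Generator: return the pages containing every query term."""
    results = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*results) if results else set()

index = defaultdict(set)
seed = "https://example.com/"  # placeholder seed URL
analyze(seed, extract(seed), index)
print(interface("example domain", index))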


The Dark Web ◽ 2018 ◽ pp. 334-358
Author(s): Dilip Kumar Sharma, A. K. Sharma

A traditional crawler picks up a URL, retrieves the corresponding page, extracts the links it contains, and adds them to the queue. A deep Web crawler, after adding links to the queue, also checks the page for forms; if forms are present, it processes them to retrieve the required information. Various techniques have been proposed for crawling deep Web information, but much remains undiscovered. In this paper, the authors analyze and compare important deep Web crawling techniques to identify their relative advantages and limitations. To minimize the limitations of existing deep Web crawlers, a novel architecture is proposed based on QIIIEP specifications (Sharma & Sharma, 2009). The proposed architecture is cost effective and supports both privatized and general search over deep Web data hidden behind HTML forms.
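
A minimal sketch of the crawl step described above (an illustration only, not the QIIIEP-based architecture itself): while scanning a fetched page, queue its links as a traditional crawler would, and additionally record any forms for later processing.

from html.parser import HTMLParser

class DeepWebScanner(HTMLParser):
    """Collects ordinary links for the queue and notes forms for deep Web processing."""
    def __init__(self):
        super().__init__()
        self.links, self.forms = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])            # traditional step: queue the link
        elif tag == "form":
            self.forms.append(attrs.get("action", ""))  # deep Web step: form found, process it

page = '<a href="/next">next</a><form action="/search"><input name="q"></form>'
scanner = DeepWebScanner()
scanner.feed(page)
print("queue:", scanner.links)             # ['/next']
print("forms to process:", scanner.forms)  # ['/search']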


The Dark Web ◽ 2018 ◽ pp. 175-198
Author(s): Tomasz Kaczmarek, Dawid Grzegorz Węckowski

Acquiring data from the deep Web is a complex process that requires an understanding of website navigation issues, data extraction, and integration techniques. Existing solutions for automating it are not ready to cover the whole deep Web and require skill and knowledge to apply in practice. However, several systems approach the problem by involving end users, who can bring data from the deep Web to the surface while creating solutions for their own information needs. In this chapter, the authors study these systems from the end-user perspective, investigating their interfaces, the languages they expose to end users, and the platforms that accompany the systems to involve end users and let them share the results of their work.
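
The declarative extraction rules such systems expose to end users might look like the following simplified Python sketch; the rule format, field names, and page markup are illustrative assumptions, not taken from any surveyed system.

import re

# An end-user "wrapper": field names mapped to patterns over the page markup.
rules = {
    "title": r"<h1>(.*?)</h1>",
    "price": r'<span class="price">(.*?)</span>',
}

page = '<h1>Blue Widget</h1><span class="price">$9.99</span>'

# Apply each rule to surface the page's data as a structured record.
record = {field: m.group(1)
          for field, pattern in rules.items()
          if (m := re.search(pattern, page))}
print(record)  # {'title': 'Blue Widget', 'price': '$9.99'}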


The Dark Web ◽ 2018 ◽ pp. 114-137
Author(s): Dilip Kumar Sharma, A. K. Sharma

Web crawlers specialize in downloading web content and in analyzing and indexing the surface web, which consists of interlinked HTML pages. They reach their limits, however, when the data sits behind a query interface: the response depends on the querying party's context, requiring the crawler to engage in a dialogue and negotiate for the information. In this paper, the authors discuss deep web searching techniques. A survey of the technical literature on deep web searching contributes to the development of a general framework. The frameworks and mechanisms of existing web crawlers are taxonomically classified into four steps and analyzed to find their limitations in searching the deep web.
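
A sketch of the dialogue step described above: once a crawler finds a query interface, it must fill in and submit the form, since the data sits behind it rather than behind a plain hyperlink. The endpoint and field name below are placeholders.

import urllib.parse
import urllib.request

def query_hidden_web(form_action, fields):
    """Submit a filled form via POST and return the response page for indexing."""
    data = urllib.parse.urlencode(fields).encode("ascii")
    request = urllib.request.Request(form_action, data=data)  # data makes this a POST
    with urllib.request.urlopen(request) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Usage (placeholder endpoint and field name):
# page = query_hidden_web("https://example.com/search", {"q": "deep web"})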


The Dark Web ◽ 2018 ◽ pp. 290-317
Author(s): Maria-Carolina Cambre

In a new global topography of cultural movements, repressed layers of populations are coming to historical consciousness and demanding autonomy and sovereignty; many are finding ways to engage through online communities. In the wake of rapid global and social change, groups that are increasingly organized and operated independently of the control and planning of states are taking shape. Elaborating these so-called “processes” as manifested by those behind Guy Fawkes's mask is a key concern of this study. The author builds theoretical insights on the shifting semiotic vocabulary of the Guy Fawkes mask as used by the niche online community Anonymous as a disruptive insertion of online visual communication.


The Dark Web ◽ 2018 ◽ pp. 51-63
Author(s): Jean-Loup Richet

The main purpose of this chapter is to map the current literature on cybercrime through the lens of diffusion-of-innovation theories and the economic theory of competition. A narrative review of the literature was carried out; the facilitators leading to cybercrime were explored, and the diffusion of cybercriminals' best practices was explained. Cybercrime is compatible with young adults' lifestyles (familiarity) and requires little knowledge, while the barriers to entry related to costs (psychological, financial), risks, and investments are low. This review provides a snapshot and reference base for academics and practitioners with an interest in cybercrime, contributing to the cumulative culture desired in the field and offering insights into both the barriers to entry into cybercrime and its facilitators.


The Dark Web ◽ 2018 ◽ pp. 319-333
Author(s): Sudhakar Ranjan, Komal Kumar Bhatia

Nowadays, with the advent of internet technologies and e-commerce, the need for a smart search engine is rising. Traditional search engines are neither intelligent nor smart, which drives up the cost of searching. In this paper, the architecture of a vertical search engine based on a domain-specific hidden web crawler is proposed. To build a least-cost vertical search engine, improvements to searching, indexing, ranking, transactions, and the query interface are suggested. The domain term analyzer filters out useless information to the greatest extent possible and provides users with high-precision information. Experimental results show that the system accelerates access, computation, storage, and communication times while increasing efficiency.
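
A hypothetical sketch of the domain term analyzer idea described above: score each page against a domain vocabulary and index only the pages that clear a threshold. The vocabulary and threshold are illustrative assumptions, not the authors' values.

import re

DOMAIN_TERMS = {"flight", "airline", "fare", "booking", "airport"}  # assumed domain vocabulary

def domain_score(text):
    """Fraction of the page's words that belong to the domain vocabulary."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in DOMAIN_TERMS for w in words) / len(words) if words else 0.0

def keep_for_index(text, threshold=0.05):
    """Filter out useless (off-domain) pages before they reach the vertical index."""
    return domain_score(text) >= threshold

print(keep_for_index("Compare airline fares and book a flight"))  # True
print(keep_for_index("Celebrity gossip and holiday recipes"))     # False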


The Dark Web ◽ 2018 ◽ pp. 199-226
Author(s): B. Umamageswari, R. Kalpana

Web mining is performed on huge amounts of data extracted from the WWW, and researchers have developed several state-of-the-art approaches to web data extraction. So far, the literature has focused mainly on techniques for extracting data regions. However, applications fed with the extracted data often require data spread across multiple web pages, which should be crawled automatically; for this, the extractor must identify not only data regions but also the navigation links that connect them. Moreover, many data extraction techniques are designed around specific HTML tags, which calls into question their universal applicability to differently formatted web pages. This chapter surveys the web data extraction techniques available for different kinds of data-rich pages, classifies them, and compares them across several useful dimensions.
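
As a simplified illustration (not a specific technique from the chapter), an extractor for a data-rich page must recover both the repeated data regions and the navigation link leading to the next result page; the record and link markup below are assumptions.

from html.parser import HTMLParser

class RegionAndNavExtractor(HTMLParser):
    """Pulls repeated data records plus the 'next page' link in a single pass."""
    def __init__(self):
        super().__init__()
        self.in_record = False
        self.records, self.next_link = [], None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "li" and attrs.get("class") == "item":  # assumed data region markup
            self.in_record = True
        elif tag == "a" and attrs.get("rel") == "next":   # assumed navigation markup
            self.next_link = attrs.get("href")

    def handle_data(self, data):
        if self.in_record and data.strip():
            self.records.append(data.strip())
            self.in_record = False

page = ('<li class="item">Widget A</li><li class="item">Widget B</li>'
        '<a rel="next" href="/page/2">more</a>')
extractor = RegionAndNavExtractor()
extractor.feed(page)
print(extractor.records, extractor.next_link)  # ['Widget A', 'Widget B'] /page/2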


The Dark Web ◽ 2018 ◽ pp. 138-174
Author(s): Adelaide Maria de Souza Antunes, Flavia Maria Lins Mendes, Suzanne de Oliveira Rodrigues Schumacher, Luc Quoniam, Jorge Lima de Magalhães

In response to the challenges of the 21st century, emerging countries have played an increasingly leading role in the global economy, and public health has been a notable feature of government agendas in these countries. According to the IMF, Brazil is one of the countries with the greatest potential to stand out in this context. The quantity of research and development into technologies for drugs and medications is important for supporting innovation in the health sector, and information science can help considerably in the analysis of patents: indicating trends, revealing opportunities for investors, and assisting decision making by private-sector managers and government agents. This study extracts valuable information contained in the hidden Web through technology foresight on products deemed strategic by the Brazilian Ministry of Health, which are the target of public policies and state investment in domestic production.


The Dark Web ◽ 2018 ◽ pp. 84-113
Author(s): Manuel Álvarez Díaz, Víctor Manuel Prieto Álvarez, Fidel Cacheda Seijo
This paper presents an analysis of the most important features of the Web, their evolution, and their implications for the tools that traverse it to index its content for later search. It is important to note that some of these features cause a quite large subset of the Web to remain “hidden”. The analysis focuses on snapshots of the global Web for six different years, 2009 to 2014. The results for each year are analyzed both independently and together, to facilitate examining the features at any given time as well as the changes between the analyzed years. The objective of the analysis is twofold: to characterize the Web and, more importantly, its evolution over time.
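
A small sketch of the year-over-year aggregation such an analysis involves; the per-page values below are fabricated placeholders for illustration only, not the paper's measurements.

from statistics import mean

# snapshots[year] = per-page measurements of one feature (e.g., links per page)
snapshots = {
    2009: [12, 30, 18],
    2014: [45, 60, 38],
}

# Each year analyzed independently...
for year, values in sorted(snapshots.items()):
    print(f"{year}: mean={mean(values):.1f} over {len(values)} pages")

# ...and together, to expose the change between snapshots.
first, last = min(snapshots), max(snapshots)
growth = mean(snapshots[last]) / mean(snapshots[first])
print(f"{first} -> {last}: x{growth:.2f}")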

