Comparing DBpedia, Wikidata, and YAGO for Web Information Retrieval
Author(s): Sini Govinda Pillai, Lay-Ki Soon, Su-Cheng Haw

2013 ◽ Vol 76 (1) ◽ pp. 29-32
Author(s): Vikas Thada, Vivek Jaglan

The Dark Web ◽ 2018 ◽ pp. 114-137
Author(s): Dilip Kumar Sharma, A. K. Sharma

Web crawlers specialize in downloading, analyzing, and indexing content from the surface web, which consists of interlinked HTML pages. They are limited, however, when data sits behind a query interface: the response depends on the querying party's context, so obtaining the information requires engaging in dialogue and negotiation with the source. In this paper, the authors discuss deep web searching techniques. A survey of the technical literature on deep web searching contributes to the development of a general framework. The frameworks and mechanisms of existing web crawlers are taxonomically classified into four steps and analyzed to identify their limitations in searching the deep web.
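The limitation described above can be illustrated with a minimal sketch. The crawler below (a hypothetical in-memory example, not the authors' framework) follows only `href` hyperlinks, as a surface-web crawler does; a page reachable only through a query form is never discovered, which is exactly the deep-web gap the paper addresses.

```python
import re

# Hypothetical in-memory site: page name -> HTML source. "result.html" is
# deep-web content: it is only returned by the search form, never linked.
SITE = {
    "index.html": '<a href="about.html">About</a>'
                  '<form action="search">query interface</form>',
    "about.html": '<a href="index.html">Home</a>',
    "result.html": "<p>Hidden behind the query interface</p>",
}

def crawl(start):
    """Breadth-first surface-web crawl: follow hyperlinks only."""
    seen, frontier = set(), [start]
    while frontier:
        page = frontier.pop(0)
        if page in seen or page not in SITE:
            continue
        seen.add(page)
        # Extract href targets; form actions are ignored, as in a
        # surface crawler that cannot fill in and submit query forms.
        frontier.extend(re.findall(r'href="([^"]+)"', SITE[page]))
    return seen

indexed = crawl("index.html")
# "result.html" is absent: the crawler never negotiates with the form.
```

Bridging this gap is what deep-web crawlers attempt, e.g. by detecting query interfaces and generating candidate queries for them.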

