A Semantic Query Method for Deep Web

2013 ◽  
Vol 347-350 ◽  
pp. 2559-2563
Author(s):  
Hao Jiang ◽  
Wen Ju Liu ◽  
Li Li Lu

Based on the idea of "functionality-centric" design, this paper proposes a complete set of semantics-oriented query methods for the Deep Web and builds the corresponding software architecture. By describing the establishment of the semantic environment, the rewriting of SPARQL queries into SQL, the semantic packaging of query results, and the architecture of the semantic query services, it provides a new way to make full use of Deep Web data resources in a Semantic Web environment.
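The SPARQL-to-SQL rewriting step mentioned above can be illustrated with a minimal sketch. This is not the paper's actual rewriter: it assumes RDF data stored in a single `triples(subject, predicate, object)` table and handles only a single triple pattern with one variable.

```python
import re

def sparql_to_sql(sparql: str, table: str = "triples") -> str:
    """Rewrite a single-triple-pattern SPARQL query into SQL over a
    (subject, predicate, object) triples table.  Handles only the shape
    SELECT ?v WHERE { s p o } where exactly one term is the variable."""
    m = re.search(
        r"SELECT\s+(\?\w+)\s+WHERE\s*\{\s*(\S+)\s+(\S+)\s+(\S+)\s*\.?\s*\}",
        sparql, re.IGNORECASE)
    if not m:
        raise ValueError("unsupported query shape")
    var, s, p, o = m.groups()
    terms = {"subject": s, "predicate": p, "object": o}
    select_col, conds = None, []
    for col, term in terms.items():
        if term == var:
            # the variable position becomes the projected column
            select_col = col
        elif not term.startswith("?"):
            # constant terms become equality conditions
            conds.append(f"{col} = '{term.strip('<>')}'")
    return f"SELECT {select_col} FROM {table} WHERE {' AND '.join(conds)}"

# e.g. sparql_to_sql("SELECT ?name WHERE { <ex:alice> <foaf:name> ?name }")
# yields a SELECT over the object column filtered by subject and predicate.
```

A production rewriter must additionally handle joins between triple patterns, FILTER expressions, and OPTIONAL blocks, which is where most of the complexity of SPARQL-to-SQL translation lies.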

2010 ◽  
Vol 20-23 ◽  
pp. 553-558 ◽  
Author(s):  
Ke Rui Chen ◽  
Wan Li Zuo ◽  
Fan Zhang ◽  
Feng Lin He

With the rapid increase of web data, the deep web has become the fastest-growing carrier of web data. Research on the deep web, especially on extracting data records from result pages, has therefore become an urgent task. We present a data record extraction method based on a global schema, which automatically extracts query result records from web pages. The method first analyzes the query interface and result record instances to build a global schema using an ontology. The global schema is then used to extract data records from result pages and store them in a table. Experimental results indicate that the method extracts data records accurately and saves them in a table conforming to the global schema.
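The role of the global schema can be sketched as follows. The schema and field labels here are invented for illustration; the paper builds its schema from the query interface and result instances via an ontology, whereas this sketch simply maps the differing local labels of result-page records onto unified schema fields.

```python
# Hypothetical global schema: unified field -> local labels seen on result pages.
GLOBAL_SCHEMA = {
    "title":  {"title", "book name", "name"},
    "author": {"author", "writer", "by"},
    "price":  {"price", "cost"},
}

def extract_record(raw: dict) -> dict:
    """Map a raw label->value record scraped from a result page onto the
    global schema, so records from different sources share one table layout."""
    record = {}
    for field, labels in GLOBAL_SCHEMA.items():
        for label, value in raw.items():
            if label.lower() in labels:
                record[field] = value
                break
    return record

# Records with source-specific labels all land in the same schema:
# extract_record({"Book Name": "Dune", "Writer": "Herbert", "Cost": "$9"})
```

Because every extracted record conforms to the same global schema, storing the results in a single table becomes a direct insert rather than a per-source mapping problem.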


2009 ◽  
Vol 20 (11) ◽  
pp. 2950-2964 ◽  
Author(s):  
Xiao-Yong DU ◽  
Yan WANG ◽  
Bin LÜ

Author(s):  
Dilip Kumar Sharma ◽  
A. K. Sharma

A traditional crawler picks up a URL, retrieves the corresponding page, extracts various links, and adds them to the queue. A deep Web crawler, after adding links to the queue, additionally checks for forms; if forms are present, it processes them and retrieves the required information. Various techniques have been proposed for crawling deep Web information, but much of it remains undiscovered. In this paper, the authors analyze and compare important deep Web information crawling techniques to find their relative limitations and advantages. To minimize the limitations of existing deep Web crawlers, a novel architecture is proposed based on QIIIEP specifications (Sharma & Sharma, 2009). The proposed architecture is cost-effective and offers both privatized search and general search for deep Web data hidden behind HTML forms.
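The two-step behavior described above (queue links, then check for forms) can be sketched with the standard-library HTML parser. This is a generic illustration, not the QIIIEP-based architecture: it only collects hyperlinks for the queue and records each form's action and input fields for later processing.

```python
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Collect hyperlinks (for the crawl queue) and detect query forms,
    mirroring the extra form-checking step of a deep Web crawler."""
    def __init__(self):
        super().__init__()
        self.links = []   # URLs to enqueue
        self.forms = []   # detected forms with their input field names
        self._form = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "form":
            self._form = {"action": attrs.get("action", ""), "fields": []}
        elif tag == "input" and self._form is not None:
            self._form["fields"].append(attrs.get("name", ""))

    def handle_endtag(self, tag):
        if tag == "form" and self._form is not None:
            self.forms.append(self._form)
            self._form = None
```

A real deep Web crawler would then decide how to fill each detected form (the hard part, and the focus of the surveyed techniques) and submit it to reach the hidden result pages.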


2012 ◽  
Vol 532-533 ◽  
pp. 1263-1267
Author(s):  
Hui Jia ◽  
Guo Hua Geng ◽  
Jin Xia Yang

This paper presents a new method to construct a semantic web of a three-dimensional model database based on ontology. Firstly, we build the ontology of the three-dimensional model database, extracting classes, objects, and attributes according to the models. Secondly, we use WordNet, an English lexical ontology, to expand each original ontology node with semantic-extension nodes, including synonyms, hypernyms, hyponyms, and holonyms. Experimental results show that this method not only effectively expands the semantic vocabulary of the 3D model database but also keeps the expanded vocabulary semantically relevant to the original terms, enabling effective semantics-based 3D model retrieval.
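The node-expansion step can be sketched as follows. The tiny lexicon below is a hand-made stand-in for WordNet (the paper queries WordNet itself); the terms and relations are assumed values for illustration only.

```python
# Toy stand-in for WordNet: each term lists related terms by relation type.
LEXICON = {
    "car": {
        "synonym":  ["automobile", "auto"],
        "hypernym": ["vehicle"],          # more general concepts
        "hyponym":  ["sedan", "coupe"],   # more specific concepts
        "holonym":  [],                   # wholes that contain the term
    },
    "wheel": {
        "synonym":  [],
        "hypernym": ["part"],
        "hyponym":  [],
        "holonym":  ["car"],
    },
}

def expand_node(term, relations=("synonym", "hypernym", "hyponym", "holonym")):
    """Expand one ontology node into its semantic-extension nodes,
    following the four relation types used in the paper."""
    entry = LEXICON.get(term, {})
    expanded = {term}
    for rel in relations:
        expanded.update(entry.get(rel, []))
    return expanded
```

Indexing each 3D model under its expanded term set is what lets a query like "automobile" retrieve a model originally labeled only "car".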


Author(s):  
Naima El Ghandour ◽  
Moussa Benaissa ◽  
Yahia Lebbah

The Semantic Web uses ontologies to cope with the data heterogeneity problem. However, ontologies are themselves heterogeneous; this heterogeneity may occur at the syntactic, terminological, conceptual, and semantic levels. To solve this problem, alignments between the entities of the ontologies must be identified, a process called ontology matching. In this paper, the authors propose a new method to extract alignments with multiple cardinalities using integer linear programming techniques. The authors conducted a series of experiments and compared the results with those of currently used methods; the results show the efficiency of the proposed method.
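The optimization problem behind such alignment extraction can be sketched on a toy instance. The similarity values and cardinality bounds below are assumptions; a real system would hand the same 0/1 model (maximize total similarity subject to per-entity cardinality constraints) to an ILP solver, whereas this sketch enumerates exhaustively.

```python
from itertools import product

# Assumed similarity scores between entities of two toy ontologies.
SIM = {("Person", "Human"): 0.9, ("Person", "Agent"): 0.6,
       ("Paper", "Article"): 0.8, ("Paper", "Agent"): 0.1}

def best_alignment(sim, max_per_source=2, max_per_target=1, threshold=0.5):
    """Solve by brute force the 0/1 programme:
       maximise sum(sim[p] * x[p])  subject to cardinality bounds
    on each source and target entity.  max_per_source > 1 permits the
    one-to-many ("multiple cardinality") correspondences."""
    pairs = [p for p in sim if sim[p] >= threshold]
    best, best_score = [], 0.0
    for choice in product([0, 1], repeat=len(pairs)):
        chosen = [p for p, c in zip(pairs, choice) if c]
        src, tgt, ok = {}, {}, True
        for s, t in chosen:
            src[s] = src.get(s, 0) + 1
            tgt[t] = tgt.get(t, 0) + 1
            if src[s] > max_per_source or tgt[t] > max_per_target:
                ok = False
                break
        if ok:
            score = sum(sim[p] for p in chosen)
            if score > best_score:
                best, best_score = chosen, score
    return best, best_score
```

On this instance the optimum aligns Person to both Human and Agent, illustrating why allowing cardinalities beyond one-to-one can capture correspondences a strict matching would discard.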


Author(s):  
Matthew Perry ◽  
Amit P. Sheth ◽  
Farshad Hakimpour ◽  
Prateek Jain
Author(s):  
Jiaoyan Chen ◽  
Freddy Lecue ◽  
Jeff Z. Pan ◽  
Huajun Chen

Data stream learning has been widely studied for extracting knowledge structures from continuous and rapid data records. In the Semantic Web, data is interpreted in ontologies and its ordered sequence is represented as an ontology stream. Our work exploits the semantics of such streams to tackle the problem of concept drift, i.e., unexpected changes in the data distribution that cause most models to become less accurate as time passes. To this end we revisit (i) semantic inference in the context of supervised stream learning, and (ii) models with semantic embeddings. The experiments show accurate prediction on data from Dublin and Beijing.
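The effect of concept drift that motivates this line of work can be illustrated with a generic accuracy-window monitor. This is not the paper's semantics-based approach; it is a minimal sketch of the symptom the paper addresses: a model whose recent accuracy falls well below its earlier reference accuracy is suffering drift.

```python
from collections import deque

class DriftMonitor:
    """Flag concept drift when accuracy over the most recent window of
    predictions falls well below the first full window's accuracy."""
    def __init__(self, window=50, tolerance=0.15):
        self.window, self.tolerance = window, tolerance
        self.recent = deque(maxlen=window)   # 1.0 = correct, 0.0 = wrong
        self.reference = None

    def update(self, correct: bool) -> bool:
        """Record one prediction outcome; return True if drift is flagged."""
        self.recent.append(1.0 if correct else 0.0)
        if len(self.recent) < self.window:
            return False                      # not enough history yet
        acc = sum(self.recent) / self.window
        if self.reference is None:
            self.reference = acc              # fix the reference accuracy
            return False
        return acc < self.reference - self.tolerance
```

Where this monitor can only react after accuracy has already degraded, the semantic approach above aims to anticipate such changes by reasoning over the ontology stream itself.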

