Focused Web Crawlers on Domain-Specific Retrieval Systems

2021 ◽  
Vol 1125 (1) ◽  
pp. 012045
Author(s):  
Ika Oktavia Suzanti ◽  
Fakhrur Razi ◽  
Husni ◽  
Eka Mala Sari Rochman ◽  
Nurhayati Fitriani
2017 ◽  
Author(s):  
Marilena Oita ◽  
Antoine Amarilli ◽  
Pierre Senellart

Deep Web databases, whose content is presented as dynamically-generated Web pages hidden behind forms, have mostly been left unindexed by search engine crawlers. In order to automatically explore this mass of information, many current techniques assume the existence of domain knowledge, which is costly to create and maintain. In this article, we present a new perspective on form understanding and deep Web data acquisition that does not require any domain-specific knowledge. Unlike previous approaches, we do not perform the various steps in the process (e.g., form understanding, record identification, attribute labeling) independently but integrate them to achieve a more complete understanding of deep Web sources. Through information extraction techniques and using the form itself for validation, we reconcile input and output schemas in a labeled graph which is further aligned with a generic ontology. The impact of this alignment is threefold: first, the resulting semantic infrastructure associated with the form can assist Web crawlers when probing the form for content indexing; second, attributes of response pages are labeled by matching known ontology instances, and relations between attributes are uncovered; and third, we enrich the generic ontology with facts from the deep Web.
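The reconciliation step described above (aligning a form's input schema and the attributes of its response pages against a generic ontology) can be illustrated with a toy alignment. Everything below, the `ONTOLOGY` table, the field labels, and the function names, is a hypothetical sketch for intuition, not the authors' implementation:

```python
# Minimal sketch: align raw form/field labels to a tiny generic ontology by
# synonym lookup, then reconcile input and output schemas via their shared
# concepts. The ontology and labels here are invented for illustration.

ONTOLOGY = {
    "title": {"title", "book title", "name"},
    "author": {"author", "writer", "by"},
    "year": {"year", "date", "published"},
}

def align(labels):
    """Map raw field labels to ontology concepts by synonym lookup."""
    mapping = {}
    for label in labels:
        key = label.strip().lower()
        for concept, synonyms in ONTOLOGY.items():
            if key in synonyms:
                mapping[label] = concept
                break
    return mapping

def reconcile(input_labels, output_labels):
    """Return the concepts shared by the form's input and output schemas."""
    inp = align(input_labels)
    out = align(output_labels)
    return set(inp.values()) & set(out.values())

shared = reconcile(["Book Title", "Author"], ["Title", "By", "Published"])
print(sorted(shared))  # → ['author', 'title']
```

Concepts that appear on both sides of the form (here `title` and `author`) are exactly the ones a crawler could safely use when probing the form for content indexing.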


Patents are critical intellectual assets for any competitive business. With ever-increasing patent filings, effective prior art search has become an essential task in patent retrieval, a subfield of information retrieval (IR). The goal of prior art search is to find and rank documents related to a query patent. Query formulation is a key step in prior art search, in which the patent's structure is exploited to generate queries from the various fields available in patent text. Since a patent encodes multiple technical domains, this work argues that technical domains and patent structure have a combined effect on the effectiveness of patent retrieval. The study uses International Patent Classification (IPC) codes to categorize query patents into eight technical domains and explores eighteen different combinations of patent fields to generate search queries. A total of 144 retrieval experiments were carried out using the BM25 ranking algorithm, with retrieval performance evaluated as recall over the top 1000 retrieved documents. The empirical results support this assumption, and a two-way analysis of variance was conducted to validate the hypotheses. The findings may help patent information retrieval professionals develop domain-specific patent retrieval systems that exploit patent structure.
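The BM25 ranking function at the core of these experiments can be sketched compactly. The documents, the query (imagined as built from two patent fields such as title and abstract), and the parameter defaults below are illustrative assumptions, not the paper's actual setup:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score every document in `docs` against `query` with BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    # Document frequency of each term across the corpus.
    df = Counter(t for d in tokenized for t in set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if tf[term] == 0:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

# Hypothetical query formed by concatenating title and abstract terms:
query = "wireless sensor network"
docs = ["wireless sensor network protocol",
        "chemical compound synthesis",
        "wireless network routing"]
scores = bm25_scores(query, docs)
ranking = sorted(range(len(docs)), key=lambda i: -scores[i])
print(ranking[0])  # → 0 (the document matching all query terms ranks first)
```

Varying which patent fields feed `query` while holding the ranker fixed is exactly the kind of field-combination experiment the study describes.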


Author(s):  
Pankaj Dadure ◽  
Partha Pakray ◽  
Sivaji Bandyopadhyay

Mathematical formulas are widely used to express ideas and fundamental principles of science, technology, engineering, and mathematics. Rapidly growing research in science and engineering generates a huge number of scientific documents that contain both textual and mathematical terms. In a scientific document, the sense of a mathematical formula is conveyed through its context and its symbolic structure, which follows strong domain-specific conventions. In contrast to textual retrieval, mathematical information retrieval systems have developed specialized indexing and matching approaches that benefit the retrieval of formulae and scientific terms. This chapter discusses recent advances in formula-based search engines, various formula representation styles and indexing techniques, and the benefits of formula-based search engines in future applications such as plagiarism detection and math recommendation systems.
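One common idea behind such formula indexing is to normalize away concrete identifiers so that structurally identical formulae share an index key. The tokenizer, the wildcard scheme, and the tiny index below are a hypothetical sketch of this idea, not any particular engine's method:

```python
import re

def formula_tokens(latex):
    """Tokenize a LaTeX formula into commands, identifiers, numbers, operators."""
    return re.findall(r"\\[a-zA-Z]+|[A-Za-z]|\d+|[^\sA-Za-z\d]", latex)

def generalize(tokens):
    """Replace single-letter identifiers with a wildcard so that structurally
    identical formulae (e.g. a^2+b^2 and x^2+y^2) map to the same index key."""
    return tuple("VAR" if re.fullmatch(r"[A-Za-z]", t) else t for t in tokens)

# Build a toy inverted index from generalized formula keys to document ids.
index = {}
for doc_id, formula in [("d1", "a^2+b^2=c^2"), ("d2", "x^2+y^2=z^2")]:
    index.setdefault(generalize(formula_tokens(formula)), []).append(doc_id)

# A structurally equivalent query finds both documents:
hits = index[generalize(formula_tokens("p^2+q^2=r^2"))]
print(hits)  # → ['d1', 'd2']
```

This variable-generalization step is also what makes formula indexes useful for plagiarism detection: renamed variables no longer hide a copied derivation.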


2018 ◽  
Vol 36 (3) ◽  
pp. 430-444
Author(s):  
Sholeh Arastoopoor

Purpose – The degree to which a text is considered readable depends on the capability of the reader. This assumption puts information retrieval systems at risk of retrieving relevant but unreadable, or hard-to-read, documents for their users. This paper examines the potential use of concept-based readability measures, alongside classic measures, for re-ranking search results in information retrieval systems, specifically in the Persian language.

Design/methodology/approach – The Flesch–Dayani score as a classic readability measure, along with document scope (DS) and document cohesion (DC) as domain-specific measures, was applied to score documents retrieved from Google (181 documents) and the RICeST database (215 documents) in the field of computer science and information technology (IT). The re-ranked results were compared with potential users' own rankings of readability.

Findings – The results show that subcategories of the computer science and IT field differ in their readability and understandability. The study also shows that a hybrid score can be built from the DS and DC measures; among the four scores applied, the re-ranked list based on the combined DSDC score correlates with the participants' re-ranking in both groups.

Practical implications – The findings of this study foster a new option for re-ranking search results based on their difficulty for experts and non-experts in different fields.

Originality/value – The findings and the two-mode re-ranking model proposed in this paper, with its primary focus on domain-specific readability in the Persian language, would help Web search engines and online databases further refine search results in pursuit of retrieving useful texts for users with differing expertise.
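The classic side of this pipeline can be sketched with the Flesch reading-ease formula and a readability-based re-rank. Note the coefficients below are the standard English ones (the Dayani variant adapts them for Persian), and the document names and counts are invented for illustration:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Classic Flesch reading-ease score; higher means easier to read.
    (English coefficients shown; Flesch-Dayani recalibrates them for Persian.)"""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def rerank_by_readability(docs, scores):
    """Re-rank retrieved documents so the most readable come first."""
    return sorted(docs, key=lambda d: scores[d], reverse=True)

# Two hypothetical retrieved documents with invented word/sentence/syllable counts:
scores = {"doc_a": flesch_reading_ease(120, 8, 180),   # shorter sentences, simpler words
          "doc_b": flesch_reading_ease(120, 5, 210)}   # longer sentences, harder words
print(rerank_by_readability(["doc_a", "doc_b"], scores))  # → ['doc_a', 'doc_b']
```

The paper's hybrid DSDC score would replace or combine with `scores` here; the re-ranking step itself stays the same.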


Author(s):  
Vladimir A. Kulyukin ◽  
John A. Nicholson

The advent of the World Wide Web has resulted in the creation of millions of documents containing unstructured, structured and semi-structured data. Consequently, research on structural text mining has come to the forefront of both information retrieval and natural language processing (Cardie, 1997; Freitag, 1998; Hammer, Garcia-Molina, Cho, Aranha, & Crespo, 1997; Hearst, 1992; Hsu & Chang, 1999; Jacquemin & Bush, 2000; Kushmerick, Weld, & Doorenbos, 1997). Knowledge of how information is organized and structured in texts can be of significant assistance to information systems that use documents as their knowledge bases (Appelt, 1999). In particular, such knowledge is of use to information retrieval systems (Salton & McGill, 1983) that retrieve documents in response to user queries and to systems that use texts to construct domain-specific ontologies or thesauri (Ruge, 1997).


Author(s):  
Evelyn R. Ackerman ◽  
Gary D. Burnett

Advancements in state-of-the-art high-density head/disk retrieval systems have increased the demand for sophisticated failure analysis methods. From 1968 to 1974 the emphasis was on the number of tracks per inch (TPI), ranging from 100 to 400, as summarized in Table 1. With increasing densities, this emphasis shifted to include the number of bits per inch (BPI). A bit is formed by magnetizing the Fe2O3 particles of the media in one direction, allowing magnetic heads to recognize specific data patterns. From 1977 to 1986 the track density increased from 470 to 1400 TPI, with a corresponding increase from 6300 to 10,800 BPI. Due to the reduction in bit and track sizes, the build and operating environments of systems have become critical factors in media reliability. Using the Ferrofluid pattern-developing technique, the scanning electron microscope can be a valuable diagnostic tool in the examination of failure sites on disks.
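The two density figures quoted for 1977 and 1986 combine multiplicatively: areal density is TPI times BPI, in bits per square inch. A quick check (assuming the figures pair up as stated) shows roughly a five-fold density increase over that decade:

```python
def areal_density_mbits(tpi, bpi):
    """Areal density in megabits per square inch from track and bit densities."""
    return tpi * bpi / 1e6

d_1977 = areal_density_mbits(470, 6300)    # ≈ 2.96 Mbit/in²
d_1986 = areal_density_mbits(1400, 10800)  # ≈ 15.12 Mbit/in²
print(round(d_1986 / d_1977, 1))  # → 5.1
```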


2008 ◽  
Vol 67 (2) ◽  
pp. 71-83 ◽  
Author(s):  
Yolanda A. Métrailler ◽  
Ester Reijnen ◽  
Cornelia Kneser ◽  
Klaus Opwis

This study compared individuals with pairs in a scientific problem-solving task. Participants interacted with a virtual psychological laboratory called Virtue to reason about a visual search theory. To this end, they created hypotheses, designed experiments, and analyzed and interpreted the results of their experiments in order to discover which of five possible factors affected the visual search process. Before and after their interaction with Virtue, participants took a test measuring theoretical and methodological knowledge. In addition, process data reflecting participants’ experimental activities and verbal data were collected. The results showed a significant but equal increase in knowledge for both groups. We found differences between individuals and pairs in the evaluation of hypotheses in the process data, and in descriptive and explanatory statements in the verbal data. Interacting with Virtue helped all students improve their domain-specific and domain-general psychological knowledge.

