An Automatic Approach to Build Geographical Knowledge Base from Geographical Data Sources for Earthquake Emergency Response

Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used for building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements have been added, while with SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base.
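To make the type-inference idea concrete, here is a minimal Python sketch in the spirit of SDType, not the paper's exact weighting scheme: each property votes for the types its subjects typically carry, and an untyped resource receives the types whose aggregated vote exceeds a threshold. All triples, types, and thresholds below are hypothetical toy data.

```python
from collections import Counter, defaultdict

# Toy triple store and (incomplete) type assertions -- all values hypothetical.
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Paris",  "capitalOf", "France"),
    ("Berlin", "population", "3645000"),
]
types = {"Berlin": {"City"}, "Germany": {"Country"}, "France": {"Country"}}

# Learn, per property, the distribution of types observed for its subjects.
subj_type_dist = defaultdict(Counter)
for s, p, _ in triples:
    for t in types.get(s, ()):
        subj_type_dist[p][t] += 1

def predict_types(resource, threshold=0.5):
    """Each property the resource uses casts a normalised vote for likely types."""
    votes, n_props = Counter(), 0
    for s, p, _ in triples:
        if s != resource or not subj_type_dist[p]:
            continue
        total = sum(subj_type_dist[p].values())
        n_props += 1
        for t, c in subj_type_dist[p].items():
            votes[t] += c / total
    return {t: v / n_props for t, v in votes.items()
            if n_props and v / n_props >= threshold}

print(predict_types("Paris"))  # -> {'City': 1.0}: 'Paris' gains the missing City type
```

The actual SDType algorithm additionally weights each property by how strongly its type distribution deviates from the overall type distribution; the sketch omits that weighting for brevity.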


2016 ◽  
Vol 31 (2) ◽  
pp. 97-123 ◽  
Author(s):  
Alfred Krzywicki ◽  
Wayne Wobcke ◽  
Michael Bain ◽  
John Calvo Martinez ◽  
Paul Compton

Data mining techniques for extracting knowledge from text have been applied extensively to applications including question answering, document summarisation, event extraction and trend monitoring. However, current methods have mainly been tested on small-scale customised data sets for specific purposes. The availability of large volumes of data and high-velocity data streams (such as social media feeds) motivates the need to automatically extract knowledge from such data sources and to generalise existing approaches to more practical applications. Recently, several architectures have been proposed for what we call knowledge mining: integrating data mining for knowledge extraction from unstructured text (possibly making use of a knowledge base), and at the same time, consistently incorporating this new information into the knowledge base. After describing a number of existing knowledge mining systems, we review the state-of-the-art literature on both current text mining methods (emphasising stream mining) and techniques for the construction and maintenance of knowledge bases. In particular, we focus on mining entities and relations from unstructured text data sources, entity disambiguation, entity linking and question answering. We conclude by highlighting general trends in knowledge mining research and identifying problems that require further research to enable more extensive use of knowledge bases.
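As a rough illustration of the knowledge-mining loop the review describes, the following Python sketch (entirely hypothetical names and a toy rule-based extractor, not any of the systems surveyed) extracts relations from a document stream, links the entity mentions to a knowledge base by alias lookup, and feeds the resulting facts back into that base.

```python
# Hypothetical knowledge-mining loop: extract facts from a text stream,
# link mentions against a knowledge base, and incorporate new facts into it.
knowledge_base = {
    "entities": {"Q1": {"label": "Berlin", "aliases": {"berlin"}}},
    "facts": set(),            # (subject_id, relation, object_id) triples
}

def link_entity(mention):
    """Naive entity linking by alias lookup (placeholder for real disambiguation)."""
    for eid, e in knowledge_base["entities"].items():
        if mention.lower() in e["aliases"]:
            return eid
    # Unknown mention: create a new entity so later documents can link to it.
    eid = f"Q{len(knowledge_base['entities']) + 1}"
    knowledge_base["entities"][eid] = {"label": mention, "aliases": {mention.lower()}}
    return eid

def mine(document_stream, extract_relations):
    """Consume a stream of documents and add extracted facts to the KB."""
    for doc in document_stream:
        for subj, rel, obj in extract_relations(doc):
            knowledge_base["facts"].add((link_entity(subj), rel, link_entity(obj)))

# Example with a trivial, rule-based relation extractor.
docs = ["Berlin is the capital of Germany"]
mine(docs, lambda d: [("Berlin", "capital_of", "Germany")] if "capital of" in d else [])
print(knowledge_base["facts"])   # {('Q1', 'capital_of', 'Q2')}
```

Real systems replace the alias lookup with statistical entity disambiguation and the lambda with learned extractors, but the feedback structure (KB informs extraction, extraction updates the KB) is the same.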


Author(s):  
Eleana Asimakopoulou ◽  
Chimay J. Anumba ◽  
Bouchlaghem

Much work is under way within the Grid technology community on issues associated with the development of services to foster collaboration via the integration and exploitation of multiple autonomous, distributed data sources through a seamless and flexible virtualized interface. However, several obstacles arise in the design and implementation of such services. A notable obstacle is identified: how clients within a data Grid environment can be kept automatically informed of the latest relevant changes to data entered or committed in single or multiple autonomous distributed datasets. The view is that keeping interested users informed of relevant changes occurring across their domain of interest will enlarge their decision-making space, which in turn will increase the opportunities for a more informed decision to be made. With this in mind, the chapter goes on to describe in detail the model architecture and its implementation for keeping interested users automatically informed about relevant up-to-date data.
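One way such a change-notification mechanism could look is sketched below in Python: a minimal publish/subscribe registry with hypothetical names, not the chapter's actual model architecture. Clients register interest in a dataset, and every commit to that dataset triggers a callback informing them.

```python
# Illustrative sketch (not the chapter's architecture): a minimal subscription
# registry that notifies interested clients when data is committed to any of
# several autonomous data sources.
from collections import defaultdict

class ChangeNotifier:
    def __init__(self):
        self.subscriptions = defaultdict(list)   # dataset/topic -> callbacks

    def subscribe(self, topic, callback):
        """A client registers interest in changes to a given dataset."""
        self.subscriptions[topic].append(callback)

    def commit(self, topic, change):
        """A data source commits a change; all interested clients are informed."""
        for notify in self.subscriptions[topic]:
            notify(topic, change)

notifier = ChangeNotifier()
notifier.subscribe("seismic-readings", lambda t, c: print(f"[{t}] updated: {c}"))
notifier.commit("seismic-readings", {"station": "DE-01", "magnitude": 4.2})
```

In a real data Grid the registry would sit behind the virtualized interface and the callbacks would be remote notifications, but the subscribe/commit contract is the essential part.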


2018 ◽  
Vol 4 (1-2) ◽  
pp. 39-60
Author(s):  
N. İlgi Gerçek

Hittite archives are remarkably rich in geographical data. A diverse array of documents has yielded, aside from thousands of geographical names (of towns, territories, mountains, and rivers), detailed descriptions of the Hittite state's frontiers and depictions of landscape and topography. Historical geography has, as a result, occupied a central place in Hittitological research since the beginnings of the field. The primary aim of scholarship in this area has been to locate (precisely) or localize (approximately) regions, towns, and other geographical features, matching Hittite geographical names with archaeological sites, unexcavated mounds, and, whenever possible, with geographical names from the classical period. At the same time, comparatively little work has been done on geographical thinking in Hittite Anatolia: how and for what purpose(s) was geographical information collected, organized, and presented? How did those who produced the texts imagine their world and their homeland, "the Land of Hatti"? How did they characterize other lands and peoples they came into contact with? Concentrating on these questions, the present paper aims to extract from Hittite written sources their writers' geographical conceptions and practices. It is argued that the acquisition and management of geographical information was an essential component of the Hittite Empire's administrative infrastructure and that geographical knowledge was central to the creation of a Hittite homeland.


2019 ◽  
Vol 238 ◽  
pp. 117965 ◽  
Author(s):  
Tao Wang ◽  
Shaya Guomai ◽  
Limao Zhang ◽  
Guijun Li ◽  
Yulong Li ◽  
...  

2021 ◽  
Author(s):  
Kai Schröter ◽  
Max Steinhausen ◽  
Fabio Brill ◽  
Stefan Lüdtke ◽  
Daniel Eggert ◽  
...  

Globally increasing flood losses due to anthropogenic climate change and growing exposure underline the need for effective emergency response and recovery. Knowing the inundation situation and resulting losses during or shortly after a flood is crucial for decision making in emergency response and recovery. With increasing amounts of data available from a growing number and diversity of sensors and data sources, data science methods offer great opportunities for combining data and extracting knowledge about flood processes in near real-time.

The main objective of this research is to develop a rapid and reliable flood depth mapping procedure by integrating information from multiple sensors and data sources. The created flood depth maps serve as input for the prediction of flood impacts. This contribution presents outcomes of a demonstration case using the flood of June 2013 in Dresden (Germany), where satellite remote sensing data, water level observations at the gauge Dresden and Volunteered Geographic Information based on social media images providing information about flooding are combined using statistical and machine learning-based data fusion algorithms. A detailed post-event inundation depth map based on terrestrial survey data and aerial images is available as a reference map and is used for evaluation. First results show that the individual datasets have different strengths and weaknesses. The combination of multiple data sources is able to counteract the weaknesses of single datasets and provide a significantly improved flood map and impact assessment. Our work is conducted within the Digital Earth Project.
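A schematic Python sketch of machine-learning-based data fusion in this spirit follows; the feature names, the synthetic data, and the random-forest regressor are assumptions for illustration, not the project's actual pipeline. Per-cell evidence from satellite imagery, the gauge, and social-media images is combined into a single depth estimate and evaluated against a reference map.

```python
# Schematic sketch of ML-based data fusion for flood depth mapping.
# All feature values and the target are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_cells = 1000   # grid cells of the study area

# Hypothetical per-cell features from the individual data sources
features = np.column_stack([
    rng.random(n_cells),        # satellite-derived flood extent (fractional)
    rng.random(n_cells),        # depth interpolated from the river gauge
    rng.random(n_cells),        # evidence from geolocated social-media images
])
# Synthetic reference depths standing in for the post-event survey map
reference_depth = features @ np.array([0.2, 0.7, 0.1]) + rng.normal(0, 0.05, n_cells)

# Train the fusion model on part of the area, evaluate on the rest
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[:800], reference_depth[:800])
fused_depth = model.predict(features[800:])

rmse = np.sqrt(np.mean((fused_depth - reference_depth[800:]) ** 2))
print(f"RMSE of fused depth estimate: {rmse:.3f} m")
```

The point of the sketch is the structure, not the model choice: each source contributes a feature layer, and the reference inundation map supplies the target against which any fusion algorithm can be trained and scored.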

