Weaving Words for Textile Museums: The Development of the Linked SILKNOW Thesaurus

Author(s):  
Ester Alba ◽  
Mar Gaitán ◽  
Arabella León ◽  
Dunia Mladenic ◽  
Janez Branek

Abstract: The cultural heritage domain in general, and silk textiles in particular, is characterized by large, rich and heterogeneous data sets. Silk heritage vocabulary comes from multiple sources that have been mixed across time and space, which has led specialized organizations to use different terminology to describe their artefacts and makes data interoperability between independent catalogues very difficult. To address these issues, SILKNOW created a multilingual thesaurus of silk textiles. It was compiled by experts in textile terminology and art history and computationally implemented by experts in text mining, multi-/cross-linguality and semantic extraction from text. This paper presents the rationale behind the realization of this thesaurus.
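As a rough illustration of the kind of structure such a thesaurus entry carries, the sketch below models a multilingual concept with SKOS using rdflib. The namespace, concept names and labels are hypothetical stand-ins, not the actual SILKNOW identifiers.

```python
# A minimal sketch of a multilingual SKOS thesaurus entry for a silk
# weaving technique, built with rdflib. URIs and labels are illustrative
# placeholders, not the real SILKNOW vocabulary.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import SKOS, RDF

g = Graph()
SILK = Namespace("http://example.org/silknow/thesaurus/")  # placeholder namespace

concept = SILK["damask"]
g.add((concept, RDF.type, SKOS.Concept))
# Preferred labels in several languages enable cross-lingual retrieval.
g.add((concept, SKOS.prefLabel, Literal("damask", lang="en")))
g.add((concept, SKOS.prefLabel, Literal("damasco", lang="es")))
g.add((concept, SKOS.prefLabel, Literal("damas", lang="fr")))
# Hierarchical and alternative-label relations reconcile catalogue terms.
g.add((concept, SKOS.broader, SILK["figured-weave"]))
g.add((concept, SKOS.altLabel, Literal("damask weave", lang="en")))

print(g.serialize(format="turtle"))
```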

2020 ◽  
Vol 10 (1) ◽  
pp. 7
Author(s):  
Miguel R. Luaces ◽  
Jesús A. Fisteus ◽  
Luis Sánchez-Fernández ◽  
Mario Munoz-Organero ◽  
Jesús Balado ◽  
...  

Providing citizens with the ability to move around in an accessible way is a requirement for all cities today. However, modeling city infrastructures so that accessible routes can be computed is challenging because it involves collecting information from multiple, large-scale and heterogeneous data sources. In this paper, we propose and validate the architecture of an information system that creates an accessibility data model for cities by ingesting data from different types of sources, together with an application that people with different abilities can use to compute accessible routes. The article describes the processes that build a network of pedestrian infrastructures from OpenStreetMap information (i.e., sidewalks and pedestrian crossings), improve the network with information extracted from mobile-sensed LiDAR data (i.e., ramps, steps, and pedestrian crossings), detect obstacles using volunteered information collected from the hardware sensors of citizens' mobile devices (i.e., ramps and steps), and detect accessibility problems with software sensors in social networks (i.e., Twitter). The information system is validated through a case study in the city of Vigo (Spain).
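The core routing idea can be sketched as a filtered shortest-path query over a pedestrian graph whose edges carry the attributes the pipeline extracts. The graph, attribute names and user-profile rules below are illustrative assumptions, not the paper's actual data model.

```python
# A minimal sketch of accessibility-aware routing over a pedestrian network.
# Edge attributes ('kind', 'length') mimic what the paper derives from
# OpenStreetMap and LiDAR; values here are toy examples.
import networkx as nx

G = nx.Graph()
G.add_edge("A", "B", length=50, kind="sidewalk")
G.add_edge("B", "C", length=10, kind="steps")    # impassable for wheelchairs
G.add_edge("B", "D", length=30, kind="ramp")
G.add_edge("D", "C", length=15, kind="sidewalk")

def accessible_route(graph, source, target, forbidden_kinds):
    """Shortest path by length, skipping edge kinds the user cannot traverse."""
    usable = nx.subgraph_view(
        graph,
        filter_edge=lambda u, v: graph[u][v]["kind"] not in forbidden_kinds,
    )
    return nx.shortest_path(usable, source, target, weight="length")

# A wheelchair profile excludes steps; the route detours via the ramp.
print(accessible_route(G, "A", "C", forbidden_kinds={"steps"}))  # ['A', 'B', 'D', 'C']
```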


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Hossein Ahmadvand ◽  
Fouzhan Foroutan ◽  
Mahmood Fathy

Abstract: Data variety is one of the most important features of Big Data. It results from aggregating data from multiple sources and from the uneven distribution of data, and it causes high variation in the consumption of processing resources such as CPU time. This issue has been overlooked in previous work. To overcome it, we use Dynamic Voltage and Frequency Scaling (DVFS) to reduce the energy consumption of computation, taking two types of deadlines as constraints. Before applying the DVFS technique to compute nodes, we estimate the processing time and the frequency needed to meet the deadline. In the evaluation phase, we used a set of data sets and applications. The experimental results show that our proposed approach surpasses the other scenarios in processing real data sets: DV-DVFS achieves up to a 15% improvement in energy consumption.
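The deadline-aware frequency selection can be sketched as follows: estimate the runtime at each available frequency and choose the lowest one that still meets the deadline, since lower frequency and voltage generally mean lower energy. The frequency steps and the linear time model are illustrative assumptions, not the paper's actual estimator.

```python
# A minimal sketch of deadline-aware frequency selection in the spirit of
# DV-DVFS. All numbers are hypothetical.

AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4]  # hypothetical CPU frequency steps

def estimated_runtime(work_cycles, freq_ghz):
    """Crude model: runtime scales inversely with clock frequency."""
    return work_cycles / (freq_ghz * 1e9)

def pick_frequency(work_cycles, deadline_s):
    """Lowest frequency whose estimated runtime still meets the deadline."""
    for f in sorted(AVAILABLE_FREQS_GHZ):
        if estimated_runtime(work_cycles, f) <= deadline_s:
            return f
    return max(AVAILABLE_FREQS_GHZ)  # deadline unreachable: run at maximum

# A job of ~3e9 cycles with a 2-second deadline can run at 1.6 GHz
# instead of 2.4 GHz, saving energy while meeting the constraint.
print(pick_frequency(work_cycles=3e9, deadline_s=2.0))  # 1.6
```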


2021 ◽  
pp. 000276422110216
Author(s):  
Jasmine Lorenzini ◽  
Hanspeter Kriesi ◽  
Peter Makarov ◽  
Bruno Wüest

Protest event analysis (PEA) is a key method for studying social movements, allowing researchers to systematically analyze protest events over time and space. However, the manual coding of protest events is time-consuming and resource-intensive. Recent advances in automated approaches offer opportunities to code multiple sources and create large data sets that span many countries and years. Too often, however, the procedures used are not discussed in detail, leaving researchers with limited capacity to assess the validity and reliability of the data. In addition, many researchers have highlighted biases associated with studying protest events as reported in the news. In this study, we ask how social scientists can build on electronic news databases and computational tools to create reliable PEA data covering a large number of countries over a long period of time. We provide a detailed description of our semiautomated approach and an extensive discussion of potential biases associated with the study of protest events identified in international news sources.
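A first step common to semiautomated PEA pipelines is candidate selection: keyword-filtering a news corpus so that a classifier or human coder only sees likely protest reports. The keyword list and toy documents below are illustrative, not the authors' actual setup.

```python
# A minimal sketch of keyword-based candidate selection for protest event
# analysis. Downstream steps would extract date, location, and claims.
import re

PROTEST_KEYWORDS = re.compile(
    r"\b(protest|demonstration|march|strike|rally|riot)\w*\b", re.IGNORECASE
)

articles = [
    {"id": 1, "text": "Thousands joined a demonstration in the capital on Monday."},
    {"id": 2, "text": "The central bank left interest rates unchanged."},
    {"id": 3, "text": "Workers went on strike over pay and working conditions."},
]

# Keep only articles matching at least one protest-related keyword.
candidates = [a for a in articles if PROTEST_KEYWORDS.search(a["text"])]
print([a["id"] for a in candidates])  # [1, 3]
```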


2020 ◽  
Vol 221 (3) ◽  
pp. 1542-1554 ◽  
Author(s):  
B C Root

Summary: Current seismic tomography models show a complex environment underneath the crust, corroborated by high-precision satellite gravity observations. Both data sets are used to independently explore the density structure of the upper mantle. However, combining them proves challenging: gravity data have an inherent insensitivity in the radial direction, and seismic tomography has heterogeneous data acquisition, resulting in smoothed tomography models that de-correlate from one another at mid-to-small wavelengths. This study therefore aims to assess and quantify the effect of regularization on a seismic tomography model by exploiting the high lateral sensitivity of gravity data. The seismic tomography models SL2013sv, SAVANI, SMEAN2 and S40RTS are compared to a gravity-based density model of the upper mantle. To obtain density solutions similar to the seismic-derived models, the gravity-based model needs to be smoothed with a Gaussian filter. Different smoothing characteristics are observed for the various seismic tomography models, relating to the regularization approach in the inversions. S40RTS models built from similar seismic data but with different regularization settings show that the smoothing effect strengthens with increasing regularization, and that the type of regularization has a dominant effect on the final tomography solution. To reduce the effect of regularization on the tomography models, an enhancement procedure is proposed, to be performed within the spectral domain of the actual resolution of the seismic tomography model. The enhanced seismic tomography models show improved spatial correlation with each other and with the gravity-based model: the density anomalies have similar peak-to-peak magnitudes and correlate clearly with geological structures. Resolving the spectral misalignment between tomographic models and gravity-based solutions is a first step toward multidata inversion studies of the upper mantle that benefit from the advantages of both data sets.
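The comparison at the heart of the study can be sketched numerically: smooth the gravity-based density field with Gaussian filters of increasing width and track its spatial correlation with a (regularized, hence smoother) tomography-derived field. Both fields below are synthetic stand-ins for the real gridded models.

```python
# A minimal sketch of Gaussian smoothing and correlation analysis between
# a high-resolution gravity-based field and a smoother tomography model.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
gravity_density = rng.standard_normal((90, 180))          # high-resolution field
tomography_density = gaussian_filter(gravity_density, 4)  # plays a regularized model

def correlation(a, b):
    """Pearson correlation between two gridded anomaly fields."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Sweep the filter width: correlation peaks near the smoothing that best
# mimics the regularization implicit in the tomography model.
for sigma in (1, 2, 4, 8):
    smoothed = gaussian_filter(gravity_density, sigma)
    print(f"sigma={sigma}: r={correlation(smoothed, tomography_density):.3f}")
```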


AI Magazine ◽  
2015 ◽  
Vol 36 (1) ◽  
pp. 75-86 ◽  
Author(s):  
Jennifer Sleeman ◽  
Tim Finin ◽  
Anupam Joshi

We describe an approach for identifying fine-grained entity types in heterogeneous data graphs that is effective for unstructured data or when the underlying ontologies or semantic schemas are unknown. Identifying fine-grained entity types, rather than a few high-level types, supports coreference resolution in heterogeneous graphs by reducing the number of possible coreference relations that must be considered. Big data problems that involve integrating data from multiple sources can benefit from our approach when the data's ontologies are unknown, inaccessible or semantically trivial. For such cases, we use supervised machine learning to map entity attributes and relations to a known set of attributes and relations from appropriate background knowledge bases to predict instance entity types. We evaluated this approach in experiments on data from DBpedia, Freebase, and Arnetminer, using DBpedia as the background knowledge base.
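The supervised typing step can be sketched as a classifier over the presence of attribute and relation names. The toy training data and feature scheme below are illustrative only; the paper maps attributes to known background-knowledge-base attributes rather than using raw names directly.

```python
# A minimal sketch of predicting instance entity types from attribute and
# relation names, in the spirit of the described approach.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each entity is represented by the presence of its attributes/relations.
train_entities = [
    ({"birthDate": 1, "almaMater": 1, "knownFor": 1}, "Person"),
    ({"population": 1, "areaKm2": 1, "mayor": 1}, "City"),
    ({"foundedYear": 1, "ceo": 1, "revenue": 1}, "Company"),
    ({"birthDate": 1, "spouse": 1}, "Person"),
]
X = [features for features, _ in train_entities]
y = [label for _, label in train_entities]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# An unseen entity with person-like attributes is typed accordingly.
print(model.predict([{"birthDate": 1, "knownFor": 1}]))  # ['Person']
```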

