Approach to generate 3D‐printed terrain models using free software and open data sources: Application to military planning

2020 ◽ Vol 28 (3) ◽ pp. 477-489
Author(s): Mercedes Solla, Carlos Casqueiro, Ignacio Cuvillo
Epidemiologia ◽ 2021 ◽ Vol 2 (3) ◽ pp. 315-324
Author(s): Juan M. Banda, Ramya Tekumalla, Guanyu Wang, Jingyuan Yu, Tuo Liu, ...

As the COVID-19 pandemic continues to spread worldwide, an unprecedented amount of open data is being generated for medical, genetics, and epidemiological research. The unparalleled rate at which research groups around the world are releasing data and publications on the ongoing pandemic allows other scientists to learn from local experiences and from data generated on the front lines of the COVID-19 pandemic. However, there is a need to integrate additional data sources that map and measure the role of the social dynamics of such a unique worldwide event in biomedical, biological, and epidemiological analyses. For this purpose, we present a large-scale curated dataset of over 1.12 billion tweets related to COVID-19 chatter, generated from 1 January 2020 to 27 June 2021 (at the time of writing) and growing daily. This resource gives researchers worldwide a freely available additional data source for a wide and diverse range of research projects, such as epidemiological analyses, studies of emotional and mental responses to social distancing measures, the identification of sources of misinformation, and the stratified measurement of sentiment towards the pandemic in near real time, among many others.
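
Tweet datasets of this kind are typically distributed as files of tweet IDs with per-tweet metadata, which researchers filter before "hydrating" via the Twitter API. The sketch below shows one such downstream use, building a daily tweet-volume series; the column names and sample values are invented for illustration and are not the dataset's actual schema.

```python
import csv
import io
from collections import Counter

# Toy sample mimicking a tweet-ID metadata file (tab-separated).
# The column layout here is an assumption for the example.
SAMPLE_TSV = """\
tweet_id\tdate\tlang
1212121212121212121\t2020-03-01\ten
1212121212121212122\t2020-03-01\tes
1212121212121212123\t2020-03-02\ten
"""

def daily_counts(tsv_text, lang=None):
    """Count tweets per day, optionally restricted to one language."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    counts = Counter()
    for row in reader:
        if lang is None or row["lang"] == lang:
            counts[row["date"]] += 1
    return dict(counts)

print(daily_counts(SAMPLE_TSV))             # all languages
print(daily_counts(SAMPLE_TSV, lang="en"))  # English only
```

A series like this is the starting point for the near-real-time, stratified analyses the abstract mentions (e.g., volume or sentiment per day per language).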


2018 ◽ Vol 7 (9) ◽ pp. 342
Author(s): Adam Salach, Krzysztof Bakuła, Magdalena Pilarska, Wojciech Ostrowski, Konrad Górski, ...

In this paper, the results of an experiment assessing the vertical accuracy of generated digital terrain models are presented. The models were created using two techniques: LiDAR and photogrammetry. The data were acquired with an ultralight laser scanner dedicated to Unmanned Aerial Vehicle (UAV) platforms, which provides very dense point clouds (180 points per square meter), and with an RGB digital camera that collects data at very high resolution (a ground sampling distance of 2 cm). The vertical error of the digital terrain models (DTMs) was evaluated against surveying data measured in the field and compared to airborne laser scanning collected with a manned plane. The data were acquired in summer during a corridor flight mission over levees and their surroundings, where various types of land cover were observed. The experimental results showed unequivocally that the terrain models obtained using LiDAR technology were more accurate. An attempt to assess the accuracy and penetration capability of the point cloud from the image-based approach, with reference to various types of land cover, was conducted based on Real Time Kinematic Global Navigation Satellite System (GNSS-RTK) measurements and compared to archival airborne laser scanning data. The vertical accuracy of the DTM was evaluated separately for uncovered and vegetated areas, providing information about the influence of vegetation height on the results of bare-ground extraction and DTM generation. In uncovered and low-vegetation areas (0–20 cm), the vertical accuracies of the digital terrain models generated from the different data sources were quite similar: for the UAV Laser Scanning (ULS) data, the RMSE was 0.11 m, and for the image-based data collected from the UAV platform it was 0.14 m, whereas for medium vegetation (higher than 60 cm) the RMSE values from these two data sources were 0.11 m and 0.36 m, respectively.
A decrease in accuracy of 0.10 m for every 20 cm of vegetation height was observed for the photogrammetric data; no such dependency was noticed for models created from the ULS data.
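
The RMSE figures above come from comparing model heights against reference checkpoints, stratified by vegetation class. A minimal sketch of that computation, with invented checkpoint values (the real study used GNSS-RTK field measurements):

```python
import math

# Hypothetical checkpoints: (reference_height_m, dtm_height_m, veg_height_cm).
# The stratification mirrors the paper's classes (uncovered/low: 0-20 cm,
# medium: > 60 cm); the numbers themselves are illustrative only.
checkpoints = [
    (100.00, 100.08, 5),
    (101.50, 101.37, 10),
    (102.30, 102.41, 15),
    (98.70, 99.05, 80),
    (99.20, 98.82, 95),
]

def rmse(pairs):
    """Root-mean-square error of (reference, model) height pairs."""
    return math.sqrt(sum((r - m) ** 2 for r, m in pairs) / len(pairs))

low = [(r, m) for r, m, v in checkpoints if v <= 20]
medium = [(r, m) for r, m, v in checkpoints if v > 60]

print(f"RMSE, low vegetation:    {rmse(low):.2f} m")
print(f"RMSE, medium vegetation: {rmse(medium):.2f} m")
```

Computing the two classes separately is what reveals the vegetation-height dependency reported for the photogrammetric data.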


Author(s): Francesco Corcoglioniti, Marco Rospocher, Roldano Cattoni, Bernardo Magnini, Luciano Serafini

This chapter describes the KnowledgeStore, a scalable, fault-tolerant, and Semantic Web grounded open-source storage system to jointly store, manage, retrieve, and query interlinked structured and unstructured data, designed especially to manage all the data involved in Knowledge Extraction applications. The chapter presents the concept, design, function, and implementation of the KnowledgeStore, and reports on its concrete usage in four application scenarios within the NewsReader EU project, where it has been successfully used to store and support the querying of millions of news articles interlinked with billions of RDF triples, both extracted from text and imported from Linked Open Data sources.
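
The core idea, text mentions interlinked with background RDF triples, can be illustrated with a toy in-memory join. The URIs, prefixes, and the `facts_for_article` helper below are invented for the example; the real KnowledgeStore exposes this kind of lookup through SPARQL and a REST API, not this function.

```python
# Toy illustration of interlinked structured and unstructured data:
# background triples (as imported from Linked Open Data)...
triples = [
    ("dbpedia:Barack_Obama", "rdf:type", "dbo:Person"),
    ("dbpedia:Barack_Obama", "dbo:birthPlace", "dbpedia:Honolulu"),
]

# ...and mentions extracted from news text, each linked to an entity.
mentions = [
    {"article": "news:2013/04/01/item1", "span": "Obama",
     "entity": "dbpedia:Barack_Obama"},
]

def facts_for_article(article_uri):
    """Join the text mentions of one article with the background triples."""
    entities = {m["entity"] for m in mentions if m["article"] == article_uri}
    return [t for t in triples if t[0] in entities]

for s, p, o in facts_for_article("news:2013/04/01/item1"):
    print(s, p, o)
```

The join direction here (mention → entity → triples) is exactly the kind of cross-layer query the system is built to answer at the scale of millions of articles.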


2018 ◽ Vol 186 ◽ pp. 12013
Author(s): Luisa Schiavone, Federico Morando

The CoBiS is a network formed by 65 libraries. The project is a pilot for Piedmont that aims to provide the Committee with an infrastructure for LOD publishing, creating a triplification pipeline designed to be easy to automate and replicate. It is being realized with open-source technologies, such as the RML mapping language and the JARQL tool, which uses Linked Data to describe the conversion of XML, JSON, or tabular data into RDF. The first challenge consisted in enabling dialogue between heterogeneous data sources, coming from four different library software systems (Clavis, Erasmo, SBNWeb, and BIBLIOWin 5.0web) and covering different types of data (bibliographic, multimedia, and archival). The information contained in the catalogs is progressively interlinked with external data sources, such as Wikidata, the VIAF, LoC, and BNF authority files, Wikipedia, and the Dizionario Biografico degli Italiani. Partners of the CoBiS LOD Project are: the National Institute for Astrophysics (INAF), the Turin Academy of Sciences, the Olivetti Historical Archives Association, the Alpine Club National Library, the Deputazione Subalpina di Storia Patria, and the National Institute for Metrological Research (INRIM). The technical realization of the project is entrusted to Synapta, and it is partially sponsored by the Piedmont Region.
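
A triplification pipeline of this kind maps each catalog record to RDF triples. The sketch below is a simplified, hand-rolled analogue of what an RML/JARQL mapping declares, one JSON bibliographic record turned into N-Triples lines; the subject URI pattern and the sample record are invented, not the CoBiS schema, though the Dublin Core predicates and VIAF URI shape are standard.

```python
import json

# A hypothetical bibliographic record as it might come from a library system.
record_json = '{"id": "bib0001", "title": "Le Alpi", "author_viaf": "12345"}'

def record_to_ntriples(raw):
    """Map one JSON record to N-Triples (simplified stand-in for RML)."""
    rec = json.loads(raw)
    subject = f"<http://example.org/cobis/{rec['id']}>"
    return [
        f'{subject} <http://purl.org/dc/terms/title> "{rec["title"]}" .',
        f"{subject} <http://purl.org/dc/terms/creator> "
        f"<http://viaf.org/viaf/{rec['author_viaf']}> .",
    ]

for line in record_to_ntriples(record_json):
    print(line)
```

Linking the creator to a VIAF URI, rather than storing a bare name string, is what makes the interlinking with external authority files possible.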


Author(s): Ronald P. Reck, Kenneth B. Sall, Wendy A. Swanbeck

As music is a topic of interest to many, it is no surprise that developers have applied web and semantic technology to provide various RDF datasets describing relationships among musical artists, albums, songs, genres, and more. As avid fans of blues and rock music, we wondered whether we could construct SPARQL queries to examine properties and relationships between performers in order to answer global questions such as "Who has had the greatest impact on rock music?" Our primary focus was Eric Clapton, a musical artist with a decades-spanning career who has enjoyed a very successful solo career as well as having performed in several world-renowned bands. The application of semantic technology to a public dataset can provide useful insights into how similar approaches can be applied to realistic domain problems, such as finding relationships between persons of interest. A clear understanding of the semantics of the available RDF properties in the dataset is of course crucial, but achieving it is a substantial challenge, especially when leveraging information from similar yet different data sources. This paper explores the use of the DBpedia and MusicBrainz data sources using OpenLink Virtuoso Universal Server with a Drupal frontend. Much attention is given to the challenges we encountered, especially with respect to relatively large datasets from community-entered open data sources of varying quality, and to the strategies we employed or recommend to overcome those challenges.
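
A starting point for such questions is a SPARQL query over DBpedia's band-membership properties. The sketch below only constructs the query string (no network call); `dbo:bandMember` and `dbo:formerBandMember` are real DBpedia ontology properties, but whether they fully capture an artist's band history depends on the community-entered data, which is exactly the data-quality challenge discussed above.

```python
# Build a SPARQL query that finds bands a given artist belongs or
# belonged to, via DBpedia's membership properties. Run the result
# against a public endpoint such as https://dbpedia.org/sparql.

def band_membership_query(artist_uri):
    """Return a SPARQL query string for an artist's band memberships."""
    return f"""\
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT DISTINCT ?band WHERE {{
  {{ ?band dbo:bandMember <{artist_uri}> . }}
  UNION
  {{ ?band dbo:formerBandMember <{artist_uri}> . }}
}}"""

query = band_membership_query("http://dbpedia.org/resource/Eric_Clapton")
print(query)
```

The UNION over current and former membership is typical of the property-semantics issue the paper raises: two similar properties must both be queried to get a complete answer.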

