Plataformas de datos abiertos: disponibilidad de mercado [Open data platforms: market availability]

2016 ◽  
Vol 3 (3) ◽  
pp. 137-142
Author(s):  
Viena Muirragui Irrazábal ◽  
Fernando Pacheco Olea ◽  
Edwin León Plúas ◽  
Fabricio Guevara Viejó

The paper presents the opportunities that exist, within the context of market availability, for the use of open data platforms. Not all organizations have sufficient resources of their own to implement such platforms from scratch, so they often need to turn to platforms developed by companies that specialize in them. Taking this option resolves the implementation problem, but it can also mean losing the ability to fully customize and adapt the platform to the specific needs of the organization that requires it. With this in mind, this review article details the characteristics of some of the main platforms available on the market, selected for their applicability, whether offered as free or open source software or, in other cases, under a commercial license. The data considered for this analysis come from the information that the vendors themselves have published in documentation available on the web, as well as from hands-on experience with trial versions and openly usable examples.

Author(s):  
Shinji Kobayashi ◽  
Luis Falcón ◽  
Hamish Fraser ◽  
Jørn Braa ◽  
Pamod Amarakoon ◽  
...  

Objectives: The emerging COVID-19 pandemic has caused one of the world’s worst health disasters compounded by social confusion with misinformation, the so-called “Infodemic”. In this paper, we discuss how open technology approaches - including data sharing, visualization, and tooling - can address the COVID-19 pandemic and infodemic. Methods: In response to the call for participation in the 2020 International Medical Informatics Association (IMIA) Yearbook theme issue on Medical Informatics and the Pandemic, the IMIA Open Source Working Group surveyed recent works related to the use of Free/Libre/Open Source Software (FLOSS) for this pandemic. Results: FLOSS health care projects including GNU Health, OpenMRS, DHIS2, and others, have responded from the early phase of this pandemic. Data related to COVID-19 have been published from health organizations all over the world. Civic Technology, and the collaborative work of FLOSS and open data groups were considered to support collective intelligence on approaches to managing the pandemic. Conclusion: FLOSS and open data have been effectively used to contribute to managing the COVID-19 pandemic, and open approaches to collaboration can improve trust in data.


2016 ◽  
Author(s):  
Jean-Michel Follin ◽  
Maïté Fahrasmane ◽  
Élisabeth Simonetto

More and more historical data are available on the web. In France, old cadastral maps are regularly published by the “départements”. Such material is relevant to various applications (in-the-field search for specific objects such as old boundary stakes, historical studies of demography, human activities, land cover, etc.). The GeF laboratory is working on the development of a complete methodological toolchain to vectorise, correct and analyse cadastral parcels and their evolution, using only open source software and programming languages (QGIS, GDAL, Python). This article details the use of one part of this toolchain - georeferencing old cadastral data - on parcels located near the Loir river, in two villages of southern Sarthe: Vaas and Aubigné-Racan. After a presentation of our methodological toolchain, we discuss our first results.
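The abstract names GDAL and Python but does not reproduce the toolchain itself. As an illustration only, the following minimal sketch shows one common way to georeference a scanned map sheet with the GDAL Python bindings: attach ground control points (GCPs) to the raster, then warp it into a target coordinate system. The file names, GCP coordinates and the choice of EPSG:2154 (Lambert-93, widely used for French data) are placeholders, not values from the study.

```python
# Minimal sketch: georeference a scanned cadastral sheet with GDAL's Python
# bindings. File names, GCP pixel/map coordinates and the target CRS are
# illustrative placeholders, not values from the study.
from osgeo import gdal

# Ground control points: gdal.GCP(map_x, map_y, z, pixel_column, pixel_row).
# In practice these are picked on features identifiable both on the old sheet
# and on a modern reference layer (crossroads, boundary marks, churches...).
gcps = [
    gdal.GCP(432100.0, 6730250.0, 0, 120.5, 340.2),
    gdal.GCP(433800.0, 6730180.0, 0, 2980.1, 355.7),
    gdal.GCP(433750.0, 6728600.0, 0, 2955.4, 2890.3),
    gdal.GCP(432150.0, 6728650.0, 0, 140.8, 2875.6),
]

# Attach the GCPs to the scanned image, then warp it into the target CRS.
gdal.Translate("sheet_gcp.tif", "scanned_sheet.tif",
               GCPs=gcps, outputSRS="EPSG:2154")
gdal.Warp("sheet_georef.tif", "sheet_gcp.tif",
          dstSRS="EPSG:2154", resampleAlg="bilinear",
          tps=False)  # polynomial transform; tps=True for thin plate spline
```

The residual error at the control points, and the choice between a polynomial and a thin-plate-spline transform, are the main quality levers at this step.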


Author(s):  
Tobias Haug ◽  
Sarah Ebling

This study reports on the use of open-source software for sign language learning and (self-)assessment. A Yes/No vocabulary size test for Swiss German Sign Language (Deutschschweizerische Gebärdensprache, DSGS) was developed, targeting beginning adult learners. The Web-based test, which can be used for self-assessment or placement purposes, was administered to 20 adult learners of DSGS aged 24 to 55 (M = 39.3). The learners filled out a background questionnaire, took the Yes/No test, and filled out a feedback questionnaire. The learners' comments about the suitability of the Web-based DSGS vocabulary self-assessment instrument provided concrete feedback for improving the system.
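The abstract does not describe how the test is scored. Yes/No vocabulary formats commonly mix real items with pseudo-items so that "yes" responses to pseudo-items (false alarms) can be used to adjust the raw score for guessing. Purely as a generic illustration, and not necessarily the scoring used in this DSGS study, the sketch below applies the standard correction-for-guessing adjustment to made-up response counts.

```python
# Generic sketch of scoring a Yes/No vocabulary test with a correction for
# guessing based on false alarms to pseudo-items. NOT necessarily the scoring
# used in the DSGS study; the item counts and responses are made up.

def adjusted_known_proportion(hits, real_items, false_alarms, pseudo_items):
    """Correction-for-guessing estimate of the proportion of real items known.

    hits / real_items        -> hit rate h ("yes" to real signs)
    false_alarms / pseudo    -> false-alarm rate f ("yes" to pseudo-signs)
    adjusted = (h - f) / (1 - f), clipped at 0.
    """
    h = hits / real_items
    f = false_alarms / pseudo_items
    if f >= 1.0:                    # said "yes" to every pseudo-item
        return 0.0
    return max(0.0, (h - f) / (1.0 - f))

# Example: 60 real signs, 20 pseudo-signs; the learner answers "yes" to
# 45 real items and 4 pseudo-items.
print(adjusted_known_proportion(45, 60, 4, 20))   # ~0.69
```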


Author(s):  
Ricardo Oliveira ◽  
Rafael Moreno

Federal, state, and local government agencies in the USA are investing heavily in the dissemination of the Open Data sets they produce. The main driver behind this thrust is to increase agencies’ transparency and accountability, as well as to improve citizens’ awareness. However, not all Open Data sets are easy to access and integrate with other Open Data sets, even those available from the same agency. The City and County of Denver Open Data Portal distributes several types of geospatial datasets; one of them is the city parcel layer, containing 224,256 records. Although this data layer contains many pieces of information, it is incomplete for some custom purposes. Open source software was used to first collect data from diverse City of Denver Open Data sets, then upload them to a repository in the Cloud, where they were processed using a PostgreSQL installation on the Cloud and Python scripts. Our method was able to extract non-spatial information from a ‘not-ready-to-download’ source that could then be combined with the initial data set to enhance its potential use.
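The actual scripts are not reproduced in the abstract. As an illustration of the general pattern it describes (fetch an open-data extract, load it into a cloud-hosted PostgreSQL database for later joins with the parcel layer), the sketch below uses requests and psycopg2. The URL, connection string, table name and column names are placeholders, not the resources used in the study.

```python
# Minimal sketch of the pattern described above: pull an open-data CSV and
# load it into a (cloud-hosted) PostgreSQL table for later joins with the
# parcel layer. URL, DSN, table and column names are placeholders only.
import csv
import io

import psycopg2
import requests

CSV_URL = "https://example.org/open-data/some_dataset.csv"        # placeholder
DSN = "host=my-cloud-db dbname=denver user=etl password=secret"   # placeholder

resp = requests.get(CSV_URL, timeout=60)
resp.raise_for_status()
rows = list(csv.DictReader(io.StringIO(resp.text)))

conn = psycopg2.connect(DSN)
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS extra_parcel_info (
            parcel_id TEXT PRIMARY KEY,
            attribute TEXT
        )
    """)
    for row in rows:
        cur.execute(
            """INSERT INTO extra_parcel_info (parcel_id, attribute)
               VALUES (%s, %s)
               ON CONFLICT (parcel_id) DO UPDATE
               SET attribute = EXCLUDED.attribute""",
            (row["PARCEL_ID"], row["SOME_ATTRIBUTE"]),   # placeholder columns
        )
conn.close()
```

From there, the non-spatial attributes can be joined to the parcel geometries in PostGIS or from a desktop GIS such as QGIS.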


2018 ◽  
Author(s):  
Tomislav Hengl ◽  
Ichsani Wheeler ◽  
Robert A MacMillan

Using the term "Open Data" has become a bit of a fashion, but using it without clear specifications is misleading, i.e. it can be considered just an empty phrase. Probably even worse is the term "Open Science" - can science be NOT open at all? Are we reinventing something that should be obvious from the start? This guide tries to clarify some key aspects of Open Data, Open Source Software and Crowdsourcing using examples of projects and businesses. It aims at helping you understand and appreciate the complexity of Open Data, Open Source Software and Open Access publications. It was specifically written for producers and users of environmental data; however, the guide will likely be useful to any data producer and user.

