Expressing Data, Space, and Time with Tableau Public™

Author(s):  
Shalin Hai-Jew

Virtually every subject area depicted in a learning object could conceivably involve a space-time element. Theoretically, every event may be mapped geospatially, and, in time, these spatialized event maps may be overlaid with combined data (locations of particular natural and human-made objects, demographics, and other phenomena) to enable the identification and analysis of time-space patterns and interrelationships. Such maps support hypothesis formation, hunches, and the asking and answering of important research questions. The ability to integrate time-space insights into research work is enhanced by the wide availability of new sources of free geospatial data: open data from governments and organizations (as part of Gov 2.0), locative information from social media platforms (as part of Web 2.0), and self-created geospatial datasets from multiple sources. The resulting maps and data visualizations, imbued with a time context and the potential sequencing of maps over time, enable fresh insights and increased understanding. Complementing this wide availability of validated geospatial data, Tableau Public is a free, cloud-based tool for mapping various data sets into visualizations that are pushed out onto a public gallery for public consumption. Its interactive dashboards let users explore the data and discover insights and patterns. Tableau Public thus supports enhanced visual- and interaction-based knowing through interactive Web-friendly maps, panel charts, and data dashboards. With virtually zero computational or hosting costs for the user, Tableau Public enables the integration of geospatial mapping and analysis in ways that stand to benefit research work, data exploration, discovery, and analysis, and learning.
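To make the workflow concrete (an editorial illustration, not part of the chapter itself): Tableau Public ingests flat files such as CSVs, so a geocoded, time-stamped data set can be prepared with a few lines of Python. The file name, columns, and records below are hypothetical.

```python
import pandas as pd

# Hypothetical event records with coordinates and timestamps; Tableau Public
# can map such a CSV directly, and typically assigns geographic roles to
# columns named "latitude" and "longitude".
events = pd.DataFrame({
    "event":     ["flood", "wildfire", "earthquake"],
    "latitude":  [29.7604, 34.0522, 37.7749],
    "longitude": [-95.3698, -118.2437, -122.4194],
    "date":      pd.to_datetime(["2015-05-25", "2015-08-14", "2015-09-02"]),
})

# Write the CSV for upload; the "date" column supports the kind of
# time-sequenced maps described above.
events.to_csv("events_geocoded.csv", index=False)
```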


Author(s):  
C. Arias Munoz ◽  
M. A. Brovelli ◽  
S. Corti ◽  
G. Zamboni

The term Big Data has recently been used to describe big, highly varied, complex data sets, which are created and updated at high speed and require fast processing, namely a reduced time to filter and analyse relevant data. These data are also increasingly becoming Open Data (data that can be freely distributed), made public by governments, agencies, private enterprises, and others. There are at least two issues that can obstruct the availability and use of open big datasets. Firstly, the gathering and geoprocessing of these datasets are very computationally intensive; hence, it is necessary to integrate high-performance, preferably Internet-based, solutions to achieve the goals. Secondly, the problems of heterogeneity and inconsistency in geospatial data are well known and affect any data integration process, but they are particularly problematic for Big Geo Data. Therefore, Big Geo Data integration will be one of the most challenging issues to solve. With these applications, we demonstrate that it is possible to provide processed Big Geo Data to common users by using open geospatial standards and technologies. NoSQL databases such as MongoDB and frameworks such as RASDAMAN offer functionalities that facilitate working with larger volumes and more heterogeneous geospatial data sources.
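As a concrete hint of the MongoDB functionality referred to above (a minimal sketch, not the authors' deployment; the connection string, database, collection, and coordinates are hypothetical), a 2dsphere index supports spherical proximity queries over GeoJSON-valued fields:

```python
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
coll = client["geodata"]["sensors"]                # hypothetical db/collection

# A 2dsphere index enables spherical-geometry queries on GeoJSON fields.
coll.create_index([("location", GEOSPHERE)])

coll.insert_one({
    "name": "station-1",
    "location": {"type": "Point", "coordinates": [9.19, 45.46]},  # lon, lat
})

# Documents within 5 km of a query point, nearest first.
nearby = coll.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [9.18, 45.47]},
            "$maxDistance": 5000,  # metres
        }
    }
})
for doc in nearby:
    print(doc["name"])
```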


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5204
Author(s):  
Anastasija Nikiforova

Nowadays, governments launch open government data (OGD) portals that provide data that can be accessed and used by everyone for their own needs. Although the potential economic value of open (government) data is assessed in the millions and billions, not all open data are reused. Moreover, the open (government) data initiative, as well as users' intent for open (government) data, is changing continuously, and today, in line with IoT and smart city trends, real-time and sensor-generated data hold higher interest for users. These "smarter" open (government) data are also considered one of the crucial drivers of a sustainable economy, and they might have an impact on information and communication technology (ICT) innovation and become a creativity bridge in developing a new ecosystem in Industry 4.0 and Society 5.0. The paper inspects the OGD portals of 60 countries in order to understand how well their content corresponds to Society 5.0 expectations. It reports on the extent to which countries provide such data, focusing on open (government) data success-facilitating factors for both the portal in general and data sets of interest in particular. The presence of "smarter" data; their level of accessibility, availability, currency, and timeliness; and the support provided for users are analyzed. A list of the most competitive countries by data category is provided. This makes it possible to understand which OGD portals react to users' needs and to Industry 4.0 and Society 5.0 requests by opening and updating data for further potential reuse, which is essential in the digital data-driven world.
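Many OGD portals expose a CKAN-style metadata API, so currency and timeliness checks like those described above can be automated; a minimal sketch, assuming a hypothetical portal URL and dataset identifier:

```python
import requests
from datetime import datetime, timezone

PORTAL = "https://example-ogd-portal.org"  # hypothetical CKAN-based portal

def days_since_update(dataset_id: str) -> int:
    """Days since a data set's metadata was last modified (a currency check)."""
    resp = requests.get(
        f"{PORTAL}/api/3/action/package_show",
        params={"id": dataset_id},
        timeout=10,
    )
    resp.raise_for_status()
    modified = datetime.fromisoformat(resp.json()["result"]["metadata_modified"])
    if modified.tzinfo is None:  # CKAN timestamps are UTC but often naive
        modified = modified.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - modified).days

# e.g. days_since_update("air-quality-sensors")  # hypothetical dataset id
```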


2021 ◽  
Vol 10 (4) ◽  
pp. 251
Author(s):  
Christina Ludwig ◽  
Robert Hecht ◽  
Sven Lautenbach ◽  
Martin Schorcht ◽  
Alexander Zipf

Public urban green spaces are important for the urban quality of life. Still, comprehensive open data sets on urban green spaces are not available for most cities. Sentinel-2 satellite imagery and OpenStreetMap (OSM) data are open and globally available, so their potential for urban green space mapping is high, but it is limited by their respective uncertainties: Sentinel-2 imagery cannot distinguish public from private green spaces, and its spatial resolution of 10 m fails to capture fine-grained urban structures, while green spaces are not mapped in OSM consistently or with the same level of completeness everywhere. To address these limitations, we propose fusing these data sets under explicit consideration of their uncertainties. The Sentinel-2-derived Normalized Difference Vegetation Index (NDVI) was fused with OSM data using Dempster–Shafer theory to enhance the detection of small vegetated areas. The distinction between public and private green spaces was achieved using a Bayesian hierarchical model and OSM data. The analysis was performed on land use parcels derived from OSM data and tested for the city of Dresden, Germany. The overall accuracy of the final map of public urban green spaces was 95% and was mainly influenced by the uncertainty of the public accessibility model.
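The NDVI fused here is computed from Sentinel-2's red (B04) and near-infrared (B08) bands, both at 10 m resolution; a minimal numpy sketch of that standard formula (the band arrays are assumed to be given):

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), e.g. from Sentinel-2 bands B08 and B04."""
    red, nir = red.astype(float), nir.astype(float)
    denom = nir + red
    # Guard against division by zero over no-data pixels.
    return np.where(denom != 0, (nir - red) / np.where(denom == 0, 1, denom), np.nan)
```

Thresholding such an NDVI raster (e.g., NDVI > 0.2 for vegetated pixels) yields one evidence source that a Dempster–Shafer fusion with OSM can combine; the threshold here is illustrative, not the paper's.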


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 621
Author(s):  
Giuseppe Psaila ◽  
Paolo Fosci

Internet technology and mobile technology have enabled the production and diffusion of massive data sets concerning almost every aspect of day-to-day life. Remarkable examples are social media and apps for volunteered information production, as well as Open Data portals on which public administrations publish authoritative and (often) geo-referenced data sets. In this context, JSON has become the most popular standard for representing and exchanging possibly geo-referenced data sets over the Internet. Analysts wishing to manage, integrate, and cross-analyze such data sets need a framework that allows them to access possibly remote storage systems for JSON data sets, to retrieve and query data sets by means of a unique query language (independent of the specific storage technology), and to exploit possibly remote computational resources (such as cloud servers), while working comfortably on the PCs in their offices, more or less unaware of the real location of the resources. In this paper, we present the current state of the J-CO Framework, a platform-independent and analyst-oriented software framework for manipulating and cross-analyzing possibly geo-tagged JSON data sets. The paper presents the general approach behind the J-CO Framework, illustrating the query language by means of a simple yet non-trivial example of geographical cross-analysis. The paper also presents the novel features introduced by the re-engineered version of the execution engine and the most recent components, i.e., the storage service for large single JSON documents and the user interface that allows analysts to comfortably share data sets and computational resources with other analysts possibly working in different places around the globe. Finally, the paper reports the results of an experimental campaign, which shows that the execution engine performs in a more than satisfactory way, proving that our framework can actually be used by analysts to process JSON data sets.
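The J-CO query language itself is not reproduced here; as a plain-Python stand-in for the kind of geographical cross-analysis described (counting geo-tagged points per polygon across two JSON data sets), the following sketch uses shapely, with hypothetical input files:

```python
import json
from shapely.geometry import shape

# Hypothetical inputs: GeoJSON FeatureCollections of districts (polygons)
# and geo-tagged posts (points).
with open("districts.geojson") as f:
    districts = json.load(f)["features"]
with open("posts.geojson") as f:
    posts = json.load(f)["features"]

points = [shape(p["geometry"]) for p in posts]

# Cross-analysis: count the points falling inside each district polygon.
counts = {
    d["properties"]["name"]: sum(shape(d["geometry"]).contains(pt) for pt in points)
    for d in districts
}
print(counts)
```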


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Hossein Ahmadvand ◽  
Fouzhan Foroutan ◽  
Mahmood Fathy

Data variety is one of the most important features of Big Data. It is the result of aggregating data from multiple sources and of the uneven distribution of data. This feature of Big Data causes high variation in the consumption of processing resources such as CPU usage, an issue that has been overlooked in previous works. To overcome it, in the present work we use Dynamic Voltage and Frequency Scaling (DVFS) to reduce the energy consumption of computation. To this end, we consider two types of deadlines as our constraint. Before applying the DVFS technique to compute nodes, we estimate the processing time and the frequency needed to meet the deadline. In the evaluation phase, we used a set of data sets and applications. The experimental results show that our proposed approach surpasses the other scenarios in processing real datasets: based on the experimental results in this paper, DV-DVFS can achieve up to a 15% improvement in energy consumption.
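The abstract does not give the estimation formulas; under the common simplifying assumption that execution time scales inversely with CPU frequency, the frequency-selection step can be sketched as follows (all names and numbers are illustrative, not the paper's model):

```python
def min_frequency(t_at_fmax: float, deadline: float,
                  f_max: float, f_levels: list[float]) -> float:
    """Lowest available DVFS frequency that still meets the deadline.

    Assumes t(f) = t_at_fmax * f_max / f, i.e. execution time scales
    inversely with frequency -- an illustrative model, not the paper's.
    """
    for f in sorted(f_levels):
        if t_at_fmax * f_max / f <= deadline:
            return f  # the lowest feasible frequency saves the most energy
    return f_max      # deadline cannot be met; fall back to full speed

# Example: a task needs 40 s at 2.4 GHz and must finish within 60 s.
print(min_frequency(40.0, 60.0, 2.4e9, [1.2e9, 1.6e9, 2.0e9, 2.4e9]))  # 1.6e9
```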


2021 ◽  
pp. 000276422110216
Author(s):  
Jasmine Lorenzini ◽  
Hanspeter Kriesi ◽  
Peter Makarov ◽  
Bruno Wüest

Protest event analysis (PEA) is a key method for studying social movements, allowing researchers to systematically analyze protest events over time and space. However, the manual coding of protest events is time-consuming and resource-intensive. Recently, advances in automated approaches have offered opportunities to code multiple sources and create large data sets that span many countries and years. Too often, however, the procedures used are not discussed in detail, and researchers therefore have a limited capacity to assess the validity and reliability of the data. In addition, many researchers have highlighted biases associated with the study of protest events as reported in the news. In this study, we ask how social scientists can build on electronic news databases and computational tools to create reliable PEA data that cover a large number of countries over a long period of time. We provide a detailed description of our semiautomated approach and offer an extensive discussion of potential biases associated with the study of protest events identified in international news sources.
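As a deliberately toy stand-in for the first filtering stage of such a semiautomated pipeline (the authors' actual classifiers are not reproduced here), keyword matching over news items illustrates the idea, with hypothetical headlines:

```python
import re

# Crude protest-event filter: a toy stand-in for the trained classifiers
# used in semiautomated protest event analysis pipelines.
PROTEST_TERMS = re.compile(
    r"\b(protest|demonstrat\w*|march(?:es|ed)?|strike|rally|riot)\b", re.I
)

headlines = [
    "Thousands march in Geneva over pension reform",
    "Central bank holds interest rates steady",
    "Workers strike at auto plant for third day",
]

candidates = [h for h in headlines if PROTEST_TERMS.search(h)]
print(candidates)  # the two protest-related headlines survive the filter
```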


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Gangadhar Ch ◽  
S. Jana ◽  
Sankararao Majji ◽  
Prathyusha Kuncha ◽  
Fantin Irudaya Raj E. ◽  
...  

Purpose
For the first time in a decade, a new form of pneumonia-causing virus, the coronavirus COVID-19, appeared in Wuhan, China. To date, it has affected millions of people and resulted in thousands of deaths around the world. To stop the spread of the virus, infected people must be isolated. Computed tomography (CT) imaging is very accurate in revealing the details of the lungs and allows clinicians to detect COVID-19. However, the analysis of CT scans, which can include hundreds of images, may cause delays in hospitals. The main purpose of this work is to show how artificial intelligence (AI) in radiology could help detect COVID-19-positive cases.
Design/methodology/approach
CT scanning is a medical imaging procedure that gives a three-dimensional (3D) representation of the lungs for clinical purposes. The volumetric 3D data sets can be regarded as axial, coronal, and sagittal views. By using AI, the presence of the virus can be diagnosed.
Findings
The paper discusses the use of AI for COVID-19 detection, the CT classification issue, and details of COVID-19 vaccination.
Originality/value
All data were genuinely collected, and the research was carried out with the authors' own methodology.
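The slicing of a volumetric CT data set into axial, coronal, and sagittal planes mentioned above is straightforward to express; a minimal numpy sketch with an assumed (slices, rows, columns) axis order:

```python
import numpy as np

# Hypothetical CT volume with axes (axial slices, rows, columns),
# e.g. 120 axial slices of 512x512 pixels.
volume = np.zeros((120, 512, 512), dtype=np.int16)

axial    = volume[60, :, :]   # one axial (transverse) plane
coronal  = volume[:, 256, :]  # one coronal plane
sagittal = volume[:, :, 256]  # one sagittal plane

print(axial.shape, coronal.shape, sagittal.shape)  # (512, 512) (120, 512) (120, 512)
```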

