Open Standards used in Oceanography Research Spatial Data Repositories in Spain

2020 ◽  
Vol 40 (05) ◽  
pp. 306-312
Author(s):  
Enrique Wulff

Spatial data repositories specialised in geographic information systems (GIS) are an extension of map libraries and archives, where much of the increase in the use and citation of their geospatial data comes from making it available through Open Geospatial Consortium (OGC) web services. In this paper, the resource needs and teaching perspectives of these open standards are described and explained as they become a domain of application in spatial data repositories and in marine data literacy. The research is based on examining ocean research data rules and contexts in Spain within the European obligations defined by the EU INSPIRE directive. On that premise, Spanish spatial data infrastructures (SDI) are shown integrating OGC Web Services (OWS) with repositories for ocean observation data, a typical kind of big data. The study revealed that the broad European support for implementing the big data open standards (OGC) in the oceanographic community follows a model suitable for library management systems. However, Spanish participation in European ocean data spaces is limited, and a likely explanation is that this question has not been discussed for about a decade. These findings strengthen the links between spatial data repositories and OGC standards and help identify requirements for interoperability work.
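To make the OGC Web Services integration concrete, the following sketch assembles a WFS 2.0 GetFeature request URL of the kind an SDI repository would answer. The endpoint and feature-type name are hypothetical placeholders, not services named in the paper.

```python
# Minimal sketch of assembling an OGC WFS 2.0 GetFeature request URL,
# the kind of request a spatial data repository answers when it exposes
# data via OGC Web Services. Endpoint and layer name are hypothetical.
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, bbox, srs="EPSG:4326"):
    """Build a WFS 2.0 GetFeature request for features inside a bounding box."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,
        "srsName": srs,
        # BBOX is minx,miny,maxx,maxy in the coordinate system given by srsName
        "bbox": ",".join(str(c) for c in bbox),
        "outputFormat": "application/json",
    }
    return base_url + "?" + urlencode(params)

url = wfs_getfeature_url(
    "https://example-sdi.es/wfs",   # hypothetical SDI endpoint
    "ocean:observations",           # hypothetical feature type
    (-9.5, 35.9, 3.3, 43.8),        # rough bounding box of Spain
)
print(url)
```

A client library such as OWSLib wraps this same key-value-pair encoding; the point here is only that an OGC request is a small, standardised set of parameters any repository can serve.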

2019 ◽  
Vol 17 (1/2) ◽  
pp. 169-175 ◽  
Author(s):  
Justin Joseph Grandinetti

The 2017 partnership between the National Football League (NFL) and Amazon Web Services (AWS) promises novel forms of cutting-edge real-time statistical analysis through the use of both radio frequency identification (RFID) chips and Amazon’s cloud-based machine learning and data-analytics tools. This use of RFID is heralded for its possibilities: for broadcasters, who are now capable of providing more thorough analysis; for fans, who can experience the game on a deeper analytical level using the NFL’s Next Gen Stats; and for coaches, who can capitalize on data-driven pattern recognition to gain a statistical edge over their competitors in real time. In this paper, we respond to calls for further examination of the discursive positionings of RFID and big data technologies (Frith 2015; Kitchin and Dodge 2011). Specifically, this synthesis of RFID and cloud computing infrastructure via corporate partnership provides an alternative discursive positioning of two technologies that are often part of asymmetrical relations of power (Andrejevic 2014). Consequently, it is critical to examine the efforts of Amazon and the NFL to normalize pervasive spatial data collection and analytics for a mass audience by presenting these surveillance technologies as helpful tools for accessing new forms of data-driven knowing and analysis.


Author(s):  
Pankaj Dadheech ◽  
Dinesh Goyal ◽  
Sumit Srivastava ◽  
Ankit Kumar

Spatial queries are frequently run on Hadoop for large-scale data processing. However, the vast size of spatial information makes it difficult to process spatial queries efficiently, which is why the Hadoop system is used to process such big data. We use Boolean queries and geometric Boolean spatial data for query optimization within the Hadoop system. In this paper, we present a lightweight and adaptable spatial data index for big data processed in Hadoop frameworks. Results demonstrate the efficiency and effectiveness of our spatial indexing system for various spatial queries.
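The idea of a lightweight spatial index answering Boolean queries can be illustrated on a single machine; the sketch below uses a uniform grid index and combines two range predicates with set operations. This is a plain-Python illustration of the concept, not the authors' Hadoop implementation, and the cell size and data are invented.

```python
# Single-machine sketch of a lightweight grid-based spatial index with
# Boolean query evaluation. The paper's version runs on Hadoop; the
# indexing idea itself fits in a few lines of plain Python.
from collections import defaultdict

CELL = 10.0  # grid cell size in coordinate units (an arbitrary choice)

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

class GridIndex:
    def __init__(self):
        self.cells = defaultdict(list)  # cell -> list of (id, x, y)

    def insert(self, pid, x, y):
        self.cells[cell_of(x, y)].append((pid, x, y))

    def query_bbox(self, minx, miny, maxx, maxy):
        """Return ids of points inside the box, visiting only covered cells."""
        hits = set()
        for cx in range(int(minx // CELL), int(maxx // CELL) + 1):
            for cy in range(int(miny // CELL), int(maxy // CELL) + 1):
                for pid, x, y in self.cells.get((cx, cy), []):
                    if minx <= x <= maxx and miny <= y <= maxy:
                        hits.add(pid)
        return hits

idx = GridIndex()
idx.insert("a", 5, 5)
idx.insert("b", 25, 5)
idx.insert("c", 25, 25)

# Boolean combination of two range predicates: inside box1 AND NOT inside box2
box1 = idx.query_bbox(0, 0, 30, 10)    # {"a", "b"}
box2 = idx.query_bbox(20, 0, 30, 10)   # {"b"}
print(box1 - box2)                      # {"a"}
```

In a MapReduce setting, the same cell key would serve as the shuffle key, so each reducer only sees candidates from the grid cells a query actually covers.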


The term “big data” refers to “high-volume data sets that are relatively complex in nature and pose challenges in processing and analysis using conventional database management tools”. In the digital universe, the volume and variety of data we deal with today have grown massively from different sources such as business informatics, social media networks, high-definition TV images, mobile network data, banking data from ATM machines, genomics and GPS trails, telemetry from automobiles, meteorology, financial market data, etc. Data scientists confirm that 80% of the data gathered today is in unstructured format, i.e. in the form of images, pixel data, videos, geospatial data, PDF files, etc. Because of the massive growth of data and its different formats, organizations face multiple challenges in capturing, storing, mining, analyzing, and visualizing big data. This paper aims to exemplify the key challenges faced by most organizations and the significance of implementing emerging big data techniques for effective extraction of business intelligence to make better and faster decisions.


2020 ◽  
Author(s):  
Nureni Adeboye ◽  
Oyedunsi Olayiwola

Large data repositories and database management still remain a mirage and a tough challenge for most developing countries and institutions around the globe. This necessitates improvising the gathering of suitable data with a good spread to serve as a complement in the absence of sufficient real-life data. Statisticians are increasingly posed with thought-provoking and even paradoxical questions, challenging our qualifications for entering the statistical paradises created by big data. Through classroom activities that involved both sourced real-life and simulated big data in the R environment, models were built, and the estimates obtained from the adopted techniques revealed the robustness of simulated datasets in a unified observation, with improved significance values as reflected in the results. Students appreciated the use of such big data, as it enhances their machine learning ability and provides sufficient data without delay.
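The classroom pattern described above, simulating a large dataset from a known model and then recovering the parameters, can be sketched as follows. The paper's exercises used R; this is an equivalent illustration in Python with invented parameter values.

```python
# Sketch of the simulate-then-fit classroom exercise: generate a large
# dataset from known parameters, fit ordinary least squares, and check
# that the estimates recover the truth. Parameter values are invented.
import random

random.seed(42)
N = 100_000
beta0, beta1 = 2.0, 0.5          # "true" parameters used to generate the data

xs = [random.uniform(0, 10) for _ in range(N)]
ys = [beta0 + beta1 * x + random.gauss(0, 1) for x in xs]

# OLS for a single predictor, computed from centred sums
mean_x = sum(xs) / N
mean_y = sum(ys) / N
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
b1 = sxy / sxx
b0 = mean_y - b1 * mean_x
print(b0, b1)  # estimates should sit close to 2.0 and 0.5
```

With a simulated sample this large, the estimates land very close to the generating values, which is exactly the property that lets simulated data stand in for scarce real-life data in a teaching setting.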


2017 ◽  
Author(s):  
Erwan Bocher ◽  
Olivier Ertz

Although most Spatial Data Infrastructures offer service-based visualization of geospatial data, requirements are often at a very basic level, leading to poor-quality maps. This is a general observation for any geospatial architecture as soon as open standards such as those of the Open Geospatial Consortium (OGC) are applied. To improve the situation, this paper focuses on improvements on the portrayal-interoperability side by considering standardization aspects. We propose two major redesign recommendations: first, to consolidate the cartographic theory at the core of the OGC Symbology Encoding standard; second, to build the standard in a modular way so that it is ready to be extended with future cartographic requirements. Thus, we start by defining portrayal interoperability by means of typical use cases that frame the concept of sharing cartography. Then we bring to light the strengths and limits of the relevant open standards in this context. Finally, we propose a set of recommendations to overcome these limits and make the use cases a reality. Even if a cartographic-oriented standard cannot act as a complete cartographic design framework by itself, we argue that pushing forward the standardization work dedicated to cartography is a way to share and disseminate good practices and, ultimately, to improve the quality of visualizations.
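The Symbology Encoding standard discussed above describes map styles as XML symbolizer elements. The sketch below builds a simplified point symbolizer of that shape; namespaces and several required elements are omitted for brevity, so this is an illustration of the structure, not a schema-valid SE document.

```python
# Illustrative sketch of the XML symbolizer shape defined by the OGC
# Symbology Encoding standard. Namespaces and many required elements
# are omitted, so this is a simplified shape, not a valid SE document.
import xml.etree.ElementTree as ET

def point_symbolizer(well_known_name, fill_color, size):
    """Build a minimal PointSymbolizer-like element tree."""
    sym = ET.Element("PointSymbolizer")
    graphic = ET.SubElement(sym, "Graphic")
    mark = ET.SubElement(graphic, "Mark")
    ET.SubElement(mark, "WellKnownName").text = well_known_name
    fill = ET.SubElement(mark, "Fill")
    css = ET.SubElement(fill, "SvgParameter", name="fill")
    css.text = fill_color
    ET.SubElement(graphic, "Size").text = str(size)
    return sym

xml = ET.tostring(point_symbolizer("circle", "#FF0000", 8), encoding="unicode")
print(xml)
```

The modularity argument in the paper amounts to keeping each symbolizer type (point, line, polygon, text) an independently extensible building block rather than one monolithic schema.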


2020 ◽  
Vol 1 ◽  
pp. 1-23
Author(s):  
Majid Hojati ◽  
Colin Robertson

Abstract. With new forms of digital spatial data driving new applications for monitoring and understanding environmental change, there are growing demands on traditional GIS tools for spatial data storage, management and processing. Discrete Global Grid Systems (DGGS) are methods to tessellate the globe into multiresolution grids, which represent a global spatial fabric capable of storing heterogeneous spatial data and offering improved performance in data access, retrieval, and analysis. While DGGS-based GIS may hold potential for next-generation big data GIS platforms, few studies have tried to implement them as a framework for operational spatial analysis. Cellular Automata (CA) is a classic dynamic modelling framework which has been used with the traditional raster data model for various kinds of environmental modelling, such as wildfire modelling and urban expansion modelling. The main objectives of this paper are to (i) investigate the possibility of using a DGGS for running dynamic spatial analysis, (ii) evaluate CA as a generic data model for modelling dynamic phenomena within a DGGS data model and (iii) evaluate an in-database approach for CA modelling. To do so, a case study into wildfire spread modelling is developed. Results demonstrate that using a DGGS data model not only provides the ability to integrate different data sources, but also provides a framework for spatial analysis without geometry-based operations. This results in a simplified architecture and a common spatial fabric to support the development of a wide array of spatial algorithms. While considerable work remains to be done, CA modelling within a DGGS-based GIS is a robust and flexible modelling framework for big-data GIS analysis in an environmental monitoring context.
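A cellular-automaton fire-spread step of the kind the paper runs on a DGGS can be sketched in a few lines. Real DGGS cells are multiresolution and typically hexagonal, so the square grid and the ignition rule below are deliberate simplifications invented for illustration.

```python
# Toy cellular-automaton fire-spread step on a square grid, illustrating
# the kind of dynamic model the paper runs on a DGGS. Real DGGS cells are
# hexagonal and multiresolution; this square grid is a simplification.
UNBURNED, BURNING, BURNED = 0, 1, 2

def step(grid):
    """One CA update: burning cells burn out; an unburned cell with a
    burning 4-neighbour ignites."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                new[r][c] = BURNED
            elif grid[r][c] == UNBURNED:
                neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if any(0 <= i < rows and 0 <= j < cols
                       and grid[i][j] == BURNING for i, j in neighbours):
                    new[r][c] = BURNING
    return new

grid = [[UNBURNED] * 5 for _ in range(5)]
grid[2][2] = BURNING          # ignition point in the centre
grid = step(grid)
print(grid[2][1], grid[2][2])  # a neighbour ignites, the centre burns out
```

The in-database approach the paper evaluates moves exactly this neighbour lookup into the storage layer: because DGGS cell identifiers encode adjacency, the update becomes a join on cell ids rather than a geometry computation.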


2019 ◽  
Vol 3 (2) ◽  
pp. 51
Author(s):  
Bing Gao ◽  
Suqin Dong

Human beings have entered the era of big data, and China has made remarkable achievements in information technology. In this situation, the libraries of higher vocational colleges face new opportunities and challenges, and the construction and management of libraries have been greatly affected. Traditional library management and operation modes have been unable to meet the needs of users. With the development and popularization of information technology, library management also needs continuous innovation and development. Some libraries in China have introduced information technology into their own management and construction. In this new situation, library managers should seriously consider how to grasp the historical opportunity and deal with future challenges more effectively against the background of the era of big data and information. This paper analyses the library management and construction mode of higher vocational colleges and discusses how to innovate the library management mode under the background of big data.


2017 ◽  
Vol 13 (02) ◽  
pp. 159-180 ◽  
Author(s):  
Mihai Horia Zaharia

Presented in this paper is a possible solution for speeding up the integration of various data into the big data mainstream. Data enrichment and the convergence of all possible sources are still at an early stage. As a result, existing techniques must be retooled to improve the integration of already existing databases, or of those specific to the Internet of Things, in order to use the advantages of big data to fulfil the final goal of creating the web of data. In this paper, semantic-web-specific solutions are used to design a system based on intelligent agents. The system addresses problems specific to automating database migration, with the final goal of creating a common ontology over various data repositories or producers in order to integrate them into systems based on a big data architecture.
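The core of the common-ontology idea is schema alignment: each repository's field names are mapped onto a shared vocabulary so that records from different sources become directly comparable. The sketch below is a minimal illustration of that mapping step; the field names and target vocabulary are invented examples, not terms from the paper.

```python
# Minimal sketch of the schema-alignment idea behind a common ontology:
# map heterogeneous source fields onto a shared vocabulary so records
# from several repositories can be merged. All names here are invented.
MAPPINGS = {
    "repo_a": {"title": "name", "lat": "latitude", "lon": "longitude"},
    "repo_b": {"label": "name", "y": "latitude", "x": "longitude"},
}

def to_common(source, record):
    """Rewrite one record's keys into the common ontology's terms,
    dropping any field the mapping does not cover."""
    mapping = MAPPINGS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = to_common("repo_a", {"title": "Buoy 7", "lat": 43.4, "lon": -3.8})
b = to_common("repo_b", {"label": "Buoy 7", "y": 43.4, "x": -3.8})
print(a == b)  # records from different schemas now compare directly
```

In the agent-based design described above, each agent would own one such mapping and negotiate extensions to the shared vocabulary when it meets a field it cannot align.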

