STANDARDS-BASED SERVICES FOR BIG SPATIO-TEMPORAL DATA

Author(s): P. Baumann, V. Merticariu, A. Dumitru, D. Misev

With the unprecedented availability of continuously updated measured and generated data there is an immense potential for gaining new and timely insights – yet this value is not fully leveraged today. The quest is on for high-level service interfaces for dissecting datasets and rejoining them with other datasets – ultimately, allowing users to ask "any question, anytime, on any size" and to "build their own product on the go".

With OGC Coverages, a concrete, interoperable data model has been established which unifies n-D spatio-temporal regular and irregular grids, point clouds, and meshes. The Web Coverage Service (WCS) suite provides versatile, streamlined coverage functionality ranging from simple access to flexible spatio-temporal analytics. The flexibility and scalability of the WCS suite have been demonstrated in practice through massive services run by large-scale data centers. We present the current status of the OGC Coverage data and service models, contrast them with related work, and describe a scalable implementation based on the rasdaman array engine.
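
To illustrate the kind of standardized access described above, the following is a minimal sketch of a WCS 2.0.1 GetCoverage request with spatio-temporal subsetting, issued over plain HTTP from Python. The endpoint URL, coverage identifier, and axis names are hypothetical placeholders, not values taken from the paper.

```python
# Minimal sketch of a WCS 2.0.1 GetCoverage request with subsetting.
# Endpoint, coverage id, and axis names below are hypothetical placeholders.
import requests

WCS_ENDPOINT = "https://example.org/rasdaman/ows"    # hypothetical service URL

params = [
    ("SERVICE", "WCS"),
    ("VERSION", "2.0.1"),
    ("REQUEST", "GetCoverage"),
    ("COVERAGEID", "AvgLandTemp"),                   # hypothetical coverage id
    ("SUBSET", "Lat(40,50)"),                        # trim the latitude axis
    ("SUBSET", "Long(10,20)"),                       # trim the longitude axis
    ("SUBSET", 'ansi("2012-07-01")'),                # slice the time axis
    ("FORMAT", "image/tiff"),
]

response = requests.get(WCS_ENDPOINT, params=params, timeout=60)
response.raise_for_status()

with open("subset.tiff", "wb") as f:
    f.write(response.content)                        # encoded coverage subset
```

Richer server-side analytics of the kind mentioned in the abstract would typically be expressed in WCPS, the query-language extension of the WCS suite, and submitted to the same endpoint.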


2016
Author(s): John W. Williams, Simon Goring, Eric Grimm, Jason McLachlan

2021, Vol 4 (1), pp. 3-14
Author(s): Zdzislaw Polkowski, Sambit Kumar Mishra

In a general scenario, the approaches linked to innovation with large-scale data seem ordinary; the informational measures of such aspects can differ between applications, since applications are associated with different attributes that may demand high data volumes and high data quality. Accordingly, the challenges can be identified as assuring high-level protection and data transformation with enhanced operational quality. For large-scale data applications running on different virtual servers, the information can be measured by enlisting sources linked to sensors that are networked and provisioned by analysts. It is therefore essential to track the relevance of, and the issues with, such enormous volumes of information. When aiming at knowledge extraction, applying large-scale data may involve analytics to predict future events; a soft computing approach can be implemented in such cases to carry out the analysis. During the analysis of large-scale data, it is essential to abide by the rules associated with security measures, because preserving sensitive information is the biggest challenge when dealing with large-scale data. As high risk is observed in such data analysis, security measures can be enhanced by provisioning authentication and authorization. Indeed, the major obstacles to these techniques during data analysis concern security and scalability. Methods that are well integrated with the data application have a better impact on scalability; when data scales faster than the processor can handle, additional processing elements must be embedded in the system. It is therefore necessary to address the challenges linked to processors, correlating process visualization with scalability.


2019, Vol 8 (9), pp. 389
Author(s): Xinliang Liu, Yi Wang, Yong Li, Jinshui Wu

The integrated recognition of spatio-temporal characteristics (e.g., speed, interaction with surrounding areas, and driving forces) of urbanization facilitates regional comprehensive development. In this study, a large-scale data-driven approach was developed for exploring the township urbanization process. The approach integrated logistic models to quantify urbanization speed, partial triadic analysis to reveal dynamic relationships between rural population migration and urbanization, and random forest analysis to identify the response of urbanization to spatial driving forces. A typical subtropical town was chosen to verify the approach by quantifying the spatio-temporal process of township urbanization from 1933 to 2012. The results showed that (i) urbanization speed was well reflected by the time-course changes in the area of urban cores fitted by a four-parameter logistic equation (R2 = 0.95–1.00, p < 0.001), and the relatively fast and steady development periods were also successfully identified; (ii) the spatio-temporal sprawl of urban cores and their interactions with the surrounding rural residential areas were well revealed, implying that the town experienced different historical aggregating and splitting trajectories; and (iii) the key drivers (township merger, elevation and distance to roads, as well as population migration) of the spatial sprawl of urban cores were identified. Our findings showed that such a comprehensive approach is powerful for quantifying the spatio-temporal characteristics of the urbanization process at the township level, and emphasized the importance of applying long-term historical data when researching the urbanization process.
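
As a concrete illustration of the logistic-model step mentioned above, the sketch below fits a four-parameter logistic curve to a time series of urban-core area. The parameterisation, variable names, and synthetic observations are illustrative assumptions, not the study's actual data or code.

```python
# Sketch: fit a four-parameter logistic curve to urban-core area over time.
# Parameterisation and observations are illustrative, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(t, lower, upper, rate, t_mid):
    """Four-parameter logistic: area grows from `lower` to `upper`,
    with the fastest growth around year `t_mid`."""
    return lower + (upper - lower) / (1.0 + np.exp(-rate * (t - t_mid)))

# Hypothetical observations: year vs. urban-core area (km^2)
years = np.array([1933, 1950, 1970, 1985, 1995, 2005, 2012], dtype=float)
area = np.array([0.4, 0.6, 1.1, 2.3, 3.6, 4.4, 4.7])

p0 = [area.min(), area.max(), 0.1, 1985.0]           # rough starting guesses
params, _ = curve_fit(logistic4, years, area, p0=p0, maxfev=10000)

pred = logistic4(years, *params)
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print("fitted parameters (lower, upper, rate, t_mid):", params)
print("R^2 =", round(r2, 3))
```

The fitted rate and midpoint of such a curve are what allow the relatively fast and steady development periods to be distinguished.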


Concussion, 2020, Vol 5 (3), pp. CNC76
Author(s): James Mooney, Mitchell Self, Karim ReFaey, Galal Elsayed, Gustavo Chagoya, ...

Sports-related concussion has been examined extensively in collision sports such as football and hockey. Historically lower-risk contact sports such as soccer, however, have only recently garnered increased attention. Here, we review articles examining the epidemiology, injury mechanisms, sex differences, and the neurochemical, neurostructural, and neurocognitive changes associated with soccer-related concussion. From 436 titles and abstracts, 121 full texts were reviewed, with a total of 64 articles identified for inclusion. Concussion rates are higher during competitions and in female athletes, while purposeful heading rarely results in concussion. Given the lack of high-level studies examining sports-related concussion in soccer, clinicians and scientists must focus research efforts on large-scale data gathering and the development of improved technologies to better detect and understand concussion.


2020, Vol 2020, pp. 1-13
Author(s): Tuozhong Yao, Wenfeng Wang, Yuhong Gu, Qiuguo Zhu

Multiview active learning (MVAL) is a technique which can shrink the version space far more than traditional active learning, and it has great potential for large-scale data analysis. This paper investigates MVAL-based scene classification, which helps computers accurately understand diverse and complex environments at a macroscopic level and is widely used in fields such as image retrieval and autonomous driving. The main contribution of this paper is that different high-level image semantics are used in place of traditional low-level features to generate more independent and diverse hypotheses in MVAL. First, our algorithm uses different object detectors to obtain local object responses in the scenes. Furthermore, we design a cascaded online LDA model for mining the thematic semantics of an image. The experimental results demonstrate that our proposed theme modeling strategy fits large-scale data learning, and that our MVAL algorithm with both high-level semantic views achieves a significant improvement in scene classification over traditional active learning-based algorithms.
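
A minimal sketch of the disagreement-based multiview active-learning loop implied above: two classifiers are trained on two different feature views, and the unlabeled sample on which the views disagree most is queried next. The views, model, and data here are illustrative stand-ins (random features and logistic regression); the paper's actual views are object-detector responses and a cascaded online LDA theme model.

```python
# Sketch of disagreement-based multiview active learning (co-testing style).
# Views, model, and data are illustrative stand-ins for the paper's
# object-detector and theme-semantic views.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d1, d2 = 500, 20, 15
X_view1 = rng.normal(size=(n, d1))        # stand-in for "object response" view
X_view2 = rng.normal(size=(n, d2))        # stand-in for "theme semantic" view
y = (X_view1[:, 0] + X_view2[:, 0] > 0).astype(int)

labeled = list(range(20))                 # small initial labeled pool
unlabeled = list(range(20, n))

for _ in range(10):                       # active-learning rounds
    clf1 = LogisticRegression(max_iter=1000).fit(X_view1[labeled], y[labeled])
    clf2 = LogisticRegression(max_iter=1000).fit(X_view2[labeled], y[labeled])

    # Query the unlabeled sample where the two views disagree most strongly.
    p1 = clf1.predict_proba(X_view1[unlabeled])[:, 1]
    p2 = clf2.predict_proba(X_view2[unlabeled])[:, 1]
    query = unlabeled[int(np.argmax(np.abs(p1 - p2)))]

    labeled.append(query)                 # oracle provides the true label
    unlabeled.remove(query)

print("labeled pool size after querying:", len(labeled))
```

Querying where independent views disagree is what reduces the version space faster than single-view uncertainty sampling, which is the property the abstract attributes to MVAL.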


2009, Vol 28 (11), pp. 2737-2740
Author(s): Xiao ZHANG, Shan WANG, Na LIAN
