URBAN GEO BIG DATA

Author(s):  
M. A. Brovelli ◽  
P. Boccardo ◽  
G. Bordogna ◽  
A. Pepe ◽  
M. Crespi ◽  
...  

Abstract. The paper presents Urban GEO BIG DATA, a collaborative, acentric, and distributed Free and Open Source (FOS) platform consisting of several components: local data nodes for the web deployment of data and related services; a visualization node for data fruition; a catalog node for data discovery; a CityGML modeler; data-rich viewers based on virtual globes; and an INSPIRE metadata management system enriched with quality indicators for each dataset.

Three use cases in five Italian cities (Turin, Milan, Padua, Rome, and Naples) are examined: 1) urban mobility; 2) land cover and soil consumption at different resolutions; 3) displacement time series. Besides the case studies, the architecture of the system and its components is presented.

Author(s):  
Petrus Mursanto ◽  
Ari Wibisono ◽  
Wendy D.W. T. Bayu ◽  
Valian Fil Ahli ◽  
May Iffah Rizki ◽  
...  
Keyword(s):  
Big Data ◽  

2017 ◽  
Vol 9 (11) ◽  
pp. 1095 ◽  
Author(s):  
Emmihenna Jääskeläinen ◽  
Terhikki Manninen ◽  
Johanna Tamminen ◽  
Marko Laine

2017 ◽  
Vol 10 (2) ◽  
pp. 32 ◽  
Author(s):  
Dengqiu Li ◽  
Dengsheng Lu ◽  
Ming Wu ◽  
Xuexin Shao ◽  
Jinhong Wei

2021 ◽  
Vol 13 (9) ◽  
pp. 1743
Author(s):  
Daniel Paluba ◽  
Josef Laštovička ◽  
Antonios Mouratidis ◽  
Přemysl Štych

This study deals with a local incidence angle correction method, i.e., the land cover-specific local incidence angle correction (LC-SLIAC), based on the linear relationship between the backscatter values and the local incidence angle (LIA) for a given land cover type in the monitored area. Using the combination of CORINE Land Cover and Hansen et al.’s Global Forest Change databases, a wide range of different LIAs for a specific forest type can be generated for each scene. The algorithm was developed and tested in the cloud-based platform Google Earth Engine (GEE) using Sentinel-1 open access data, the Shuttle Radar Topography Mission (SRTM) digital elevation model, and the CORINE Land Cover and Hansen et al.’s Global Forest Change databases. The method was created primarily for time-series analyses of forests in mountainous areas. LC-SLIAC was tested in 16 study areas over several protected areas in Central Europe. The results after correction by LC-SLIAC showed a reduction of the variance and range of backscatter values. A statistically significant reduction in variance (of more than 40%) was achieved in areas with an LIA range >50° and an LIA interquartile range (IQR) >12°, while in areas with a low LIA range and LIA IQR, the decrease in variance was very low and statistically not significant. Six case studies with different LIA ranges were further analyzed in pre- and post-correction time series. Time series after the correction showed a reduced fluctuation of backscatter values caused by different LIAs in each acquisition path. This reduction was statistically significant (with up to 95% reduction of variance) in areas with a difference in LIA greater than or equal to 27°. LC-SLIAC is freely available on GitHub and GEE, making the method accessible to the wide remote sensing community.
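The core of the correction described above is a linear fit of backscatter against LIA for one land cover class, followed by normalization to a reference angle. The following is a minimal sketch of that idea only, not the authors' GEE implementation; the function name, the reference angle of 38°, and the synthetic arrays are assumptions for illustration.

```python
import numpy as np

def lia_correct(backscatter_db, lia_deg, reference_lia=38.0):
    """Normalize backscatter (dB) to a reference local incidence angle.

    Fits a linear model backscatter = slope * LIA + intercept over pixels
    of a single land cover class, then removes the LIA-dependent part:
        sigma0_corr = sigma0 - slope * (LIA - reference_lia)
    """
    slope, _intercept = np.polyfit(lia_deg, backscatter_db, 1)
    return backscatter_db - slope * (lia_deg - reference_lia)
```

On pixels whose backscatter varies only through the linear LIA dependence, the corrected values collapse to the value at the reference angle, which is the variance reduction the abstract reports.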


Author(s):  
А.И. Сотников

The speed at which humanity accumulates information daily, and the unpredictability of tomorrow, show that traditional technologies are no longer sufficient for forecasting big data time series and that new processing methods are needed. This raises the question: what methods can currently be used to obtain reliable forecasts of big data time series?


2019 ◽  
Vol 36 (1) ◽  
pp. 25-39 ◽  
Author(s):  
David Egan ◽  
Natalie Claire Haynes

Purpose: The purpose of this paper is to investigate the perceptions that managers have of the value and reliability of using big data to make hotel revenue management and pricing decisions.

Design/methodology/approach: A three-stage iterative thematic analysis technique, based on the approaches of Braun and Clarke (2006) and Nowell et al. (2017) and using different research instruments to collect and analyse qualitative data at each stage, was used to develop an explanatory framework.

Findings: Whilst big data-driven automated revenue systems are technically capable of making pricing and inventory decisions without user input, the findings here show that in reality managers still interact with every stage of the revenue and pricing process, from data collection to the implementation of price changes. They believe that their personal insights are as valid as big data in increasing the reliability of the decision-making process. This is driven primarily by a lack of trust on the part of managers in the ability of the big data systems to understand and interpret local market and customer dynamics.

Practical implications: The less a manager believes in the ability of those systems to interpret these data, the more they perceive gut instinct to increase the reliability of their decision making and the less they conduct an analysis of the statistical data provided by the systems. This provides a clear message that automated revenue systems appear to need to be flexible enough for managers to import the local data, information and knowledge that they believe leads to revenue growth.

Originality/value: There is currently little research explicitly investigating the role of big data in decision making within hotel revenue management, and even less focussing on decision making at property level and on managers' perceptions of the value of big data in increasing the reliability of revenue and pricing decision making.


2017 ◽  
Vol 13 (7) ◽  
pp. 155014771772181 ◽  
Author(s):  
Seok-Woo Jang ◽  
Gye-Young Kim

This article proposes an intelligent monitoring system for semiconductor manufacturing equipment, which determines spec-in or spec-out for a wafer in process using Internet of Things-based big data analysis. The proposed system consists of three phases: initialization, learning, and real-time prediction. The initialization sets the weights and the effective steps for all parameters of the equipment to be monitored. The learning phase performs a clustering to assign similar patterns to the same class. The patterns consist of multiple time series produced by semiconductor manufacturing equipment and an after clean inspection measured by the corresponding tester. We modify the Linde, Buzo, and Gray algorithm for classifying the time-series patterns; the modified algorithm outputs a reference model for every cluster. The prediction phase compares a time series entered in real time with the reference models using statistical dynamic time warping to find the best matched pattern, and then calculates a predicted after clean inspection by combining the measured after clean inspection, the dissimilarity, and the weights. Finally, it determines spec-in or spec-out for the wafer. We present experimental results that show how the proposed system performs on data acquired from semiconductor etching equipment.
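The prediction phase described above matches an incoming time series against per-cluster reference models by warping distance. The paper uses a statistical DTW variant and reference models from a modified Linde-Buzo-Gray algorithm; the sketch below instead uses classic DTW to illustrate only the matching step, and the function names and example series are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series,
    via the standard O(n*m) dynamic programming recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def best_reference(series, references):
    """Return the index of the reference model (cluster representative)
    with the smallest DTW distance to the input series, plus that distance."""
    dists = [dtw_distance(series, r) for r in references]
    k = int(np.argmin(dists))
    return k, dists[k]
```

Once the best-matched cluster is found, its dissimilarity can feed into the weighted after clean inspection prediction the abstract describes.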

