automatic modeling
Recently Published Documents


TOTAL DOCUMENTS: 196 (FIVE YEARS: 29)

H-INDEX: 16 (FIVE YEARS: 3)

2021 ◽  
Vol 13 (17) ◽  
pp. 3512
Author(s):  
Fei Wang ◽  
Zhendong Liu ◽  
Hongchun Zhu ◽  
Pengda Wu

Common methods of filling open holes first reaggregate them into closed holes and then use a closed-hole filling method to repair them. These methods suffer from long calculation times, high memory consumption, and difficulties in filling large-area open holes. Hence, this paper proposes a parallel method for open hole filling in large-scale 3D automatic modeling. First, open holes are automatically identified and divided into two categories (internal and external). Second, the hierarchical relationships between the open holes are calculated in accordance with the adjacency relationships between partitioning cells, and the open holes are filled through propagation from the outer level to the inner level with topological closure and height projection transformation. Finally, the common boundaries between adjacent open holes are smoothed based on the Laplacian algorithm to achieve natural transitions between partitioning cells. Oblique photography data from an area of 28 km² in Dongying, Shandong, were used for validation. The experimental results reveal the following: (i) Compared to the Han method, the proposed approach has a 12.4% higher filling success rate for internal open holes and increases the filling success rate for external open holes from 0% to 100%. (ii) Concerning filling efficiency, the Han method can achieve hole filling only in a small area, whereas with the proposed method, the size of the reconstruction area is not restricted; time and memory consumption are improved by factors of approximately 4–5 and 7–21, respectively. (iii) In terms of filling accuracy, the two methods perform essentially the same.
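The boundary-smoothing step mentioned in the abstract can be illustrated with a generic uniform Laplacian filter. This is a minimal sketch, not the authors' implementation; the function name, parameters, and neighbor representation are all hypothetical:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Uniform Laplacian smoothing: move each vertex toward the
    centroid of its neighbors (a simultaneous/Jacobi update).

    vertices  : (n, 3) array-like of vertex positions
    neighbors : list of index lists; neighbors[i] = vertices adjacent to i
    lam       : step size in (0, 1]; smaller values smooth more gently
    """
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        # Compute all centroids from the current positions, then update
        # every vertex at once so the result is order-independent.
        centroids = np.array([v[nb].mean(axis=0) for nb in neighbors])
        v += lam * (centroids - v)
    return v
```

In practice, vertices away from the common boundary between partitioning cells would be held fixed so that smoothing only blends the seam; the sketch omits that mask for brevity.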


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 436
Author(s):  
Dietmar Bauer ◽  
Rainer Buschmeier

This paper investigates the asymptotic properties of estimators obtained from the so-called CVA (canonical variate analysis) subspace algorithm proposed by Larimore (1983) in the case where the data are generated by a minimal state space system containing unit roots at the seasonal frequencies, such that the yearly difference is a stationary vector autoregressive moving average (VARMA) process. The empirically most important special cases of such data generating processes are the I(1) case and the case of seasonally integrated quarterly or monthly data. Increasingly, however, datasets with a higher sampling rate, such as hourly, daily or weekly observations, are also available, for example for electricity consumption. In these cases the vector error correction (VECM) representation of the vector autoregressive (VAR) model is not very helpful, as it demands the parameterization of one matrix per seasonal unit root: even for weekly series this amounts to 52 matrices under yearly periodicity, and for hourly data it is prohibitive. For such processes, estimation by quasi-maximum likelihood is extremely hard, since the Gaussian likelihood typically has many local maxima while the parameter space is often high-dimensional. Additionally, estimating a large number of models to test hypotheses on the cointegrating rank at the various unit roots becomes practically impossible, for example for weekly data. This paper shows that in this setting CVA provides consistent estimators of the transfer function generating the data, making it a valuable initial estimator for subsequent quasi-likelihood maximization. Furthermore, the paper proposes new tests for the cointegrating rank at the seasonal frequencies that are easy to compute and numerically robust, making the method suitable for automatic modeling.
A simulation study demonstrates that, for processes of moderate to large dimension, the new tests may outperform traditional tests based on long VAR approximations at sample sizes typically found in quarterly macroeconomic data. Further simulations show that the unit root tests are robust both to different innovation distributions and to GARCH-type conditional heteroskedasticity. Moreover, an application to Kaggle data on hourly electricity consumption by different American providers demonstrates the usefulness of the method in practice. The CVA algorithm therefore provides a very useful initial guess for subsequent quasi-maximum likelihood estimation and also delivers relevant information on the cointegrating ranks at the different unit root frequencies. It is thus a useful tool, for example (but not only) in automatic modeling applications where a large number of time series involving a substantial number of variables must be modelled in parallel.
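The core subspace step behind CVA can be sketched as follows: stack past and future observations into Hankel matrices, compute the canonical correlations between them, and take the leading canonical variates of the past as the state estimate, from which the system matrices follow by least squares. This is a minimal illustration of that generic idea for a zero-mean series, not the authors' seasonal-unit-root procedure; all names and the regularization constant are illustrative:

```python
import numpy as np

def cva_subspace(y, p=2, f=2, n=1):
    """Sketch of a CVA-style subspace estimate for a zero-mean series y.

    p, f : number of past / future lags stacked
    n    : chosen state dimension (system order)
    Returns (A, C, s): state transition A, observation matrix C, and the
    canonical correlations s (whose decay informs order/rank decisions).
    """
    y = np.atleast_2d(np.asarray(y, dtype=float))
    if y.shape[0] < y.shape[1]:
        y = y.T                       # ensure shape (T, dim)
    T, d = y.shape
    N = T - p - f + 1
    # Stacked past (p*d x N) and future (f*d x N) Hankel matrices
    Yp = np.vstack([y[p - 1 - i : p - 1 - i + N].T for i in range(p)])
    Yf = np.vstack([y[p + i : p + i + N].T for i in range(f)])
    # Sample covariances
    Spp = Yp @ Yp.T / N
    Sff = Yf @ Yf.T / N
    Sfp = Yf @ Yp.T / N
    # Whitened cross-covariance; its singular values are the canonical correlations
    Lp = np.linalg.cholesky(Spp + 1e-10 * np.eye(len(Spp)))
    Lf = np.linalg.cholesky(Sff + 1e-10 * np.eye(len(Sff)))
    U, s, Vt = np.linalg.svd(np.linalg.solve(Lf, Sfp) @ np.linalg.inv(Lp).T)
    # State estimate: first n canonical variates of the past
    X = (Vt[:n] @ np.linalg.inv(Lp)) @ Yp        # (n, N)
    # Least-squares fits: y_t = C x_t + e_t and x_{t+1} = A x_t + w_t
    C = Yf[:d] @ X.T @ np.linalg.inv(X @ X.T)
    A = X[:, 1:] @ X[:, :-1].T @ np.linalg.inv(X[:, :-1] @ X[:, :-1].T)
    return A, C, s
```

Because the estimate is obtained from one SVD and two regressions rather than by maximizing a likelihood, it avoids the local-maxima problem described above, which is why it serves well as an initial estimator.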


2021 ◽  
Author(s):  
Ivan Vorobevskii ◽  
Rico Kronenberg ◽  
Christian Bernhofer

Abstract The recently presented Global BROOK90 automatic modeling framework combines a non-calibrated lumped hydrological model with ERA5 reanalysis data as the main driver, together with global elevation, land cover and soil datasets. The focus is to simulate the water fluxes within the soil–water–plant system of a single plot or of a small catchment, especially in data-scarce regions. Comparison to runoff is an obvious choice for validating this approach. We therefore selected for validation 190 small catchments (median size 64 km²) with discharge observations available within the period 1979–2020 and located all over the globe. They represent a wide range of relief, land cover and soil types within all climate zones. Simulation performance was analyzed with standard skill-score criteria: Nash–Sutcliffe Efficiency (NSE), Kling–Gupta Efficiency (KGE), Kling–Gupta Efficiency Skill Score (KGESS) and Mean Absolute Error (MAE). Overall, the framework performed well (better than mean-flow prediction) in more than 75% of the cases (KGESS > 0), and significantly better on a monthly than on a daily scale. Furthermore, Global BROOK90 was found to outperform the GloFAS-ERA5 discharge reanalysis. Additionally, cluster analysis revealed that some catchment characteristics have a significant influence on framework performance.
HIGHLIGHTS
The study evaluates the runoff component of the Global BROOK90 automatic framework for hydrological modeling.
Discharge observations from 190 small catchments located all over the globe were used.
Satisfactory results were achieved for more than 75% of the catchments.
KGE decomposition and the influence of catchment characteristics on framework performance are discussed.
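The skill scores used for this kind of validation are standard and easy to compute from paired observed/simulated series. A minimal sketch (function names are illustrative; the KGESS benchmark uses the commonly adopted mean-flow KGE value of 1 − √2):

```python
import numpy as np

def nse(obs, sim):
    """Nash–Sutcliffe Efficiency: 1 means perfect, 0 matches the mean flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling–Gupta Efficiency from correlation, variability and bias ratios."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]          # linear correlation
    alpha = sim.std() / obs.std()            # variability ratio
    beta = sim.mean() / obs.mean()           # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def kgess(obs, sim):
    """KGE Skill Score relative to the mean-flow benchmark (KGE = 1 - sqrt(2)),
    so KGESS > 0 means the simulation beats predicting the mean flow."""
    bench = 1.0 - np.sqrt(2.0)
    return (kge(obs, sim) - bench) / (1.0 - bench)
```

With these definitions, "performed well (KGESS > 0)" in the abstract translates directly into the simulation having a KGE above roughly −0.41.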


Author(s):  
Dongsheng Li ◽  
Jiepeng Liu ◽  
Liang Feng ◽  
Yang Zhou ◽  
Hongtuo Qi ◽  
...  
Keyword(s):  

2020 ◽  
Vol 221 ◽  
pp. 111030
Author(s):  
Júlio Tenório Pimentel ◽  
Adriano Dayvson Marques Ferreira ◽  
Renato de Siqueira Motta ◽  
Marco Antonio Figueiroa da Silva Cabral ◽  
Silvana Maria Bastos Afonso ◽  
...  

2020 ◽  
Vol 1631 ◽  
pp. 012011
Author(s):  
Lu Han ◽  
Xianjun Shi ◽  
Taoyu Wang
