Reliabilitas Prediksi Curah Hujan Dasarian Pada Kejadian Curah Hujan Ekstrim Pemicu Banjir 26 Oktober 2020 di Kebumen: Model Statistik (HyBMG) versus Model Dinamik (ECMWF) [Reliability of Ten-Day Rainfall Prediction for the Extreme Rainfall Event Triggering the 26 October 2020 Flood in Kebumen: Statistical Model (HyBMG) versus Dynamical Model (ECMWF)]

2021 ◽  
Vol 4 (2) ◽  
pp. 83-100
Author(s):  
Zauyik Nana Ruslana ◽  
Restu Tresnawati ◽  
Rosyidah Rosyidah ◽  
Iis Widya Harmoko ◽  
Siswanto Siswanto

The flood event in Kebumen Regency on 26 October 2020 was triggered by very heavy to extreme rainfall that lasted from Sunday afternoon (25 October 2020) until Monday (26 October 2020). Several (cooperative) rain observation posts recorded rainfall >150 mm/day (extreme category) within the rainfall period of 24-26 October 2020. Analysis of the cumulative rainfall for the third dasarian (ten-day period) of October 2020 in Kebumen Regency shows rainfall >300 mm/dasarian (very high criterion). Previously, for this dasarian, the BMKG Semarang Climatological Station had forecast most of Kebumen Regency to be in the moderate criterion, with rainfall ranging between 101-150 mm/dasarian. According to reports received by the Kebumen BPBD, at least 25 villages in 7 sub-districts were inundated because several rivers in Kebumen overflowed. This paper aims to test the reliability of the operational ten-day rainfall forecast by comparing forecast output with observational data under these extreme conditions. A sensitivity test of the univariate HyBMG model and ECMWF was carried out using visual assessment of spatial agreement, simple correlation, and RMSE. The analysis shows that the ECMWF forecast output has the smallest RMSE, but with a negative correlation. A strong correlation was obtained with the ANFIS method, with an RMSE of 556.5. It can be concluded that the ANFIS model output has a more reliable forecast sensitivity for the extreme rainfall event on Sunday (25 October 2020) in Kebumen Regency. The HyBMG method requires more input data series so that more information is accumulated, and a denser ECMWF output grid is expected to yield better predictions.
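For context on the verification metrics used above, a minimal sketch of computing RMSE and the simple (Pearson) correlation between forecast and observed dasarian rainfall totals; the arrays and values below are illustrative, not the paper's data:

```python
import numpy as np

def verify_forecast(forecast, observed):
    """RMSE and Pearson correlation for paired forecast/observation arrays."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    corr = np.corrcoef(forecast, observed)[0, 1]
    return rmse, corr

# Hypothetical station totals (mm/dasarian), chosen only to mimic a large
# forecast underestimation during an extreme event.
forecast = [120, 135, 101, 150, 110]
observed = [320, 410, 280, 515, 300]
rmse, corr = verify_forecast(forecast, observed)
print(f"RMSE = {rmse:.1f} mm, r = {corr:.2f}")
```

Read together, the two metrics are complementary: a comparatively small RMSE with a negative correlation, as reported for ECMWF here, signals forecast magnitudes that are close on average while the spatial pattern is misplaced.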

2021 ◽  
Author(s):  
Manuela Seitz ◽  
Mathis Bloßfeld ◽  
Matthias Glomsda ◽  
Detlef Angermann

The new ITRS realization, the ITRF2020, will be computed and released in 2021. Many institutions contributing to the international IAG services IGS, IVS, ILRS and IDS worked hard during the last months to finalize the ITRF2020 input data by mid-February 2021. The resulting data are series of SINEX files of daily or weekly global GNSS, VLBI, SLR and DORIS solutions. The ITRS Combination Centres (CC) are in charge of computing three ITRS realizations based on a combination of these input data. The three realizations can be seen as independent to some extent, since the combination strategies applied by the three CC partly differ considerably. This provides the opportunity for a cross-validation between the computed frames and ensures a high reliability of the final ITRF product. The ITRS CC will start in February 2021 with the analysis of the final input data series and their combination.

We will present first results of the analyses and computations performed at ITRS CC DGFI-TUM.
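The abstract does not detail how the computed frames are cross-validated; a common way to compare two terrestrial reference frame realizations is to estimate a seven-parameter Helmert (similarity) transformation over identical stations and inspect the parameters and residuals. A minimal linearized sketch, purely illustrative and not the CCs' combination software:

```python
import numpy as np

def helmert_7param(xyz_a, xyz_b):
    """Estimate a small-angle 7-parameter Helmert transform from frame A to B.

    xyz_a, xyz_b: (n, 3) arrays of Cartesian coordinates (m) of the same
    stations in the two frames. Returns [tx, ty, tz, scale, rx, ry, rz]
    (translations in m, scale dimensionless, rotations in rad) from a
    linearized least-squares fit.
    """
    xyz_a = np.asarray(xyz_a, dtype=float)
    d = (np.asarray(xyz_b, dtype=float) - xyz_a).ravel()
    rows = []
    for x, y, z in xyz_a:
        rows += [[1, 0, 0, x,  0,  z, -y],   # dX row
                 [0, 1, 0, y, -z,  0,  x],   # dY row
                 [0, 0, 1, z,  y, -x,  0]]   # dZ row
    A = np.array(rows, dtype=float)
    params, *_ = np.linalg.lstsq(A, d, rcond=None)
    return params
```

Near-zero parameters and small post-fit residuals indicate that two realizations agree; systematic rotations or a scale offset point to differences between the combination strategies.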


2020 ◽  

The objective of precision beekeeping is to minimize resource consumption and maximize the productivity of bee colonies. This is achieved by detecting and predicting beehive states through monitoring of apiary- and beehive-related parameters such as temperature, weight, humidity, noise, vibration, air pollution, wind, and precipitation. These parameters are collected as raw input data by multiple different sensory devices, are often imperfect, and require correlation between time data series. Currently, most research focuses on monitoring and processing each parameter separately, whereas combining multiple parameters produces more sophisticated information. Raw input data sets that complement one another can be pre-processed with data fusion methods to build an understanding of the overall research subject. There are multiple data fusion methods and classification models, distinguished by raw input data type or device usage; data fusion applied to sensor data is called sensor fusion. This paper analyses existing data fusion methods and processes in order to identify data fusion challenges and correlate them with precision beekeeping objectives. The research was conducted over a period of 5 months, starting in October 2019, and was based on analysis and synthesis of the scientific literature. It was concluded that the need to apply data fusion in precision beekeeping is determined by the global research objective, whereas the input data introduces the main challenges of data and sensor fusion, as its attributes correlate with the potential result.
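As an illustration of feature-level sensor fusion on beehive data, a minimal sketch that aligns two hypothetical sensor streams of different sampling rates on a common time grid and derives one fused indicator; the sensor names, thresholds, and the swarming heuristic are assumptions, not results from the surveyed literature:

```python
import numpy as np
import pandas as pd

# Hypothetical raw streams at different sampling rates (synthetic values).
idx_t = pd.date_range("2019-10-01", periods=48, freq="30min")
temp = pd.Series(34 + np.random.default_rng(0).normal(0, 0.3, 48), index=idx_t)   # hive temp, degC
idx_w = pd.date_range("2019-10-01", periods=24, freq="h")
weight = pd.Series(40 + np.cumsum(np.random.default_rng(1).normal(0, 0.02, 24)),
                   index=idx_w)                                                    # hive weight, kg

# Feature-level fusion: resample both streams to a common hourly grid and join.
fused = pd.concat({"temp_mean": temp.resample("h").mean(),
                   "weight": weight.resample("h").mean()}, axis=1).dropna()

# One fused indicator: flag hours where temperature and weight drop together,
# a pattern that (hypothetically) might accompany swarming.
fused["alert"] = (fused["temp_mean"].diff() < -0.2) & (fused["weight"].diff() < -0.02)
print(fused.head())
```

The point of the fusion step is that the joint condition carries information neither stream provides alone, which is exactly the gap the paper identifies in parameter-by-parameter monitoring.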


2012 ◽  
Vol 1 (2) ◽  
pp. 198
Author(s):  
Panchal Amitkumar Mansukhbhai ◽  
Dr. Jayeshkumar Madhubhai Patel

The stock market is a complex and dynamic system with noisy, non-stationary and chaotic data series. Prediction of a financial market is especially challenging due to the chaos and uncertainty of the system. Soft computing techniques are progressively gaining presence in the financial world. Compared to traditional techniques for predicting market direction, soft computing offers advantages in accuracy and speed. However, input data selection is a major issue in soft computing. The aim of this paper is to explain the ongoing research contribution of soft computing to solving complex problems such as stock market direction prediction. This study synthesizes five reference papers and explains how soft computing is gaining popularity in the field of financial markets. The papers were selected based on various models which process different input parameters for predicting the direction of the stock market.
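As a toy example of the kind of soft-computing setup such papers describe, a small neural network classifying next-day direction from lagged returns; the data are synthetic and the architecture is an arbitrary illustration, not a model from the five reference papers:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 600)))  # synthetic price series
returns = np.diff(np.log(prices))

# Features: the last 5 daily log-returns; label: direction of the next day.
X = np.lib.stride_tricks.sliding_window_view(returns[:-1], 5)
y = (returns[5:] > 0).astype(int)

# No shuffling: time order must be preserved when splitting financial data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.25)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"directional accuracy: {clf.score(X_te, y_te):.2f}")  # ~0.5 on random-walk data
```

On purely random-walk data the accuracy hovers near chance, which is why the choice of input parameters, the issue the paper highlights, dominates model choice in practice.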


2021 ◽  
Author(s):  
Carles Beneyto ◽  
José Ángel Aranda ◽  
Félix Francés

Stochastic Weather Generators (WG) have been extensively used in recent years for hydrologic modeling, among other applications. Compared to traditional approaches, the main advantage of WGs is that they can produce synthetic continuous time series of weather data of unlimited length while preserving their spatiotemporal distribution. Synthetic simulations are based on the statistical characteristics of the observed weather and thus rely upon the length and spatial distribution of the input data series. In most cases, and especially in arid/semiarid regions, these are scarce, which makes it difficult for WGs to obtain reliable quantile estimates, particularly those associated with low-frequency events. The present study explores the importance of the input weather data length for the performance of WGs, focusing on the adequate estimation of the higher quantiles and quantifying their uncertainty.

An experimental case study consisting of nine rain gauges from the Spain02-v5 network at 0.11° resolution, covering an approximate area of 180 km², was implemented. The WG used for the experiment was GWEX, which includes a three-parameter (σ, κ, and ξ) cumulative distribution function (E-GPD) to model the precipitation amounts, with the shape parameter ξ directly governing the upper tail of the distribution. A fictitious climate scenario of 15,000 years was simulated, fixing ξ to 0.11. From this scenario, 50 realizations of 5,000 years with different sample lengths (i.e. 30, 60, 90, 120, 150, 200, 300 years) were simulated for four particular cases: (1) leaving ξ at its default value (i.e. 0.05); (2) estimating ξ from the observations; (3) calibrating ξ with the T = 100 years quantile from the 15,000 years; and (4) fixing ξ to the fictitious scenario value. The relative root mean square error (RRMSE) and coefficient of variation (CV) were calculated for each set of realizations and compared with those obtained from the fictitious climate scenario.

Preliminary results showed a clear reduction in both CV and RRMSE as the sample length increases for the four particular cases, this reduction being more evident for the higher-order quantiles and as we move from case (1) to (4). Furthermore, no significant improvement in the higher quantile estimates was observed between the 200-year and 300-year samples, suggesting that there is a sample length threshold beyond which the estimates do not improve. Finally, even though a clear improvement in all estimates was observed when increasing the sample length, a systematic underestimation of the higher quantiles remained in all cases, which highlights the importance of seeking extra sources of information (e.g. regional max. Pd. studies) for a better parameterization of the WG, especially for arid/semiarid climates.
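A minimal sketch of the experiment's core mechanics, assuming a common E-GPD form with CDF F(x) = [1 − (1 + ξx/σ)^(−1/ξ)]^κ sampled by inverting the CDF; the parameter values are illustrative rather than the study's fit, and the empirical quantile estimator stands in for the full GWEX calibration:

```python
import numpy as np

def egpd_ppf(u, sigma, kappa, xi):
    """Inverse CDF of an E-GPD with F(x) = [1 - (1 + xi*x/sigma)**(-1/xi)]**kappa."""
    return (sigma / xi) * ((1.0 - u ** (1.0 / kappa)) ** (-xi) - 1.0)

rng = np.random.default_rng(42)
sigma, kappa, xi_true = 8.0, 0.8, 0.11   # illustrative parameters only
p100 = 1.0 - 1.0 / (100 * 365.25)        # daily exceedance level of the T = 100 yr quantile

# "True" quantile, directly from the analytic inverse CDF.
q_true = egpd_ppf(p100, sigma, kappa, xi_true)

# RRMSE of the same quantile estimated empirically from many short samples;
# with only 30 years of daily data the estimate sits near the sample maximum
# and systematically underestimates the tail.
n_real, sample_years = 50, 30
q_hat = np.empty(n_real)
for i in range(n_real):
    sample = egpd_ppf(rng.uniform(size=int(sample_years * 365.25)), sigma, kappa, xi_true)
    q_hat[i] = np.quantile(sample, p100)
rrmse = np.sqrt(np.mean((q_hat - q_true) ** 2)) / q_true
print(f"T=100yr quantile: true {q_true:.1f}, RRMSE over {n_real} samples of "
      f"{sample_years} yr: {rrmse:.2f}")
```

Repeating the loop for longer sample lengths reproduces, in miniature, the pattern the abstract reports: the error shrinks with record length but a residual underestimation of the upper tail persists.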


Author(s):  
R.A. Ploc ◽  
G.H. Keech

An unambiguous analysis of transmission electron diffraction effects requires two samplings of the reciprocal lattice (RL). However, extracting definitive information from the patterns is difficult even for a general orthorhombic case. The usual procedure has been to deduce the approximate variables controlling the formation of the patterns from qualitative observations. Our present purpose is to illustrate two applications of a computer programme written for the analysis of transmission selected-area diffraction (SAD) patterns: the study of RL spot shapes and of epitaxy.

When a specimen contains fine structure, the RL spots become complex shapes with extensions in one or more directions. If the number and directions of these extensions can be estimated from an SAD pattern, the exact spot shape can be determined by a series of refinements of the computer input data.
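The abstract does not expose the programme's internals; as a sketch of the geometry such an SAD analysis rests on, orthorhombic d-spacings and inter-spot angles follow from 1/d² = h²/a² + k²/b² + l²/c². A minimal illustration with a hypothetical cell:

```python
import numpy as np

def g_vector(hkl, a, b, c):
    """Reciprocal-lattice vector (1/Angstrom) of plane (hkl) for an orthorhombic cell."""
    h, k, l = hkl
    return np.array([h / a, k / b, l / c])

def d_spacing(hkl, a, b, c):
    """Interplanar spacing from 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2."""
    return 1.0 / np.linalg.norm(g_vector(hkl, a, b, c))

def interplanar_angle(hkl1, hkl2, a, b, c):
    """Angle (degrees) between two diffraction spots about the transmitted beam."""
    g1, g2 = g_vector(hkl1, a, b, c), g_vector(hkl2, a, b, c)
    cosang = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical orthorhombic cell (Angstrom); values are illustrative, not from the paper.
a, b, c = 3.23, 5.15, 5.05
print(f"d(110) = {d_spacing((1, 1, 0), a, b, c):.3f} A")
print(f"angle (110)/(011) = {interplanar_angle((1, 1, 0), (0, 1, 1), a, b, c):.1f} deg")
```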


2020 ◽  
Vol 637 ◽  
pp. 117-140 ◽  
Author(s):  
DW McGowan ◽  
ED Goldstein ◽  
ML Arimitsu ◽  
AL Deary ◽  
O Ormseth ◽  
...  

Pacific capelin Mallotus catervarius are planktivorous small pelagic fish that serve an intermediate trophic role in marine food webs. Due to the lack of a directed fishery or monitoring of capelin in the Northeast Pacific, limited information is available on their distribution and abundance, and how spatio-temporal fluctuations in capelin density affect their availability as prey. To provide information on life history, spatial patterns, and population dynamics of capelin in the Gulf of Alaska (GOA), we modeled distributions of spawning habitat and larval dispersal, and synthesized spatially indexed data from multiple independent sources from 1996 to 2016. Potential capelin spawning areas were broadly distributed across the GOA. Models of larval drift show the GOA’s advective circulation patterns disperse capelin larvae over the continental shelf and upper slope, indicating potential connections between spawning areas and observed offshore distributions that are influenced by the location and timing of spawning. Spatial overlap in composite distributions of larval and age-1+ fish was used to identify core areas where capelin consistently occur and concentrate. Capelin primarily occupy shelf waters near the Kodiak Archipelago, and are patchily distributed across the GOA shelf and inshore waters. Interannual variations in abundance along with spatio-temporal differences in density indicate that the availability of capelin to predators and monitoring surveys is highly variable in the GOA. We demonstrate that the limitations of individual data series can be compensated for by integrating multiple data sources to monitor fluctuations in distributions and abundance trends of an ecologically important species across a large marine ecosystem.
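As an illustration of the spatial-overlap step, a minimal sketch that intersects "core" cells of two gridded density fields above an occupancy quantile; the grids, threshold, and values are synthetic assumptions, not the study's composite distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical gridded densities (e.g., larval and age-1+ indices on a common grid).
larval = rng.gamma(0.5, 1.0, size=(40, 60))
age1p = rng.gamma(0.5, 1.0, size=(40, 60))

def core_area(density, quantile=0.75):
    """Boolean mask of cells at or above the given occupancy quantile ('core' cells)."""
    return density >= np.quantile(density, quantile)

# Cells that are core for both life stages: candidate areas of consistent occurrence.
overlap = core_area(larval) & core_area(age1p)
print(f"cells in both core areas: {overlap.mean():.1%} of the grid")
```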


2019 ◽  
Vol 10 (2) ◽  
pp. 117-125
Author(s):  
Dana Kubíčková ◽  
Vladimír Nulíček ◽  

The aim of the research project conducted at the University of Finance and Administration is to construct a new bankruptcy model. The intention is to use data on firms that had to cease their activities due to bankruptcy. The most common method for bankruptcy model construction is multivariate discriminant analysis (MDA). It makes it possible to derive the indicators most sensitive to future company failure as parts of the bankruptcy model. One of the assumptions for using the MDA method and ensuring reliable results is normal distribution and independence of the input data. The results of verifying this assumption, the third stage of the project, are presented in this article. We found that this assumption is met only for a few selected indicators. Better results were achieved for the indicators in the set of prosperous companies and one year prior to failure. The indicators selected for the bankruptcy model construction thus cannot be considered suitable for use with the MDA method.
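The abstract does not name the normality test used; as a sketch of this verification stage, the widely used Shapiro-Wilk test applied to two synthetic financial ratios (the ratio names and distributions are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical ratios for a sample of 80 firms: ROA roughly normal,
# a liquidity ratio heavily right-skewed (a common pattern in practice).
ratios = {
    "ROA": rng.normal(0.05, 0.04, 80),
    "current_ratio": rng.lognormal(0.4, 0.6, 80),
}

for name, values in ratios.items():
    w, p = stats.shapiro(values)  # Shapiro-Wilk test of normality
    verdict = "normal" if p > 0.05 else "not normal"
    print(f"{name}: W = {w:.3f}, p = {p:.3f} -> {verdict} at the 5% level")
```

Indicators that fail such a test, as most did in this study, violate the MDA assumption and argue for either transforming the inputs or choosing a distribution-free classifier.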

