Prediksi Tingkat Inflasi Bulanan Indonesia Menggunakan Metode Jaringan Saraf Tiruan (Predicting Indonesia's Monthly Inflation Rate with an Artificial Neural Network)

2021 ◽  
Vol 11 (2) ◽  
pp. 152-167
Author(s):  
B Hauriza ◽  
Muladi Muladi ◽  
I M Wirawan

Bank Indonesia defines inflation as a general and continuous increase in prices. A rise in the prices of goods and services counts as inflation when the increase is broad-based or drives up other prices. Rising prices of goods and services erode the value of money; inflation therefore reduces the value of money relative to goods and services in general. If inflation is well controlled, however, it can have a positive effect on economic growth. The aim of this study is to predict the inflation rate so that inflation can be controlled each month and yield a positive impact. The study uses an artificial neural network, a method well suited to time-series data with training data. The data are monthly inflation figures by expenditure group from December 2011 to January 2020, taken from Badan Pusat Statistik (Statistics Indonesia). The study is expected to help decide on appropriate actions based on the predictions. Testing several models, the best result came from a 7-15-1 configuration with a learning rate of 0.01, which achieved an MSE of 0.026. This result shows that an artificial neural network can predict inflation with high accuracy.
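As a rough illustration, the 7-15-1 configuration described above can be sketched as a plain feedforward network trained by full-batch gradient descent with a learning rate of 0.01. The synthetic sine-plus-noise series below is only a stand-in for the BPS inflation data, and the architecture details (tanh hidden units, linear output) are assumptions, not necessarily the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the monthly inflation series (the paper uses
# BPS expenditure-group data, which is not reproduced here).
t = np.arange(120)
series = 0.3 * np.sin(2 * np.pi * t / 12) + 0.05 * rng.standard_normal(120)

# Sliding windows: 7 lagged values predict the next month.
lags = 7
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:].reshape(-1, 1)

# 7-15-1 network: 7 inputs, 15 tanh hidden units, 1 linear output.
W1 = 0.1 * rng.standard_normal((lags, 15))
b1 = np.zeros(15)
W2 = 0.1 * rng.standard_normal((15, 1))
b2 = np.zeros(1)
lr = 0.01

mse_history = []
for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # linear output
    err = pred - y
    mse_history.append(float(np.mean(err ** 2)))
    g = 2 * err / len(X)                # dMSE/dpred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)      # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

With real data one would also hold out a test period to report the MSE the abstract quotes; here the loop only tracks training MSE.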

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4638
Author(s):  
Bummo Koo ◽  
Jongman Kim ◽  
Yejin Nam ◽  
Youngho Kim

In this study, post-fall detection algorithms were evaluated across datasets according to feature vectors (time-series and discrete data), classifiers (ANN and SVM), and four different processing conditions (normalization, equalization, an increased number of training data, and additional training with external data). Three-axis acceleration and angular velocity data were obtained from 30 healthy male subjects by attaching an IMU midway between the left and right anterior superior iliac spines (ASIS). Internal and external tests were performed using our lab dataset and the public SisFall dataset, respectively. The results showed that the ANN and SVM were best suited to the time-series and discrete data, respectively. Classification performance generally decreased when untrained motions from the public dataset were tested, so specific feature vectors derived from the raw data were necessary. Normalization made the SVM more effective but the ANN less so. Equalization increased sensitivity, even though it did not improve overall performance. Increasing the number of training data also improved classification performance. Machine learning proved vulnerable to untrained motions, so training data covering a wide variety of movements is needed.
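The normalization condition above amounts to standardizing each sensor channel with statistics estimated on the training set only, then applying the same transform to the external (cross-dataset) test set. A minimal sketch with synthetic IMU windows standing in for the real recordings:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 3-axis acceleration windows (windows x samples x axes);
# a stand-in for the IMU data described above, with gravity on the z axis.
windows = rng.normal(loc=[0.0, 0.0, 9.8], scale=[1.0, 1.0, 2.0],
                     size=(100, 128, 3))

# Per-channel z-score normalization fitted on the training split only,
# then applied unchanged to the held-out (external) split.
train, test = windows[:80], windows[80:]
mu = train.reshape(-1, 3).mean(0)
sigma = train.reshape(-1, 3).std(0)

train_n = (train - mu) / sigma
test_n = (test - mu) / sigma
```

Fitting the statistics on the training split only is what makes the cross-dataset comparison fair: the external set is transformed with parameters it never influenced.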


Author(s):  
Weida Zhong ◽  
Qiuling Suo ◽  
Abhishek Gupta ◽  
Xiaowei Jia ◽  
Chunming Qiao ◽  
...  

With the popularity of smartphones, large-scale road sensing data is being collected to perform traffic prediction, an important task in modern society. Because the sensors rove with the smartphones, the collected traffic data, which takes the form of multivariate time series, is often temporally sparse and unevenly distributed across regions. Moreover, different regions can have different traffic patterns, which makes it challenging to adapt models learned from regions with sufficient training data to target regions. Given that many regions may have very sparse data, it is also impossible to build individual models for each region separately. In this paper, we propose a meta-learning based framework named MetaTP to overcome these challenges. MetaTP has two key parts: a basic traffic prediction network (the base model) and meta-knowledge transfer. In the base model, a two-layer interpolation network maps the original time series onto uniformly spaced reference time points, so that temporal prediction can be performed effectively in the reference space. The meta-learning framework transfers knowledge from source regions with a large amount of data to target regions with only a few examples via fast adaptation, improving model generalizability on the target regions. Moreover, we use two memory networks to capture global patterns of spatial and temporal information across regions. We evaluate the proposed framework on two real-world datasets, and the experimental results show its effectiveness.
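The base model's first step, mapping irregular observations onto uniformly spaced reference time points, can be illustrated with a simple linear interpolant standing in for the learned two-layer interpolation network; the speed series below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
# Irregular, sparse speed observations on one road segment (hypothetical),
# as collected by roving smartphone sensors.
t_obs = np.sort(rng.uniform(0, 24, 15))                   # hours of day
v_obs = 50 + 10 * np.sin(2 * np.pi * t_obs / 24) + rng.normal(0, 2, 15)

# Map onto uniformly spaced reference time points so that downstream
# temporal prediction can operate in a regular space. A linear
# interpolant stands in for the learned interpolation network.
t_ref = np.linspace(0, 24, 49)                            # every 30 minutes
v_ref = np.interp(t_ref, t_obs, v_obs)
```

Once every segment's series lives on the same reference grid, a single predictor can be shared (and meta-adapted) across regions regardless of where each region's raw observations happened to fall in time.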


Author(s):  
H. Miyazaki ◽  
M. Nagai ◽  
R. Shibasaki

A methodology for automated human-settlement mapping is badly needed to exploit historical satellite data archives for urgent issues of global urban growth, such as disaster risk management, public health, food security, and urban management. Now that several initiatives using ASTER, Landsat, and TerraSAR-X have produced global data with spatial resolutions of 10-100 m, the next goal is time-series data that can support studies of urban development against the background of socioeconomics, disaster risk management, public health, transport, and other development issues. We developed an automated algorithm that detects human settlement by classifying built-up and non-built-up areas in time-series Landsat images. A machine learning algorithm, Learning with Local and Global Consistency (LLGC), was applied with improvements for remote sensing data. The algorithm can use MCD12Q1, a MODIS-based global land cover map with 500-m resolution, as training data, so no manual preparation of training data is required. In addition, we designed the method to composite multiple LLGC results into a single output to reduce uncertainty. Each LLGC result carries a confidence value ranging from 0.0 to 1.0 that represents the probability of being built-up or non-built-up. The median confidence over a period around a target time is expected to be a robust indicator of built-up or non-built-up areas in the face of satellite data quality issues such as cloud and haze contamination. Four scenes of Landsat data for each target year, 1990, 2000, 2005, and 2010, were chosen from the Landsat archive with cloud contamination of less than 20%. We built a system implementing these algorithms on the Data Integration and Analysis System (DIAS) at the University of Tokyo and processed 5200 Landsat scenes covering cities of more than one million people worldwide.
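The median compositing step can be sketched directly; the per-scene confidences below are random stand-ins for real LLGC outputs, and the 0.5 decision threshold is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical per-scene LLGC confidences (4 scenes x 100x100 pixels),
# each value in [0, 1]; one patch corrupted in a single scene, as a
# cloud-contaminated area would be.
conf = rng.uniform(0, 1, size=(4, 100, 100))
conf[0, :10, :10] = 0.0        # e.g. cloud contamination in scene 0

# Median across scenes: a composite robust to single-scene outliers
# such as cloud or haze contamination.
composite = np.median(conf, axis=0)
built_up = composite >= 0.5    # threshold into built-up / non-built-up
```

Because the median ignores a single extreme value among the four scenes, one contaminated acquisition cannot flip a pixel's label on its own.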


2022 ◽  
pp. 266-282
Author(s):  
Lei Zhang

In this research, artificial neural networks (ANNs) with various architectures are trained to generate the chaotic time-series patterns of the Lorenz attractor. The ANN training performance is evaluated with respect to the size and precision of the training data. The nonlinear autoregressive (NAR) model is first trained in open-loop mode. The trained model is then run with closed-loop feedback to predict the chaotic time-series outputs. The research goal is to use the designed NAR ANN model for the simulation and analysis of electroencephalogram (EEG) signals in order to study brain activity. A simple ANN topology with a single hidden layer of 3 to 16 neurons and 1 to 4 input delays is used. Training performance is measured by the averaged mean square error. It is found that training performance cannot be improved solely by increasing the training data size; it can, however, be improved by increasing the precision of the training data. This provides useful guidance for reducing the number of EEG data samples, and the corresponding acquisition time, needed for prediction.
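The open-loop/closed-loop distinction can be illustrated with a minimal stand-in: a quadratic autoregressive model with one input delay, fitted to a logistic-map series rather than the Lorenz data, trained open-loop on true past values and then iterated closed-loop on its own outputs:

```python
import numpy as np

# Chaotic series from the logistic map (a simple stand-in for the
# Lorenz attractor data used in the study).
x = np.empty(500)
x[0] = 0.4
for i in range(499):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])

# Open-loop training: fit the next value from one delayed input using
# a quadratic feature map (a crude stand-in for the NAR network's
# hidden layer). The true past value is always supplied as input.
X = x[:-1]
y = x[1:]
feats = np.column_stack([np.ones_like(X), X, X ** 2])
w, *_ = np.linalg.lstsq(feats, y, rcond=None)

# Closed-loop prediction: the model's own output is fed back as the
# next input, so errors compound along the chaotic trajectory.
pred = [x[0]]
for _ in range(30):
    p = pred[-1]
    pred.append(w[0] + w[1] * p + w[2] * p * p)
pred = np.array(pred)
```

Because the map is exactly quadratic, the fitted model tracks the series closely for the first dozen steps; with a real ANN and noisy data, the closed-loop horizon before divergence is what the training precision governs.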


2020 ◽  
Vol 12 (18) ◽  
pp. 3091
Author(s):  
Shuai Xie ◽  
Liangyun Liu ◽  
Jiangning Yang

Percentile features derived from Landsat time-series data are widely adopted in land-cover classification. However, the temporal distribution of Landsat valid observations is highly uneven across different pixels due to the gaps resulting from clouds, cloud shadows, snow, and the scan line corrector (SLC)-off problem. In addition, when applying percentile features, land-cover change in time-series data is usually not considered. In this paper, an improved percentile called the time-series model (TSM)-adjusted percentile is proposed for land-cover classification based on Landsat data. The Landsat data were first modeled using three different time-series models, and the land-cover changes were continuously monitored using the continuous change detection (CCD) algorithm. The TSM-adjusted percentiles for stable pixels were then derived from the synthetic time-series data without gaps. Finally, the TSM-adjusted percentiles were used for generating supervised random forest classifications. The proposed methods were implemented on Landsat time-series data of three study areas. The classification results were compared with those obtained using the original percentiles derived from the original time-series data with gaps. The results show that the land-cover classifications obtained using the proposed TSM-adjusted percentiles have significantly higher overall accuracies than those obtained using the original percentiles. The proposed method was more effective for forest types with obvious phenological characteristics and with fewer valid observations. In addition, it was also robust to the training data sampling strategy. Overall, the methods proposed in this work can provide accurate characterization of land cover and improve the overall classification accuracy based on such metrics. The findings are promising for percentile-based land cover classification using Landsat time series data, especially in the areas with frequent cloud coverage.
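The core idea, taking percentiles from a gap-free synthetic series produced by a fitted time-series model rather than from the gappy observations, can be sketched with a single first-order harmonic model (a simplification of the CCD-style models used in the paper); the NDVI profile and gap pattern below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic annual NDVI-like profile for one pixel, 8-day sampling.
doy = np.arange(1, 366, 8)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * (doy - 100) / 365)

# First-order harmonic design matrix (intercept + one sin/cos pair).
design = np.column_stack([np.ones_like(doy, dtype=float),
                          np.sin(2 * np.pi * doy / 365),
                          np.cos(2 * np.pi * doy / 365)])

# Simulate gaps: clouds/SLC-off remove ~40% of observations.
valid = rng.random(len(doy)) > 0.4

# Fit the model to valid observations only, then reconstruct a
# gap-free synthetic series at every acquisition date.
coef, *_ = np.linalg.lstsq(design[valid], ndvi[valid], rcond=None)
synthetic = design @ coef

# TSM-adjusted percentiles from the synthetic series, versus raw
# percentiles from the gappy observations.
p_adjusted = np.percentile(synthetic, [25, 50, 75])
p_raw = np.percentile(ndvi[valid], [25, 50, 75])
```

The adjusted percentiles are computed over a series with uniform temporal coverage, so they no longer depend on where the gaps happened to fall in the season.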


Forests ◽  
2019 ◽  
Vol 10 (11) ◽  
pp. 1040 ◽  
Author(s):  
Kai Cheng ◽  
Juanle Wang

Efficient methodologies for mapping forest types in complicated mountain areas are essential for implementing and monitoring sustainable forest management practices. Existing solutions for forest-type mapping focus primarily on supervised machine learning algorithms (MLAs) applied to remote sensing time-series images. However, MLAs are challenged by complex forest-type compositions, a lack of training data, the loss of temporal data to cloud obscuration, and the selection of input feature sets for mountainous areas. Time-weighted dynamic time warping (TWDTW) is a supervised classifier that adapts the dynamic time warping method for time-series analysis to land-cover classification. This study evaluates the performance of TWDTW using a combination of Sentinel-2 and Landsat-8 time-series images for complicated mountain forest-type classification in southern China, an area with complex topographic conditions and forest-type compositions. The classification outputs were compared with those produced by MLAs, namely random forest (RF) and support vector machine (SVM). The three forest-type maps obtained by TWDTW, RF, and SVM were highly consistent in spatial distribution. TWDTW outperformed SVM and RF, with a mean overall accuracy of 93.81% and a mean kappa coefficient of 0.93. TWDTW achieved this higher classification accuracy with even less training data, demonstrating the robustness and low sensitivity to training samples of the TWDTW method when applied to mountain forest-type classification.
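A minimal TWDTW-style distance can be sketched as ordinary dynamic time warping plus a time penalty. Note that TWDTW proper uses a logistic time weight; the linear penalty below, the profiles, and the weight `alpha` are simplifications for illustration:

```python
import numpy as np

def twdtw(a, ta, b, tb, alpha=0.1):
    """DTW distance between series a (at days ta) and b (at days tb),
    with a linear time penalty discouraging alignments between
    observations far apart in the season."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (abs(a[i - 1] - b[j - 1])
                    + alpha * abs(ta[i - 1] - tb[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Classify a pixel's NDVI profile against two synthetic reference
# patterns (hypothetical, for illustration only).
t = np.arange(0, 365, 30, dtype=float)
evergreen = np.full_like(t, 0.8)
deciduous = 0.5 + 0.3 * np.sin(2 * np.pi * (t - 120) / 365)
pixel = 0.5 + 0.28 * np.sin(2 * np.pi * (t - 115) / 365)

d_ev = twdtw(pixel, t, evergreen, t)
d_de = twdtw(pixel, t, deciduous, t)
label = "deciduous" if d_de < d_ev else "evergreen"
```

The time penalty is what lets TWDTW work with a single labeled profile per class: a pixel is assigned to the reference whose seasonal pattern it matches at the right time of year, which is why the method needs far fewer training samples than RF or SVM.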


2008 ◽  
Vol 15 (6) ◽  
pp. 1013-1022 ◽  
Author(s):  
J. Son ◽  
D. Hou ◽  
Z. Toth

Abstract. Various statistical methods are used to post-process operational Numerical Weather Prediction (NWP) products with the aim of reducing forecast errors, and they often require sufficiently large training data sets. Generating such a hindcast data set can be costly, and a well-designed algorithm should be able to reduce the required size of these data sets. This issue is investigated in the relatively simple case of bias correction by comparing a Bayesian bias-estimation algorithm with the conventional empirical method. As available forecast data sets are not large enough for a comprehensive test, synthetically generated time series representing the analysis (truth) and the forecast are used to increase the sample size. Since these synthetic time series retain the statistical characteristics of the observations and of operational NWP model output, the results of this study can be extended to real observations and forecasts, as confirmed by a preliminary test with real data. By using the climatological mean and standard deviation of the meteorological variable in question, together with the statistical relationship between the forecast and the analysis, the Bayesian bias estimator outperforms the empirical approach in the accuracy of the estimated bias, and it can reduce the required training sample size by a factor of 3. This advantage of the Bayesian approach arises because it is less liable to sampling error under consecutive sampling. These results suggest that a carefully designed statistical procedure may reduce the need for the costly generation of large hindcast datasets.
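The contrast between the empirical and Bayesian estimators can be sketched with a generic conjugate-normal shrinkage estimator; the prior below is illustrative and is not the paper's climatology-based formulation, but it shows the same mechanism, prior information stabilizing a bias estimate from a short training sample:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic forecast-minus-analysis errors: true bias 1.5,
# day-to-day error standard deviation 3.0, short training set.
true_bias, sigma = 1.5, 3.0
errors = true_bias + sigma * rng.standard_normal(10)

# Empirical estimator: the plain sample mean of the errors.
bias_empirical = errors.mean()

# Bayesian (conjugate-normal) estimator: prior N(0, tau^2) on the bias
# with known error variance sigma^2; the posterior mean shrinks the
# sample mean toward the prior mean of 0.
tau = 2.0
n = len(errors)
shrink = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
bias_bayes = shrink * errors.mean()
```

With few samples the empirical mean is noisy, while the shrinkage estimator trades a little systematic underestimation for much lower variance; this is how prior (climatological) information can substitute for a factor-of-3 larger training sample.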


2020 ◽  
Vol 12 (18) ◽  
pp. 2918
Author(s):  
Yang Liu ◽  
Ronggao Liu

Forest cover mapping based on multi-temporal satellite observations usually uses dozens of features as inputs, which requires a huge amount of training data and introduces many ill effects. In this paper, a simple but efficient approach is proposed to map forest cover from time series of satellite observations without classifiers or training data. The method focuses on the key step of forest mapping, the separation of forests from herbaceous vegetation, since non-vegetated areas can easily be identified by the annual maximum vegetation index. We found that the greenness of forests is generally stable during the maturity period, whereas herbaceous vegetation shows no similar greenness plateau. Consequently, the mean greenness during the vegetation maturity period should be larger for forests than for herbaceous vegetation, while its standard deviation should be smaller. A combination of these two features with a few thresholds can identify forests. The proposed approach was demonstrated by mapping the extents of different forest types from MODIS observations. The results show an overall accuracy of 91.92-95.34% and a Kappa coefficient of 0.84-0.91 when compared with reference datasets generated from fine-resolution Google Earth imagery. The proposed approach can greatly simplify the procedures of forest cover mapping.
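The two-feature rule, high mean greenness and low standard deviation during the maturity period, can be sketched with synthetic EVI-like profiles; the maturity window and thresholds below are illustrative choices, not the paper's calibrated values:

```python
import numpy as np

doy = np.arange(1, 366, 16, dtype=float)   # 16-day sampling, one year

# Synthetic greenness profiles: the forest holds a stable high plateau
# through the growing season; grassland peaks briefly and declines.
forest = 0.3 + 0.4 * (1 / (1 + np.exp(-(doy - 120) / 15))) \
               * (1 / (1 + np.exp((doy - 290) / 15)))
grass = 0.2 + 0.45 * np.exp(-((doy - 200) ** 2) / (2 * 25 ** 2))

# Mean and std of greenness within an assumed maturity window.
win = (doy >= 150) & (doy <= 270)

def features(evi):
    return evi[win].mean(), evi[win].std()

f_mean, f_std = features(forest)
g_mean, g_std = features(grass)

# Threshold rule in the spirit of the paper: high, stable greenness
# (high mean, low std) indicates forest. Thresholds are illustrative.
def is_forest(evi, mean_thr=0.55, std_thr=0.08):
    m, s = features(evi)
    return m >= mean_thr and s <= std_thr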

