Penerapan Logika Samar dalam Peramalan Data Runtun Waktu (Application of Fuzzy Logic in Time Series Data Forecasting)

2011 ◽  
Vol 3 (2) ◽  
pp. 11-15
Author(s):  
Seng Hansun

Recently, many soft computing methods have been used in time series analysis. One of these methods is the fuzzy logic system. In this paper, we implement a fuzzy logic system to predict non-stationary time series data, using the Mackey-Glass chaotic time series. We use MATLAB to predict the series, which has been divided into four groups of input-output pairs; these groups then serve as the input variables of the fuzzy logic system. Two scenarios are examined in this paper: the first uses seven fuzzy sets, and the second uses fifteen fuzzy sets. The results show that the fuzzy system with fifteen fuzzy sets gives better forecasting results than the fuzzy system with seven fuzzy sets. Index Terms—forecasting, fuzzy logic, Mackey-Glass chaotic, MATLAB, time series analysis
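The division of the chaotic series into lagged input-output pairs can be sketched as follows. Python with numpy stands in for the MATLAB workflow; the crude Euler integration, the Mackey-Glass parameters, and the 4-lag window are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def mackey_glass(n=1200, tau=17, beta=0.2, gamma=0.1, p=10, dt=1.0, x0=1.2):
    """Generate a Mackey-Glass-style chaotic series by (crude) Euler integration:
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**p) - gamma * x(t)."""
    x = np.full(n + tau, x0)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** p)
                                - gamma * x[t])
    return x[tau:]

def lagged_pairs(series, n_inputs=4):
    """Form (x(t-3), ..., x(t)) -> x(t+1) input-output pairs,
    the kind of grouping fed to a fuzzy inference system."""
    X = np.column_stack([series[i:len(series) - n_inputs + i]
                         for i in range(n_inputs)])
    y = series[n_inputs:]
    return X, y

series = mackey_glass()
X, y = lagged_pairs(series, n_inputs=4)
```

Each row of `X` would then be fuzzified against the seven (or fifteen) fuzzy sets covering the range of the series before rule evaluation.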

2021 ◽  
Vol 13 (3) ◽  
pp. 1187
Author(s):  
Bokyong Shin ◽  
Mikko Rask

Online deliberation research has recently developed automated indicators to assess the deliberative quality of the large volumes of user-generated online data. While most previous studies have developed indicators based on content analysis and network analysis, time-series data and associated methods have been studied less thoroughly. This article contributes to the literature by proposing indicators that combine network analysis with time-series analysis, arguing that this combination helps monitor how online deliberation evolves. Based on Habermasian deliberative criteria, we develop six throughput indicators and demonstrate their application in the OmaStadi participatory budgeting project in Helsinki, Finland. The results show that these indicators yield intuitive figures and visualizations that facilitate collective intelligence about ongoing processes and ways to solve problems promptly.
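One way a network-plus-time-series throughput indicator could be computed is sketched below; the reply-log format and the daily reciprocity measure are illustrative assumptions for the sketch, not one of the article's six indicators:

```python
from collections import defaultdict
from datetime import datetime, date

# Hypothetical comment log: (timestamp, author, author being replied to or None).
comments = [
    (datetime(2020, 5, 1, 9),  "anna",  None),
    (datetime(2020, 5, 1, 10), "ben",   "anna"),
    (datetime(2020, 5, 1, 11), "anna",  "ben"),
    (datetime(2020, 5, 2, 11), "carla", "ben"),
    (datetime(2020, 5, 2, 12), "anna",  "ben"),
]

def reciprocity_by_day(log):
    """Daily fraction of reply edges that are reciprocated (A->B and B->A):
    a simple time series computed over the discussion's reply network."""
    edges_by_day = defaultdict(set)
    for ts, author, target in log:
        if target is not None:
            edges_by_day[ts.date()].add((author, target))
    return {day: sum((b, a) in edges for a, b in edges) / len(edges)
            for day, edges in sorted(edges_by_day.items())}
```

Plotting such a series over the life of a participatory budgeting discussion is the kind of monitoring signal the article argues for.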


2016 ◽  
Vol 50 (1) ◽  
pp. 41-57 ◽  
Author(s):  
Linghe Huang ◽  
Qinghua Zhu ◽  
Jia Tina Du ◽  
Baozhen Lee

Purpose – A wiki is a new form of information production and organization that has become one of the most important knowledge resources. In recent years, as the number of wiki users has grown, the "free rider" problem has become serious. To motivate editors to contribute more to a wiki system, it is important to fully understand their contribution behavior. The purpose of this paper is to explore the dynamics of editors' contribution behavior in wikis.

Design/methodology/approach – After developing a dynamic model of contribution behavior, the authors employed both metrological and clustering methods to process the time series data. The experimental data were collected from Baidu Baike, a renowned Chinese wiki system similar to Wikipedia.

Findings – There are four categories of editors: "testers," "dropouts," "delayers" and "stickers." Testers contribute the least content and stop contributing rapidly after editing a few articles. Dropouts stop contributing completely after editing a large amount of content. Delayers do not stop contributing during the observation time, but they may stop contributing in the near future. Stickers, who keep contributing and edit the most content, are the core editors. In addition, there are significant time-of-day and holiday effects on the number of editors' contributions.

Originality/value – Using time series analysis, new characteristics of editors and editor types were identified. This research also had a larger sample than earlier studies, so the results are more representative and can help managers optimize wiki systems and formulate incentive strategies for editors.
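The clustering step in the approach can be sketched on synthetic contribution curves. This is a minimal illustration (Python with scikit-learn, only two of the four editor types, invented weekly edit counts), not the authors' actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
weeks = 20

# Hypothetical weekly edit counts for 40 editors: "testers" stop after a few
# weeks, "stickers" keep contributing throughout the observation window.
testers  = np.hstack([rng.poisson(5, (20, 3)), np.zeros((20, weeks - 3))])
stickers = rng.poisson(5, (20, weeks))
X = np.vstack([testers, stickers]).astype(float)

# Cluster the raw contribution curves; cumulative or normalized curves
# would be a reasonable alternative preprocessing choice.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

With four clusters and real edit histories, the same procedure would let the "dropout" and "delayer" shapes emerge as well.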


2013 ◽  
Vol 10 (83) ◽  
pp. 20130048 ◽  
Author(s):  
Ben D. Fulcher ◽  
Max A. Little ◽  
Nick S. Jones

The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
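The idea of a reduced, feature-based representation of time series can be illustrated with a toy feature vector; the three summary statistics below are common choices picked for illustration, not the specific operations from the library of over 9000 methods:

```python
import numpy as np

def features(x):
    """A tiny feature vector in the spirit of highly comparative analysis:
    each series is reduced to a few interpretable summary statistics."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    lag1 = np.corrcoef(z[:-1], z[1:])[0, 1]   # lag-1 autocorrelation
    above = np.mean(x > x.mean())             # fraction of time above the mean
    diff_sd = np.std(np.diff(x)) / x.std()    # roughness of increments
    return np.array([lag1, above, diff_sd])

rng = np.random.default_rng(1)
noise = rng.normal(size=500)            # uncorrelated series
walk = np.cumsum(rng.normal(size=500))  # strongly autocorrelated series
F = np.vstack([features(noise), features(walk)])
```

Stacking such rows for many series gives the kind of matrix on which datasets can be organized, neighbours retrieved, and classifiers trained.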


1986 ◽  
Vol 2 (3) ◽  
pp. 331-349 ◽  
Author(s):  
John J. Beggs

This article proposes the use of spectral methods to pool cross-sectional replications (N) of time series data (T) for time series analysis. Spectral representations readily suggest a weighting scheme to pool the data. The asymptotically desirable properties of the resulting estimators seem to translate satisfactorily into samples as small as T = 25 with N = 5. Simulation results, Monte Carlo results, and an empirical example help confirm this finding. The article concludes that there are many empirical situations where spectral methods can be used where they were previously eschewed.
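The pooling idea can be sketched by averaging periodograms across the N replications; the uniform weighting below is a simplification of the article's spectral weighting scheme, and the AR(1) panel is invented to match the T = 25, N = 5 setting:

```python
import numpy as np

rng = np.random.default_rng(42)
T, N, phi = 25, 5, 0.6

def ar1(T, phi, rng):
    """One short AR(1) replication; all N replications share the same spectrum."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

panel = np.array([ar1(T, phi, rng) for _ in range(N)])

# Per-replication periodograms, pooled by averaging across the cross-section.
# A single T = 25 periodogram is very noisy; pooling N of them reduces the
# variance of the spectral estimate without lengthening any one series.
periodograms = np.abs(np.fft.rfft(panel, axis=1)) ** 2 / T
pooled = periodograms.mean(axis=0)
```

The pooled estimate concentrates power at low frequencies, as an AR(1) spectrum should, even though each individual replication is too short to show this reliably.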


Author(s):  
Pēteris Grabusts ◽  
Arkady Borisov

Clustering Methodology for Time Series Mining

A time series is a sequence of real-valued data representing measurements of a variable at time intervals. Time series analysis is a well-established task; in recent years, however, research has explored using clustering for the purposes of time series analysis. The main motivation for representing a time series in the form of clusters is to better capture the main characteristics of the data. The central goal of this paper is to investigate a clustering methodology for time series data mining, to explore time series similarity measures and to use them in the analysis of time series clustering results. More sophisticated similarity measures include the Longest Common Subsequence (LCSS) method. Two tasks have been completed in this paper. The first was to define time series similarity measures; it was established that the LCSS method detects time series similarity better than the Euclidean distance. The second was to explore the capabilities of the classical k-means clustering algorithm for time series clustering. The experiments lead to the conclusion that the results of time series clustering with the k-means algorithm correspond to those obtained with the LCSS method, so the clustering results for the specific time series are adequate.
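A minimal LCSS implementation for real-valued series might look as follows; the `eps` matching threshold and the normalization by the shorter length are common conventions assumed here, and the full LCSS often adds a time-warping window that this sketch omits:

```python
def lcss(a, b, eps=0.5):
    """Longest Common Subsequence similarity for real-valued series:
    two points match when they differ by less than eps. Returns a
    similarity in [0, 1] (1.0 = every point of the shorter series matched)."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(a[i - 1] - b[j - 1]) < eps:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n] / min(m, n)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 2.1, 3.1, 4.1]        # same shape, slightly shifted values
z = [5.0, 1.0, 4.0, 2.0, 0.0]   # similar values, unrelated ordering
```

Unlike the Euclidean distance, LCSS tolerates small amplitude shifts and unequal lengths, which is why it can detect similarity that pointwise distances miss.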


2021 ◽  
Vol 12 ◽  
Author(s):  
Sungyul Chang ◽  
Unseok Lee ◽  
Min Jeong Hong ◽  
Yeong Deuk Jo ◽  
Jin-Baek Kim

Yield prediction for crops is essential information for food security. A high-throughput phenotyping platform (HTPP) generates data covering the complete life cycle of a plant. However, these data are rarely used for yield prediction because of the lack of quality image-analysis methods, of yield data associated with HTPP, and of time-series analysis methods for yield prediction. To overcome these limitations, this study employed multiple deep learning (DL) networks to extract high-quality HTPP data, establish an association between HTPP data and the yield performance of crops, and select essential time intervals using machine learning (ML). Images of Arabidopsis were taken 12 times under an environmentally controlled HTPP over 23 days after sowing (DAS). First, features were extracted from the images using the DL network U-Net with an SE-ResNeXt101 encoder and divided into early (15–21 DAS) and late (∼21–23 DAS) pre-flowering developmental stages using the physiological characteristics of the Arabidopsis plant. Second, the late pre-flowering stage at 23 DAS could be predicted with the ML algorithm XGBoost from only a portion of the early pre-flowering stage (17–21 DAS); this was confirmed in an additional biological experiment (P < 0.01). Finally, the projected area (PA) was converted to fresh weight (FW), and the correlation coefficient between FW and predicted FW was 0.85. This is the first study to analyze time-series data to predict the FW of related but different developmental stages and to predict the PA. The results enable an understanding of the FW of Arabidopsis, the yield of leafy plants, and the total biomass consumed in vertical farming. Moreover, the study highlights the reduction of time-series data needed for examining traits of interest and the future application of time-series analysis in various HTPPs.
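The early-to-late prediction step can be sketched as follows; scikit-learn's GradientBoostingRegressor stands in for XGBoost, and the plant-growth data are entirely synthetic, so this shows the shape of the analysis rather than the study's actual model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n_plants = 200

# Hypothetical projected-area (PA) measurements at five early time points
# (standing in for 17-21 DAS); the late-stage PA (standing in for 23 DAS)
# is roughly a noisy scaling of the last early observation.
early_pa = np.cumsum(rng.uniform(1, 3, size=(n_plants, 5)), axis=1)
late_pa = early_pa[:, -1] * rng.uniform(1.4, 1.6, n_plants)

# Train on 150 plants, evaluate on the held-out 50 by the correlation
# between predicted and observed late-stage PA.
model = GradientBoostingRegressor(random_state=0)
model.fit(early_pa[:150], late_pa[:150])
pred = model.predict(early_pa[150:])
r = np.corrcoef(pred, late_pa[150:])[0, 1]
```

Converting the predicted PA to fresh weight would then be a separate calibration step, as in the study's PA-to-FW estimation.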

