analysis of time series
Recently Published Documents


TOTAL DOCUMENTS

732
(FIVE YEARS 135)

H-INDEX

47
(FIVE YEARS 3)

Author(s):  
Roland Barthel ◽  
Ezra Haaf ◽  
Michelle Nygren ◽  
Markus Giese

Abstract. Visual analysis of time series in hydrology is frequently seen as a crucial step to becoming acquainted with the nature of the data, as well as detecting unexpected errors, biases, etc. Human eyes, in particular those of a trained expert, are well suited to recognize irregularities and distinct patterns. However, there are limits as to what the eye can resolve and process; moreover, visual analysis is by definition subjective and has low reproducibility. Visual inspection is frequently mentioned in publications, but rarely described in detail, even though it may have significantly affected decisions made in the process of performing the underlying study. This paper presents a visual analysis of groundwater hydrographs that has been performed in relation to attempts to classify groundwater time series as part of developing a new concept for prediction in data-scarce groundwater systems. Within this concept, determining the similarity of groundwater hydrographs is essential. As standard approaches for similarity analysis of groundwater hydrographs do not yet exist, different approaches were developed and tested. This provided the opportunity to carry out a comparison between visual analysis and formal, automated classification approaches. The presented visual classification was carried out on two sets of time series from central Europe and Fennoscandia. It is explained why and where visual classification can be beneficial but also where the limitations and challenges associated with the approach lie. It is concluded that systematic visual analysis of time series in hydrology, despite its subjectivity and low reproducibility, should receive much more attention.
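The abstract does not name the automated classification approaches that were tested; as a purely illustrative sketch, one common automated alternative to visual classification is correlation-based hierarchical clustering of standardized hydrographs, shown below (all data and parameter choices are hypothetical).

```python
# Hypothetical sketch: grouping groundwater hydrographs by similarity.
# The paper compares visual and automated classification but does not prescribe
# an algorithm; correlation-based hierarchical clustering is used here purely
# as an illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def classify_hydrographs(series: np.ndarray, n_classes: int = 4) -> np.ndarray:
    """series: array of shape (n_wells, n_timesteps), one hydrograph per row."""
    # Standardize each hydrograph so only the shape of the fluctuations matters.
    z = (series - series.mean(axis=1, keepdims=True)) / series.std(axis=1, keepdims=True)
    # Pairwise Pearson correlation turned into a distance matrix (1 - r).
    dist = 1.0 - np.corrcoef(z)
    np.fill_diagonal(dist, 0.0)
    # Agglomerative clustering on the condensed distance matrix.
    tree = linkage(squareform(dist, checks=False), method="average")
    return fcluster(tree, t=n_classes, criterion="maxclust")

# Example with synthetic data: 20 wells, 10 years of monthly levels.
rng = np.random.default_rng(0)
levels = rng.normal(size=(20, 120)).cumsum(axis=1)
print(classify_hydrographs(levels))
```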


2022 ◽  
Vol 132 ◽  
pp. 01010
Author(s):  
Eva Kalinová ◽  
Yaroslava Kostiuk ◽  
Denisa Michutová

The labour market, i.e. the supply of and demand for labour, is shaped by individuals offering their work (supply) and by companies offering jobs (demand). It is an important subject, as it forms part of the main factor markets. Data on the labour market are used to analyse its movements and thereby forecast its approximate development. The basic source is data from the database of the Ministry of Labour and Social Affairs of the Czech Republic. The objective of this paper is to analyse the development of the supply of and demand for labour in the years 2010-2020 and to forecast its development until 2025. The time series analysis is performed using artificial neural networks, which allow the development between 2010 and 2020 to be analysed and the further development of labour supply and demand to be forecast until 2025. The research shows that the development until 2025 will not be very favourable: demand will considerably exceed supply, meaning there will be more vacancies than workers. To fill the vacancies and keep operating, companies will try to resolve this situation by hiring workers from other countries. The results of this paper may serve as a basis for further labour market research.
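The network architecture is not specified in the abstract; the sketch below shows one generic way such a lag-based neural-network forecast could be set up, with an entirely synthetic placeholder series and illustrative parameters.

```python
# Illustrative sketch only: the paper's exact neural-network setup is not given
# in the abstract. This shows a generic lag-based MLP forecast of an annual
# labour-market series, extended recursively to 2025. All values are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Placeholder annual series for 2010-2020 (e.g. registered vacancies, thousands).
vacancies = np.linspace(40.0, 350.0, 11) + rng.normal(0, 10, 11)

lags = 3
X = np.array([vacancies[i:i + lags] for i in range(len(vacancies) - lags)])
y = vacancies[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)

# Recursive one-step-ahead forecast for 2021-2025.
history = list(vacancies)
for year in range(2021, 2026):
    nxt = model.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    history.append(nxt)
    print(year, round(float(nxt), 1))
```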


2022 ◽  
Vol 132 ◽  
pp. 01012
Author(s):  
Jakub Horák ◽  
Dominik Kaisler

The paper deals with the development of a specific company’s stock price time series. The aim of the paper is to use the time series method for a detailed analysis and evaluation of the development of Apple Inc. stock prices. Daily data from 2000 to 2020, daily data from the period of the economic crisis between 2007 and 2009, and daily data from the Covid-19 pandemic period from March 2020 to the end of the same year are used. The data from the period 2000-2020 show a gradual increase in Apple’s stock prices. The most common factor leading to an increase in stock prices is the launch of a new product or service on the global market. Conversely, the reasons for declines in stock prices are customer dissatisfaction, the excess of demand over supply, or the political situation. The analysis of time series for the period of the economic crisis shows that, thanks to development, innovation and the constant introduction of new products to the market, the company was not significantly affected by the crisis, and neither were its stock prices. Naturally, there were some fluctuations in prices, but at the end of 2009 the company even reached the highest stock prices in its history to that date. The analysis of time series during the global Covid-19 pandemic shows a steady rise in stock prices. Currently, the company sells more and more products and introduces new services that help us work, study or entertain ourselves in these difficult times, in the safety of our homes.
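As a rough illustration of how the three analysis windows could be summarized from daily closing prices (the paper's actual data source and tooling are not stated; the series below is a synthetic placeholder, not real AAPL data):

```python
# Rough illustration of summarizing the three analysis windows from daily
# closing prices. The paper's data source is not stated; `prices` here is a
# synthetic placeholder series indexed by business days, not real AAPL data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
dates = pd.bdate_range("2000-01-03", "2020-12-31")
prices = pd.Series(np.exp(rng.normal(0.0004, 0.02, dates.size).cumsum()), index=dates)

periods = {
    "full period 2000-2020": ("2000-01-01", "2020-12-31"),
    "financial crisis 2007-2009": ("2007-01-01", "2009-12-31"),
    "Covid-19, Mar-Dec 2020": ("2020-03-01", "2020-12-31"),
}

for name, (start, end) in periods.items():
    window = prices.loc[start:end]
    total_change = window.iloc[-1] / window.iloc[0] - 1.0
    # A 200-day moving average highlights the longer-term trend in each window.
    trend = window.rolling(200, min_periods=1).mean()
    print(f"{name}: {total_change:+.1%} change, trend ends at {trend.iloc[-1]:.2f}")
```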


2021 ◽  
Author(s):  
Muhammad Haris Naveed ◽  
Umair Hashmi ◽  
Nayab Tajved ◽  
Neha Sultan ◽  
Ali Imran

This paper explores whether Generative Adversarial Networks (GANs) can produce realistic network load data that can be used to train machine learning models in lieu of real data. To this end, we evaluate the performance of three recent GAN architectures on the Telecom Italia data set across a set of qualitative and quantitative metrics. Our results show that GAN-generated synthetic data is indeed similar to real data, and that forecasting models trained on it achieve performance comparable to those trained on real data.
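A minimal sketch of the "train on synthetic, evaluate on real" style of comparison the abstract describes; the GAN itself is omitted and both data sets are random placeholders, so only the evaluation pattern is illustrated.

```python
# Illustrative "train on synthetic vs. train on real" comparison, as hinted at
# in the abstract. The GAN that would produce `synthetic` is omitted; both
# arrays are placeholders with shape (n_series, n_timesteps).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def make_supervised(series: np.ndarray, lags: int = 6):
    """Turn each load series into (lagged inputs, next-step target) pairs."""
    X, y = [], []
    for s in series:
        for i in range(len(s) - lags):
            X.append(s[i:i + lags])
            y.append(s[i + lags])
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
real = np.abs(rng.normal(size=(50, 144)).cumsum(axis=1))       # placeholder real load
synthetic = np.abs(rng.normal(size=(50, 144)).cumsum(axis=1))  # placeholder GAN output

X_test, y_test = make_supervised(real[:10])      # held-out "real" cells
for name, train_set in [("real", real[10:]), ("synthetic", synthetic)]:
    X_tr, y_tr = make_supervised(train_set)
    model = Ridge().fit(X_tr, y_tr)
    print(name, "MAE on real test data:",
          round(mean_absolute_error(y_test, model.predict(X_test)), 3))
```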


2021 ◽  
Author(s):  
François Ritter

Abstract. Errors, gaps and outliers complicate and sometimes invalidate the analysis of time series. While most fields have developed their own strategies to clean raw data, no generic procedure has been promoted to standardize the pre-processing. This lack of harmonization makes the inter-comparison of studies difficult and leads to screening methods that are usually ambiguous or case-specific. This study provides a generic pre-processing procedure (called past, implemented in R) dedicated to any univariate time series. Past is based on data binning and decomposes the time series into a long-term trend and a cyclic component (quantified by a new metric, the Stacked Cycles Index) before finally aggregating the data. Outliers are flagged with an enhanced boxplot rule called Logbox. Three different Earth Science datasets (contaminated with gaps and outliers) are successfully cleaned and aggregated with past, illustrating the robustness of a procedure that can be valuable to any discipline.
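The Logbox formula is not given in the abstract; for orientation, the sketch below applies the classical boxplot (Tukey fences) rule, which Logbox is described as enhancing, to a binned univariate series.

```python
# Orientation sketch only: the classical boxplot (Tukey) outlier rule applied
# to a binned univariate series. The paper's Logbox rule refines this idea, but
# its exact formulation is not given in the abstract, so the standard k = 1.5
# fences are used here.
import numpy as np
import pandas as pd

def flag_outliers(times: pd.Series, values: pd.Series, bin_width: str = "1D",
                  k: float = 1.5) -> pd.Series:
    """Return a boolean mask of values outside the boxplot fences of their bin."""
    df = pd.DataFrame({"value": values.to_numpy()}, index=pd.to_datetime(times))
    grouped = df.groupby(pd.Grouper(freq=bin_width))["value"]
    q1 = grouped.transform(lambda v: v.quantile(0.25))
    q3 = grouped.transform(lambda v: v.quantile(0.75))
    iqr = q3 - q1
    mask = (df["value"] < q1 - k * iqr) | (df["value"] > q3 + k * iqr)
    return mask.reset_index(drop=True)

# Example: a month of hourly data with a few injected spikes.
t = pd.date_range("2021-01-01", periods=24 * 30, freq="h")
rng = np.random.default_rng(2)
v = pd.Series(rng.normal(10, 1, size=t.size))
v.iloc[[100, 500]] = 50  # artificial outliers
print(flag_outliers(pd.Series(t), v).sum(), "points flagged")
```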


Author(s):  
Esma Mouine ◽  
Yan Liu ◽  
Jincheng Sun ◽  
Mathieu Nayrolles ◽  
Mahzad Kalantari

Author(s):  
Alessandro Longo ◽  
Stefano Bianchi ◽  
Guillermo Valdes ◽  
Nicolas Arnaud ◽  
Wolfango Plastino

Abstract Data acquired by the Virgo interferometer during the second part of the O3 scientific run, referred to as O3b, were analysed with the aim of characterising the onset and time evolution of scattered light noise in connection with the variability of microseismic noise in the environment surrounding the detector. The adaptive algorithm used, called pytvfemd, is suitable for the analysis of time series that are both nonlinear and nonstationary. It made it possible to obtain the first oscillatory mode of the differential arm motion degree of freedom of the detector during days affected by scattered light noise. The mode’s envelope, i.e. its instantaneous amplitude, is then correlated with the motion of the West end bench, a known source of scattered light during O3. The relative velocity between the West end test mass and the West end optical bench is used as a predictor of scattered light noise. Higher values of correlation are obtained in periods of higher seismic noise in the microseismic frequency band. This is also confirmed by the signal-to-noise ratio (SNR) of scattered light glitches from GravitySpy for the January-March 2020 period. The obtained results suggest that the adopted methodology is well suited for scattered light noise characterisation and monitoring in gravitational wave interferometers.
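A sketch of the envelope-correlation step described above; the pytvfemd decomposition is omitted, a placeholder signal stands in for the first oscillatory mode, and the Hilbert transform is assumed here as the standard way to obtain the instantaneous amplitude.

```python
# Sketch of the envelope-correlation step described in the abstract. The
# pytvfemd decomposition itself is omitted; `first_mode` stands in for the
# first oscillatory mode of the differential arm motion channel, and the
# Hilbert transform is assumed as the way to get its instantaneous amplitude.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
fs = 50.0                                   # Hz, placeholder sampling rate
t = np.arange(0, 600, 1 / fs)

bench_velocity = np.abs(np.sin(2 * np.pi * 0.15 * t))   # placeholder predictor
first_mode = (bench_velocity * np.sin(2 * np.pi * 5 * t)
              + 0.05 * rng.normal(size=t.size))          # placeholder mode

envelope = np.abs(hilbert(first_mode))       # instantaneous amplitude
r = np.corrcoef(envelope, bench_velocity)[0, 1]
print(f"Pearson correlation between envelope and predictor: {r:.2f}")
```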


2021 ◽  
Vol 2137 (1) ◽  
pp. 012063
Author(s):  
Liming Song ◽  
Zhimin Chen ◽  
XinXin Meng ◽  
Shuai Kang

Abstract Based on line loss, this paper constructs an indicator system composed of the inherent attributes and time characteristics of a line, and proposes a K-Means line-loss cluster analysis model built on this indicator system. Lines are classified according to the clustering results. The model achieves a Calinski-Harabasz (CH) index of 314.51, a Silhouette Coefficient of 0.19, and a running time of 0.508 s, which is a considerable improvement over the traditional algorithm and has guiding significance for the field of line loss analysis.
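A minimal sketch of the reported evaluation, assuming a standard k-means workflow with the two quality scores named in the abstract; the indicator matrix here is a random placeholder rather than the paper's line-loss features.

```python
# Illustrative sketch of the evaluation reported in the abstract: k-means
# clustering of a line-loss indicator matrix scored with the Calinski-Harabasz
# index and the Silhouette Coefficient. The indicator features are placeholders;
# the paper's feature construction is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score, silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Placeholder indicator matrix: one row per line, columns = inherent attributes
# plus time-characteristic features derived from line loss.
indicators = rng.normal(size=(500, 8))

X = StandardScaler().fit_transform(indicators)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

print("CH index:", round(calinski_harabasz_score(X, labels), 2))
print("Silhouette Coefficient:", round(silhouette_score(X, labels), 2))
```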

