dependent data
Recently Published Documents


TOTAL DOCUMENTS: 695 (five years: 117)
H-INDEX: 42 (five years: 4)

MAUSAM ◽ 2022 ◽ Vol 53 (2) ◽ pp. 165-176
Author(s): R. P. KANE

The time series of the SOI (Southern Oscillation Index, the Tahiti minus Darwin sea-level atmospheric pressure difference) was spectrally analysed by a simple method, MEM-MRA, in which periodicities are detected by MEM (Maximum Entropy Method) and then used in MRA (Multiple Regression Analysis) to estimate their amplitudes and phases. From these, the three or four most prominent periodicities were used for reconstruction and prediction. Using 1935-80 as dependent data, the reconstructed SOI values matched the observed values well, and most El Niños (SOI minima) and La Niñas (SOI maxima) were located correctly; for the independent data (1980 onwards), however, the matching was poor. Omitting earlier data and using 1945-80, 1955-80, or 1965-80 as dependent data again gave poor matching for 1980 onwards. When only data from 1980 onwards were used as dependent data, the matching was better, indicating that the spectral characteristics have changed considerably with time and that recent data are more appropriate for further predictions. The 1997 El Niño was reproduced only with data from 1985 onwards. For 1990 onwards, a single wave of 3.5 years was appropriate and explained the 1997 and 1994 events, but only one (1991) of the three complex and rapid events that occurred during 1989-95. The UCLA group of Dr. Ghil has been using the SSA (Singular Spectrum Analysis)-MEM combination for SOI analysis. For the 1980s they obtained very good matching, but the 1989-95 structures were not reproduced. For recent years, their SSA-filtered SOI (used for prediction) is a simple sinusoid of ~3.5 years; it predicted the El Niño of 1997 only at its peak, and even with data up to February 1997, the abrupt commencement of the event in March 1997 and its abrupt end in June 1998 could not be predicted. Using only a 3.5-year wave, an El Niño was expected for 2000-2001. However, a very long-lasting La Niña seems to be operative, and there are as yet (September 2001) no indications of any El Niño-like conditions.
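
The abstract does not include code, but the MRA step is easy to sketch. The following minimal Python illustration fits sinusoids at prescribed periods by least squares and extrapolates them; the MEM peak-picking is omitted, a single ~3.5-year period is simply assumed, and the synthetic series stands in for the SOI. None of this reproduces Kane's actual data or procedure.

```python
import numpy as np

def fit_sinusoids(t, y, periods):
    """MRA step: regress y on cosine/sine pairs at the given periods
    and return the coefficients plus a predictor for new times."""
    def design(tt):
        cols = [np.ones_like(tt)]
        for p in periods:
            w = 2 * np.pi / p
            cols += [np.cos(w * tt), np.sin(w * tt)]
        return np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    return beta, lambda tt: design(tt) @ beta

# Illustrative dependent/independent split on a synthetic "SOI-like"
# series: fit on the first 45 "years", predict the remainder.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / 12.0)                 # 60 years, monthly
y = np.sin(2 * np.pi * t / 3.5) + 0.5 * rng.standard_normal(t.size)
train = t < 45
# The periods would come from MEM; here a ~3.5-yr wave is assumed.
_, predict = fit_sinusoids(t[train], y[train], periods=[3.5])
print("out-of-sample correlation:",
      np.corrcoef(y[~train], predict(t[~train]))[0, 1])
```

If the spectral characteristics drift over time, as the abstract reports for the SOI, the coefficients fitted on the dependent span stop describing the independent span, and the out-of-sample correlation drops accordingly.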


Entropy ◽ 2021 ◽ Vol 24 (1) ◽ pp. 9
Author(s): Muhammed Rasheed Irshad ◽ Radhakumari Maya ◽ Francesco Buono ◽ Maria Longobardi

Tsallis introduced a non-logarithmic generalization of Shannon entropy, namely Tsallis entropy, which is non-extensive. Sati and Gupta proposed a cumulative residual information measure based on this non-extensive entropy, namely the cumulative residual Tsallis entropy (CRTE), and its dynamic version, the dynamic cumulative residual Tsallis entropy (DCRTE). In the present paper, we propose non-parametric kernel-type estimators for CRTE and DCRTE when the observations exhibit a ρ-mixing dependence condition. Asymptotic properties of the estimators are established under suitable regularity conditions. A numerical evaluation of the proposed estimators is exhibited, and a Monte Carlo simulation study is carried out.
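
The paper's estimators and their ρ-mixing asymptotics are not spelled out in the abstract. The sketch below is only a plug-in illustration of the general idea, assuming the integral form ξ_α(X) = (1/(α−1)) ∫₀^∞ [F̄(x) − F̄(x)^α] dx, α ≠ 1, used in parts of this literature (which may differ from the authors' exact definition), with a Gaussian-kernel-smoothed survival function.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def kernel_survival(x, data, h):
    """Smoothed survival estimate: Fbar_n(x) = mean_i Phi((X_i - x)/h)."""
    return norm.cdf((data[:, None] - x[None, :]) / h).mean(axis=0)

def crte_kernel(data, alpha, h=None, n_grid=2000):
    """Plug-in kernel estimator of CRTE of order alpha (alpha != 1),
    using (1/(alpha-1)) * integral of (Fbar - Fbar**alpha) over a grid,
    assuming nonnegative support. An illustrative form, not the paper's."""
    data = np.asarray(data, float)
    if h is None:                                # rule-of-thumb bandwidth
        h = 1.06 * data.std() * data.size ** (-1 / 5)
    grid = np.linspace(0.0, data.max() + 4 * h, n_grid)
    sbar = kernel_survival(grid, data, h)
    return trapezoid(sbar - sbar ** alpha, grid) / (alpha - 1)

# Under this integral form, Exp(1) data at alpha = 2 should give ~0.5.
rng = np.random.default_rng(1)
print(crte_kernel(rng.exponential(1.0, 500), alpha=2.0))
```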


2021 ◽ Vol 49 (6)
Author(s): Marie du Roy de Chaumaray ◽ Matthieu Marbac ◽ Valentin Patilea


Author(s): Arvind Singh ◽ Surya Prakash Pandey

Financial institutions face many challenges in managing and marketing the campaign leads held in their data warehouses. Managing marketing-campaign leads in a dependent data mart poses real-time updating and recording difficulties, especially when many campaigns run in parallel. To protect customers from being contacted too often for sales-based marketing, the concept of a novelty skeleton is introduced to lock out customers who have been targeted by a sales-based campaign for a specified time period. During the novelty frame, the customer cannot be targeted by another sales-based campaign categorized under the same channel. The introduction of the novelty skeleton has increased the difficulties of campaign management and data management; the data-management difficulties include timely updates and a robust storage system for campaign leads. In this paper, we present the concept of the slowly changing dimension in a dependent data mart and study how it can be used in the data marts of financial institutions to update and maintain customers' marketing-campaign records.
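
The abstract names the slowly changing dimension but does not fix a variant. A Type 2 SCD, which preserves history, is a natural fit for campaign records because it lets a novelty-frame check see when a customer was last targeted. Below is a minimal Python sketch under that assumption; the list of dicts stands in for a data-mart table, and all field names are hypothetical.

```python
from datetime import date

def scd2_upsert(dimension, key, new_attrs, today=None):
    """Type 2 slowly-changing-dimension update: expire the current
    row for `key` if its attributes changed, then insert a new
    current row. `dimension` is a list of dict rows standing in
    for a data-mart table; field names are hypothetical."""
    today = today or date.today()
    current = next((r for r in dimension
                    if r["key"] == key and r["is_current"]), None)
    if current and all(current.get(k) == v for k, v in new_attrs.items()):
        return  # attributes unchanged, nothing to record
    if current:
        current["is_current"] = False
        current["valid_to"] = today           # expire the old version
    dimension.append({"key": key, **new_attrs,
                      "valid_from": today, "valid_to": None,
                      "is_current": True})

# A campaign lead changes segment; both versions remain queryable,
# so a novelty-frame rule can check when the customer was last targeted.
mart = []
scd2_upsert(mart, "cust-001", {"segment": "retail", "last_campaign": "C1"})
scd2_upsert(mart, "cust-001", {"segment": "premium", "last_campaign": "C7"})
print(mart)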


2021 ◽ pp. 1-26
Author(s): Ulrich Hounyo

This paper introduces a novel wild bootstrap for dependent data (WBDD) as a means of calculating standard errors of estimators and constructing confidence regions for parameters based on dependent heterogeneous data. The consistency of the bootstrap variance estimator for smooth functions of the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first-order asymptotic validity of the WBDD in distribution approximation is established when the data are assumed to satisfy a near-epoch-dependence condition and under the framework of the smooth function model. The WBDD offers a viable alternative to the existing nonparametric bootstrap methods for dependent data. It preserves the second-order correctness property of the blockwise bootstrap (provided we choose the external random variables appropriately) for stationary time series and smooth functions of the mean. This desirable property is not known to hold for extant wild-type bootstrap methods for dependent data. Simulation studies illustrate the finite-sample performance of the WBDD.
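
The paper's exact construction is not given in the abstract. The sketch below only illustrates the family the WBDD belongs to: a wild bootstrap with dependent external multipliers (in the spirit of Shao's dependent wild bootstrap), applied to the sample mean of an AR(1) series. The particular multiplier design and second-order refinements of the WBDD itself are in the paper.

```python
import numpy as np

def dependent_wild_bootstrap_mean(x, B=1000, l=10, rng=None):
    """Bootstrap the sample mean of a dependent series by multiplying
    centered observations by a dependent external multiplier series:
    AR(1) multipliers (mean 0, variance 1) that decorrelate over
    roughly l lags. A sketch of the idea, not the paper's exact WBDD."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, float)
    n, xbar = x.size, x.mean()
    rho = 0.5 ** (1.0 / l)
    boot = np.empty(B)
    for b in range(B):
        w = np.empty(n)
        w[0] = rng.standard_normal()
        innov = rng.standard_normal(n) * np.sqrt(1 - rho ** 2)
        for s in range(1, n):
            w[s] = rho * w[s - 1] + innov[s]
        boot[b] = xbar + np.mean((x - xbar) * w)
    return boot

# AR(1) data: the dependent-multiplier bootstrap SE should exceed the
# naive i.i.d. formula, which ignores the serial correlation.
rng = np.random.default_rng(2)
x = np.zeros(500)
for s in range(1, 500):
    x[s] = 0.6 * x[s - 1] + rng.standard_normal()
boot = dependent_wild_bootstrap_mean(x, rng=rng)
print("bootstrap SE:", boot.std(), "  naive SE:", x.std() / np.sqrt(x.size))
```

Because the multipliers are correlated across nearby observations, the bootstrap variance picks up the sample autocovariances and so approximates the long-run variance of the mean, which an i.i.d. wild bootstrap would miss.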


2021 ◽ Vol 869 (1) ◽ pp. 012020
Author(s): S A Raup ◽ S Patmiarsih ◽ R D Juniar ◽ B Setyadji

Tuna and tuna-like fisheries play a vital role in Indonesian livelihoods, especially in archipelagic waters. Despite this importance, however, general data-collection activities for tuna are limited, and incomplete scientific knowledge and insufficient data have hampered assessment. The purpose of this study was to analyse how a fisheries-dependent data system could transform data quality. The e-logbook has the best attributes for reaching this goal, especially for small-scale tuna fisheries. Characterised by low cost and vast spatial and temporal coverage, it makes a strong case for expanding the programme and monitoring it carefully. Analysis of fisheries indicators showed promising results, especially for filling gaps that could not be covered by research.


Information ◽ 2021 ◽ Vol 12 (11) ◽ pp. 439
Author(s): J. Kasmire ◽ Anran Zhao

Machine learning (ML) is increasingly useful as data grow in volume and accessibility. ML can perform tasks (e.g., categorisation, decision making, anomaly detection) through experience and without explicit instruction, even when the data are too vast, complex, highly variable, or error-ridden to be analysed in other ways. ML is therefore well suited to natural language, images, and other complex, messy data available in large and growing volumes. Selecting an ML model for a task depends on many factors: models vary in the supervision they need, the error levels they tolerate, and their ability to account for order or temporal context, among other things. Importantly, ML methods for tasks that use explicitly ordered or time-dependent data struggle with errors or data asymmetry. Most data are (implicitly) ordered or time-dependent, potentially allowing a hidden 'arrow of time' to affect ML performance on non-temporal tasks. This research explores the interaction of ML and implicit order, using two ML models to automatically classify (a non-temporal task) tweets (temporal data) under conditions that balance the volume and complexity of the data. The results show that performance was affected, suggesting that researchers should carefully consider time when matching ML models to tasks, even when time is only implicitly included.
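
The tweet data and the two models are not identified in the abstract. The sketch below reproduces only the shape of the experiment, on synthetic drifting data standing in for time-stamped tweets: the same classifier evaluated once with a chronological train/test split (respecting the arrow of time) and once with a shuffled split (ignoring it).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for time-stamped tweets: one feature drifts over
# time, so the optimal decision boundary moves as time passes.
rng = np.random.default_rng(3)
n = 4000
t = np.arange(n)
X = rng.standard_normal((n, 20))
X[:, 0] += t / n * 3.0                        # gradual drift in one feature
y = (X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(n) > t / n * 3.0).astype(int)

def accuracy(train_idx, test_idx):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    return clf.score(X[test_idx], y[test_idx])

chrono = accuracy(t < 3000, t >= 3000)        # train on past, test on future
perm = rng.permutation(n)
shuffled = accuracy(perm[:3000], perm[3000:]) # ignore the arrow of time
print(f"chronological split: {chrono:.3f}   shuffled split: {shuffled:.3f}")
```

On drifting data like this, the chronological split typically scores lower than the shuffled one, because the model trained only on the past faces a boundary that has since moved; shuffled evaluation hides exactly the effect the paper is probing.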

