Self-similarity principle: the reduced description of randomness

Open Physics ◽  
2013 ◽  
Vol 11 (6) ◽  
Author(s):  
Raoul Nigmatullin ◽  
José Machado ◽  
Rui Menezes

Abstract A new general fitting method based on the Self-Similar (SS) organization of random sequences is presented. The proposed analytical function helps to fit the response of many complex systems when their recorded data form a self-similar curve. The verified SS principle opens new possibilities for fitting economic, meteorological and other complex data when a mathematical model is absent but a reduced description in terms of some universal set of fitting parameters is necessary. The fitting function is verified on economic data (price of a commodity versus time) and weather data (the Earth’s mean surface temperature versus time); for these nontrivial cases it proves possible to obtain a very good fit of the initial data set. The general conditions for applying this fitting method to the response of many complex systems, and the possibilities it offers for forecasting, are discussed.
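
The abstract does not give the analytical form of the fitting function; as a hedged illustration of the underlying idea only, the sketch below fits a log-periodic power law, the generic solution of the self-similarity relation F(λz) = λ^ν F(z) truncated to its first harmonic, to synthetic data. All parameter values are placeholders, not the authors' method.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_periodic_power_law(x, A, nu, a, phi, ln_lambda):
    """Power law modulated by a log-periodic oscillation: the generic form of a
    curve satisfying the scaling relation F(lambda*x) = lambda**nu * F(x)."""
    return A * x**nu * (1.0 + a * np.cos(2.0 * np.pi * np.log(x) / ln_lambda + phi))

# Synthetic self-similar curve with noise (stand-in for price/temperature data).
rng = np.random.default_rng(0)
x = np.linspace(1.0, 100.0, 500)
y = log_periodic_power_law(x, 2.0, 0.5, 0.1, 0.3, 1.2) + rng.normal(0, 0.05, x.size)

popt, _ = curve_fit(log_periodic_power_law, x, y,
                    p0=[1.0, 0.4, 0.05, 0.0, 1.0])
print("fitted parameters:", popt)
```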

2020 ◽  
Vol 10 (4) ◽  
pp. 697-721
Author(s):  
D. Reid Evans

Fundamental to complex dynamic systems theory is the assumption that the recursive behavior of complex systems results in the generation of physical forms and dynamic processes that are self-similar and scale-invariant. Such fractal-like structures and the organismic benefits they engender have been widely noted in physiology, biology, and medicine, yet discussions of the fractal-like nature of language have remained at the level of metaphor in applied linguistics. Motivated by the lack of empirical evidence supporting this assumption, the present study examines the extent to which the use and development of complex syntax in a learner of English as a second language demonstrate the characteristics of self-similarity and scale invariance at nested timescales. Findings suggest that the use and development of syntactic complexity are governed by fractal scaling, as the dynamic relationships among the subconstructs of syntax maintain their complexity and variability across multiple temporal scales. Overall, fractal analysis appears to be a fruitful analytic tool for discerning the dynamic relationships among the multiple component parts of complex systems as they interact over time.
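
The abstract does not spell out the scaling analysis used; detrended fluctuation analysis (DFA) is one standard way to estimate fractal scaling in a developmental time series such as per-session syntactic complexity scores. A minimal sketch on synthetic data, illustrative only:

```python
import numpy as np

def dfa(series, scales):
    """Detrended fluctuation analysis: returns the scaling exponent alpha.
    alpha ~ 0.5 for uncorrelated noise; persistent long-range correlations
    give alpha > 0.5."""
    profile = np.cumsum(series - np.mean(series))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Detrend each segment with a least-squares line, keep the residual RMS.
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t))**2))
               for seg in segs]
        flucts.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(1)
print(dfa(rng.normal(size=4096), scales=[16, 32, 64, 128, 256]))  # ~0.5
```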


Animals ◽  
2020 ◽  
Vol 10 (8) ◽  
pp. 1412
Author(s):  
André Mensching ◽  
Marleen Zschiesche ◽  
Jürgen Hummel ◽  
Armin Otto Schmitt ◽  
Clément Grelet ◽  
...  

The aim of this work was to develop an innovative multivariate plausibility assessment (MPA) algorithm to differentiate between ‘physiologically normal’, ‘physiologically extreme’ and ‘implausible’ observations in simultaneously recorded data. The underlying concept is based on the fact that different measurable parameters are often physiologically linked. If physiologically extreme observations occur due to disease, an incident or hormonal cycles, usually more than one measurable trait is affected. In contrast, extreme values of a single trait are most likely implausible if all other traits show values in the normal range. For demonstration purposes, the MPA was applied to a time series data set collected on 100 cows in 10 commercial dairy farms. Continuous measurements comprised climate data, intra-reticular pH and temperature, jaw movement and locomotion behavior. Non-continuous measurements included milk yield, milk components, milk mid-infrared spectra and blood parameters. After application of the MPA, the pH data showed the highest share of implausible observations, at approximately 5% of the measured values; the other traits showed implausible values in up to 2.5% of cases. The MPA demonstrated the ability to improve data quality for downstream analyses by detecting implausible observations and to discover physiologically extreme conditions even within complex data structures. At this stage, the MPA is not a fully developed and validated management tool, but rather a basic concept for future work, which can be extended and modified as required.
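
As a hedged illustration of the plausibility logic described above (a toy rule, not the published MPA algorithm), one can score each observation by column-wise z-scores: an isolated extreme in a single trait is flagged implausible, while coherent extremes across several traits are treated as physiologically extreme.

```python
import numpy as np

def classify_observations(X, z_extreme=3.0):
    """Toy plausibility rule: rows are time points, columns are simultaneously
    recorded traits. One extreme trait alone -> 'implausible'; several extreme
    traits at once -> 'physiologically extreme'."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))  # column-wise z-scores
    labels = []
    for row in z:
        n_extreme = int(np.sum(row > z_extreme))
        if n_extreme == 0:
            labels.append("normal")
        elif n_extreme == 1:
            labels.append("implausible")              # isolated outlier in one trait
        else:
            labels.append("physiologically extreme")  # coherent multi-trait shift
    return labels

# Synthetic reticular pH / body temperature pairs (values illustrative only).
rng = np.random.default_rng(2)
X = rng.normal([6.3, 39.0], [0.1, 0.2], size=(200, 2))
X[50] = [4.0, 39.0]   # extreme pH, normal temperature -> implausible
X[120] = [5.5, 41.0]  # joint shift in both traits -> physiologically extreme
labels = classify_observations(X)
print(labels[50], "|", labels[120])
```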


2007 ◽  
Vol 38 ◽  
pp. 32-37 ◽  
Author(s):  
MMA Sarker

Long memory processes, where positive correlations between observations far apart in time and space decay very slowly to zero with increasing time lag, occur quite frequently in fields such as hydrology and economics. Stochastic processes that are invariant in distribution under judicious scaling of time and space, called self-similar processes, can parsimoniously model the long-run properties of phenomena exhibiting long-range dependence. Four heuristic estimation approaches are presented in this study so that the self-similarity parameter H, which gives the correlation structure in long memory processes, can be effectively estimated. Finally, the methods presented in this paper were applied to two observed time series, namely the Nile River data set and the VBR (Variable-Bit-Rate) data set. The estimated values of H for the two data sets found from the different methods suggest that not all methods are equally good for estimation.
Keywords: long memory process, long-range dependence, self-similar process, Hurst parameter, Gaussian noise.
DOI: 10.3329/jme.v38i0.898. Journal of Mechanical Engineering, Vol. 38, Dec. 2007, pp. 32-37.
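
The paper's four heuristic estimators are not reproduced in the abstract; rescaled-range (R/S) analysis is one classical member of that family. A minimal sketch, illustrative rather than the paper's exact implementation:

```python
import numpy as np

def hurst_rs(x, min_chunk=16):
    """Estimate the Hurst parameter H by rescaled-range (R/S) analysis:
    H is the slope of log(R/S) against log(chunk size)."""
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()   # range of cumulative deviations
            s = chunk.std()             # standard deviation of the chunk
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h

rng = np.random.default_rng(3)
print(hurst_rs(rng.normal(size=8192)))  # ~0.5 for uncorrelated Gaussian noise
```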


2012 ◽  
Vol 23 (2) ◽  
pp. 55-74
Author(s):  
Ali M. A. Rushdi

This paper reviews earlier work claiming that water-resource inputs closely follow a self-similar model, which is a manifestation of more general multi-fractal behavior. Such claims have been repeatedly verified for wet terrains, but are still to be tested for arid regions. The paper presents a tutorial introduction to the concepts and phenomena of self-similarity and multi-fractality for water-resource inputs. It also discusses the important implications of these phenomena for water consumption and storage. While conservative water consumption is an all-time virtue, it must be stressed and strictly imposed during periods of water shortages or droughts, which are more likely to be extended than brief. Periods of heavy rainfall and floods should be optimally utilized to improve existing water storage levels. A mandatory policy for water conservation is strongly advocated by this paper, which contributes a scientific confirmation of the intuitively observed self-similarity phenomenon in arid regions. The paper presents a pilot and pioneering statistical study of a rainfall data set in a desert arid region. This study clearly indicates that the data set has multi-fractal behavior, and that a mono-fractal model does not suit it. To the best of our knowledge, this result is the first published decisive confirmation of the self-similarity and multi-fractality phenomena for rainfall data in arid regions.
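
A standard way to make the mono- versus multi-fractal distinction operational is the partition-function method: box sums of the rainfall measure are raised to several moment orders q, and curvature of the fitted exponents τ(q) in q signals multi-fractality, while a mono-fractal gives a τ(q) linear in q. A minimal sketch on synthetic data, not the paper's data or code:

```python
import numpy as np

def scaling_exponents(measure, qs, scales):
    """For each moment order q, fit sum_i mu_i(s)**q ~ s**tau(q) over box
    sizes s, and return the exponents tau(q)."""
    measure = np.asarray(measure, dtype=float)
    measure = measure / measure.sum()        # normalize to a probability measure
    taus = []
    for q in qs:
        log_chi = []
        for s in scales:
            n_box = len(measure) // s
            boxes = measure[:n_box * s].reshape(n_box, s).sum(axis=1)
            boxes = boxes[boxes > 0]         # empty boxes drop out of the sum
            log_chi.append(np.log(np.sum(boxes**q)))
        tau, _ = np.polyfit(np.log(scales), log_chi, 1)
        taus.append(tau)
    return np.array(taus)

# Synthetic intermittent rainfall-like series (placeholder for arid-region data).
rng = np.random.default_rng(4)
rain = rng.gamma(0.2, 5.0, size=4096)
print(scaling_exponents(rain, qs=[0.5, 1, 2, 3], scales=[8, 16, 32, 64]))
```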


Author(s):  
И.В. КОТЕНКО ◽  
А.М. КРИБЕЛЬ ◽  
О.С. ЛАУТА ◽  
И.Б. САЕНКО

The paper discusses an approach to detecting cyber attacks on computer networks based on identifying anomalies in network traffic by assessing its self-similarity property. Methods for identifying long-range dependence in fractional Brownian motion and in the real network traffic of computer networks are considered. It is shown that the traffic of a telecommunication network is a self-similar structure and that its behavior is close to fractional Brownian motion. Fractal analysis and mathematical statistics were used as tools in the development of this approach. The issues of the software implementation of the proposed approach and the formation of a data set containing network packets of computer networks are considered. The experimental results obtained using the generated data set demonstrated the existence of self-similarity in the network traffic of computer networks and confirmed the high efficiency of the proposed approach: it makes it possible to detect cyber attacks in real or near-real time.
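
As a hedged sketch of the general idea (the paper's own estimator and thresholds are not given in the abstract), one can estimate the Hurst parameter of packet-count windows with the aggregated-variance method and flag windows whose estimate leaves a band regarded here, illustratively, as typical for benign self-similar traffic:

```python
import numpy as np

def hurst_variance_time(x, block_sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate: for self-similar traffic the
    variance of block means decays like m**(2H - 2) with block size m."""
    log_m, log_var = [], []
    for m in block_sizes:
        n_block = len(x) // m
        means = x[:n_block * m].reshape(n_block, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(means.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0

def flag_anomalies(traffic, window=1024, h_low=0.6, h_high=0.9):
    """Slide over a packet-count series; flag windows whose Hurst estimate
    leaves the benign band (band limits are illustrative, not the paper's)."""
    flags = []
    for start in range(0, len(traffic) - window + 1, window):
        h = hurst_variance_time(traffic[start:start + window])
        flags.append((start, round(h, 3), not (h_low <= h <= h_high)))
    return flags

rng = np.random.default_rng(5)
# Uncorrelated Poisson arrivals give H ~ 0.5, outside the illustrative band,
# so every window below is flagged; genuinely self-similar traffic would not be.
print(flag_anomalies(rng.poisson(20, size=4096)))
```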


2018 ◽  
Vol 1 (1) ◽  
Author(s):  
Timon Cheng-Yi Liu ◽  
Quan-Guang Zhang ◽  
Chong-Yun Wu ◽  
Luo-Dan Yang ◽  
Ling Zhu ◽  
...  

Objective In MSSE, we divided male 2.5-month-old Sprague-Dawley rats into the following 4 groups: control (C), habitual swimming (SW), Alzheimer’s disease (AD) induction without swimming (AD), and habitual swimming followed by AD induction (SA), and found perfect resistance of habitual swimming to AD induction using the P-value statistics of 5 behavioral parameters of the rats and 23 physiological and biochemical parameters of their hippocampus. The topological differences of the four groups were further calculated in this paper using the quantitative difference (QD) and a self-similar approach. Methods 1. The logarithm to base golden section τ (lt) is called the golden logarithm. It was found that σ = lt σ ≈ 0.710439287156503. 2. For a process from x1 to x2, lx(1,2) = lt(x2/x1) and its absolute value are called the process logarithm and its QD, QDx(1,2). There are QD threshold values (αx, βx, γx) of a function x, which can be calculated in terms of σ. The function x is taken to be constant if QDx(1,2) < αx. A function in/far from its function-specific homeostasis is called a normal/dysfunctional function. A normal function can resist a disturbance under its threshold so that QDx(1,2) < βx. For a dysfunctional function, the QD is defined as significant if βx ≤ QDx(1,2) < γx and as extraordinarily significant if QDx(1,2) ≥ γx. 3. Self-similarity has been studied in the fractal literature: a pattern is self-similar if it does not vary with spatial or temporal scale. The first-order self-similarity condition leads to a power law between two data sets A = {xi} and B = {yi}: yi = ai xi if the QDi of ai against the average of {ai} is smaller than βmin = min{βi} and the average QD of {QDi} is smaller than αmin = min{αi}. 4. The σ algorithm for integrative biology was established based on high-order self-similarity. The parameters that contribute to the topological difference are the biomarkers. Results The 28-dimensional data set consisted of all 28 parameters. First-order self-similarity held true for the 28-dimensional data sets between groups C and SW. The topological algorithm applied to the other groups suggested three AD biomarkers: protein carbonyl, the granule density of presynaptic synaptophysin in hippocampal CA1, and malondialdehyde intensity. The first two biomarkers were completely reversed by exercise pretreatment, while the third was only partially reversed. Conclusions Exercise pretraining exerts partial benefits on AD that support its use as a promising new therapeutic option for the prevention of neurodegeneration in the elderly and/or AD population.
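
The constant quoted in Methods can be checked numerically: σ is the fixed point of the golden logarithm, i.e. the solution of x = lt(x) = ln x / ln τ with τ = (√5 − 1)/2 the golden section. A minimal verification sketch (illustrative check only, not the paper's σ algorithm); plain fixed-point iteration diverges here because |d/dx (ln x / ln τ)| > 1 at the root, so a bracketing root finder is used instead:

```python
from math import log, sqrt
from scipy.optimize import brentq

tau = (sqrt(5) - 1) / 2                 # golden section, ~0.6180339887
f = lambda x: x - log(x) / log(tau)     # zero of f is the fixed point of lt

sigma = brentq(f, 0.5, 0.9)             # f changes sign on [0.5, 0.9]
print(sigma)                            # ~0.710439287156503, matching the abstract
```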


2020 ◽  
Vol 501 (2) ◽  
pp. 1663-1676
Author(s):  
R Barnett ◽  
S J Warren ◽  
N J G Cross ◽  
D J Mortlock ◽  
X Fan ◽  
...  

ABSTRACT We present the results of a new, deeper, and complete search for high-redshift 6.5 < z < 9.3 quasars over 977 deg2 of the VISTA Kilo-Degree Infrared Galaxy (VIKING) survey. This exploits a new list-driven data set providing photometry in all bands Z, Y, J, H, Ks, for all sources detected by VIKING in J. We use the Bayesian model comparison (BMC) selection method of Mortlock et al., producing a ranked list of just 21 candidates. The sources ranked 1, 2, 3, and 5 are the four known z > 6.5 quasars in this field. Additional observations of the other 17 candidates, primarily DESI Legacy Survey photometry and ESO FORS2 spectroscopy, confirm that none is a quasar. This is the first complete sample from the VIKING survey, and we provide the computed selection function. We include a detailed comparison of the BMC method against two other selection methods: colour cuts and minimum-χ2 SED fitting. We find that: (i) BMC produces eight times fewer false positives than colour cuts, while also reaching 0.3 mag deeper, (ii) the minimum-χ2 SED-fitting method is extremely efficient but reaches 0.7 mag less deep than the BMC method, and selects only one of the four known quasars. We show that BMC candidates, rejected because their photometric SEDs have high χ2 values, include bright examples of galaxies with very strong [O iii] λλ4959,5007 emission in the Y band, identified in fainter surveys by Matsuoka et al. This is a potential contaminant population in Euclid searches for faint z > 7 quasars, not previously accounted for, and that requires better characterization.


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 1054
Author(s):  
Rozaimi Zakaria ◽  
Abd. Fatah Wahab ◽  
Isfarita Ismail ◽  
Mohammad Izat Emir Zulkifly

This paper discusses the construction of a type-2 fuzzy B-spline model for modeling the complex uncertainty of surface data. To construct this model, type-2 fuzzy set theory, which includes type-2 fuzzy number concepts and type-2 fuzzy relations, is used to define the complex uncertainty of surface data in terms of type-2 fuzzy data/control points. These type-2 fuzzy data/control points are blended with the B-spline surface function to produce the proposed model, which can be visualized and analyzed further. Various processes, namely fuzzification, type-reduction and defuzzification, are defined to achieve a crisp type-2 fuzzy B-spline surface representing complex uncertain surface data. The paper ends with a numerical example of terrain modeling, which demonstrates the model's effectiveness in handling complex uncertain data.
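
The paper's construction is for surfaces and full type-2 fuzzy sets; as a reduced one-dimensional analogue (interval-valued control points standing in for type-2 fuzzy ones, all values illustrative), the sketch below blends lower/crisp/upper control points with the same B-spline basis and applies a simple defuzzification:

```python
import numpy as np
from scipy.interpolate import BSpline

# Interval-valued ("fuzzified") control points: lower, crisp and upper values,
# a reduced stand-in for the paper's type-2 fuzzy data/control points.
crisp = np.array([0.0, 2.0, 1.0, 3.0, 2.5, 4.0])
spread = 0.4                                   # illustrative uncertainty width
lower, upper = crisp - spread, crisp + spread

k = 3                                          # cubic B-spline
# Clamped knot vector: len(t) = n_control + k + 1.
t = np.concatenate(([0.0]*k, np.linspace(0, 1, len(crisp) - k + 1), [1.0]*k))
u = np.linspace(0, 1, 100)

# Blending each set of control points with the same basis yields a band of
# curves; defuzzification then collapses the band to a single crisp curve.
curve_lo = BSpline(t, lower, k)(u)
curve_mid = BSpline(t, crisp, k)(u)
curve_hi = BSpline(t, upper, k)(u)
crisp_out = (curve_lo + 2*curve_mid + curve_hi) / 4  # centroid-style defuzzification
print(crisp_out[:5])
```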


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S79-S80
Author(s):  
Joanne Huang ◽  
Zahra Kassamali Escobar ◽  
Rupali Jain ◽  
Jeannie D Chan ◽  
John B Lynch ◽  
...  

Abstract Background In an effort to support stewardship endeavors, the MITIGATE (a Multifaceted Intervention to Improve Prescribing for Acute Respiratory Infection for Adults and Children in Emergency Department and Urgent Care Settings) Toolkit was published in 2018, aiming to reduce unnecessary antibiotics for viral respiratory tract infections (RTIs). At the University of Washington, we have incorporated strategies from this toolkit at our urgent care clinics. This study aims to present solutions to some of the challenges we experienced. Methods This was a retrospective observational study conducted at Valley Medical Center (Sept 2019-Mar 2020) and University of Washington (Jan 2019-Feb 2020) urgent care clinics. Patients were identified through ICD-10 diagnosis codes included in the MITIGATE toolkit. The primary outcome was the identification of challenges encountered and solutions developed during this process. Results We encountered five challenges during our roll-out of MITIGATE. First, using both ICD-9 and ICD-10 codes can lead to inaccurate data collection. Second, technical support for coding a complex data set is essential and should be secured before beginning stewardship interventions of this scale. Third, unintentional incorrect diagnosis selection was common and may require reeducation of prescribers on proper selection. Fourth, focusing on singular issues rather than multiple outcomes is more feasible and can offer several opportunities for stewardship interventions. Lastly, changing prescribing behavior can cause unintended tension during implementation. Modifying the benchmarks measured, allowing for bi-directional feedback, and identifying provider champions can help maintain open communication. Conclusion Resources such as the MITIGATE toolkit are helpful for implementing standardized, data-driven stewardship interventions. We experienced several challenges, including a complex data build, errors with diagnostic coding, providing constructive feedback while maintaining positive stewardship relationships, and choosing feasible outcomes to measure. We present solutions to these challenges with the aim of providing guidance to those considering using this toolkit for outpatient stewardship interventions. Disclosures All Authors: No reported disclosures


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Sven Lißner ◽  
Stefan Huber

Abstract Background GPS-based cycling data are increasingly available for traffic planning these days. However, the recorded data often contain more than bicycle trips alone: tracks recorded while using modes of transport other than the bicycle, or long periods at work locations while tracking is still running, are only some examples. Collected bicycle GPS data therefore need to be processed adequately before they can be used for transportation planning. Results The article presents a multi-level approach to bicycle-specific data processing. The data processing model comprises several processing steps (data filtering, smoothing, trip segmentation, transport mode recognition, driving mode detection) to finally obtain a correct data set that contains only bicycle trips. The validation reveals a sound accuracy of the model in its current state (82–88%).
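
A skeleton of such a multi-level pipeline might look as follows; the stage names follow the abstract, while the dwell-time gap and the speed-based mode check are illustrative placeholders, not the authors' model:

```python
from typing import List, Tuple

Point = Tuple[float, float, float]   # (lat, lon, unix time in seconds)

def speeds_kmh(points: List[Point]) -> List[float]:
    """Rough segment speeds from consecutive fixes (flat-earth approximation)."""
    out = []
    for (la1, lo1, t1), (la2, lo2, t2) in zip(points, points[1:]):
        dist_km = 111.0 * ((la2 - la1)**2 + (lo2 - lo1)**2) ** 0.5
        out.append(dist_km / max(t2 - t1, 1e-6) * 3600.0)
    return out

def segment_trips(points: List[Point], gap_s: float = 300.0) -> List[List[Point]]:
    """Split a raw track into trips at long recording gaps (e.g. dwell times)."""
    trips, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if cur[2] - prev[2] > gap_s:
            trips.append(current)
            current = []
        current.append(cur)
    trips.append(current)
    return trips

def looks_like_bicycle(trip: List[Point]) -> bool:
    """Crude transport mode check: median speed in a plausible cycling band."""
    v = sorted(speeds_kmh(trip))
    return len(v) > 0 and 8.0 <= v[len(v) // 2] <= 35.0

def process(points: List[Point]) -> List[List[Point]]:
    """Filter a raw GPS track down to bicycle trips only."""
    return [t for t in segment_trips(points) if looks_like_bicycle(t)]
```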

