Method for calculating loads combination on a building using information measures

2018 ◽  
Vol 212 ◽  
pp. 01033
Author(s):  
Elena Chernetsova ◽  
Anatoly Shishkin

A method for calculating load combinations on a building is considered, using information measures of the connectivity of signals received from sensors of various physical natures united in a wireless monitoring network. The method includes selecting the most powerful information measure on an ensemble of process realizations with known a priori load data, using the criterion of connectedness of time series. Based on the selected information measure, the connectivity of the signals is then calculated for the ensemble of realizations of the random load process on the building, drawn from the time-series data bank formed by the wireless monitoring network. The volume of the data bank sufficient to make a correct decision about the combination of loads on the building with a predetermined error probability is determined using Wald's sequential probability ratio test. The method is easily algorithmized and can be used to develop an automated decision support system.
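For illustration, a minimal sketch of Wald's sequential probability ratio test, which the abstract invokes for sizing the monitoring data bank: a log-likelihood ratio is accumulated over incoming connectivity statistics until one of two error-bounded thresholds is crossed. The Gaussian observation model, the means mu0/mu1, and the error probabilities are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def wald_sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequential test deciding H1 (mean mu1) vs H0 (mean mu0) for Gaussian data.

    Returns the decision and the number of observations consumed, i.e. the
    data-bank volume needed for the prescribed error probabilities.
    """
    # Wald's decision thresholds for the log-likelihood ratio.
    a = np.log(beta / (1.0 - alpha))      # accept H0 below this
    b = np.log((1.0 - beta) / alpha)      # accept H1 above this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for a Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr <= a:
            return "H0", n
        if llr >= b:
            return "H1", n
    return "undecided", len(samples)

# Example: connectivity statistics drawn under a "combined load" hypothesis.
rng = np.random.default_rng(0)
decision, n_used = wald_sprt(rng.normal(0.8, 1.0, 500), mu0=0.0, mu1=0.8, sigma=1.0)
print(decision, n_used)
```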

Entropy ◽  
2018 ◽  
Vol 20 (1) ◽  
pp. 61 ◽  
Author(s):  
George Manis ◽  
Md Aktaruzzaman ◽  
Roberto Sassi

Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used with long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the kd-trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to deliver even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear picture of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the last two suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail the similarity check. The number of avoided comparisons is shown to be very large, resulting in a correspondingly large reduction of execution time, making these the fastest algorithms available today for the computation of Sample Entropy.
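For reference, a sketch of the straightforward, definition-based Sample Entropy implementation (maximum norm) that fast algorithms are typically benchmarked against. The parameter choices (m = 2, r = 0.2·SD) follow common convention and are assumptions, not the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Brute-force Sample Entropy from its definition, using the maximum norm.

    Counts template matches of length m and m+1 (excluding self-matches)
    and returns -log(A/B).
    """
    x = np.asarray(x, dtype=float)
    r = r * np.std(x)                      # common convention: r as a fraction of SD
    n = len(x)

    def count_matches(mm):
        # Use N - m templates of length mm, per the standard definition.
        templates = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev (maximum-norm) distance to all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Example on a synthetic RR-like series.
rng = np.random.default_rng(1)
print(sample_entropy(rng.normal(800, 50, 1000)))
```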


2020 ◽  
Vol 2 (1) ◽  
pp. 59-81
Author(s):  
D. A. Lovtsov ◽  

Introduction. The lack of a coherent systemology of law prevents the use of evidence-based formalization to solve the basic theoretical problems of law interpretation and enforcement. The development of an appropriate formal-theoretical apparatus is possible on the basis of a productive systemological concept. The justification of this concept rests on the study of its philosophical bases and fundamental principles (integrity, dynamic equilibrium, feedback, etc.) and on the use of logical and linguistic methods of a problem-oriented system approach. Theoretical Basis. Methods. Conceptual and logical modeling of legal ergasystems; systems analysis and refinement of the theoretical and applied basis of the technology of two-tier legal regulation; synthesis and modification of the author's own scientific results published in 2000–2019, with copyright in the author's scientific works and educational publications. Results. A contemporary conceptual variant of the combined “ICS” approach (“information, cybernetic and synergetic”) is presented as a general methodology for the analysis and optimization of legal ergasystems, characterized by the following: the substantiation of an appropriate three-part set of methodological research principles corresponding to the triple-aspect physical nature of complex legal systems studied as ergasystems; the clarification of the conceptual and logical model of the legal ergasystem taking into account the fundamental feedback principle; the justification of William R. Ashby's law of requisite variety and the corresponding conditions for realizing an effective technology of two-level (normative and individual) legal regulation; the definition of the basic concepts and methodological principles of a modern systemology of legal regulation; and the justification of the functional organization of the Invariant Rational Control Loop. Discussion and Conclusion. The developed conceptual object-oriented version of the combined “ICS” approach to the analysis and optimization of legal ergasystems is a methodological basis for the development of a working formal-theoretical apparatus of legal regulation systemology. This will make it possible to formalize solutions to the main theoretical problems of law interpretation and enforcement, as well as to develop and implement special information and legal technologies based on the concept of information and functional databases and knowledge bases. This will in turn increase the effectiveness of the system of legal regulation of public relations as an information-cybernetic system subject to the subjective organizing process of human activity and the objective synergetic processes of disorganization.


Author(s):  
Chiara Treghini ◽  
Alfonso Dell’Accio ◽  
Franco Fusi ◽  
Giovanni Romano

Chronic lung infections are among the most widespread human infections and are often associated with multidrug-resistant bacteria. In this framework, the European project “Light4Lungs” aims at synthesizing and testing an inhalable light source to control lung infections by antimicrobial photodynamic inactivation (aPDI), addressing endogenous photosensitizers only (porphyrins) in the representative case of S. aureus and P. aeruginosa. In the search for the best emission characteristics for the aerosolized light source, this work defines and calculates the photo-killing action spectrum for lung aPDI in the exemplary case of cystic fibrosis. This was obtained by applying semi-theoretical modelling with Monte Carlo simulations, following previously published methodology related to stomach infections and applied here to the infected trachea, bronchi, bronchioles and alveoli. In each of these regions, both the low and high oxygen concentration cases were considered to account for the variability of in vivo conditions, together with the presence of endogenous porphyrins and other relevant absorbers/diffusers inside the illuminated biofilm/mucous layer. Furthermore, an a priori method to obtain the “best illumination wavelengths” was defined, starting from maximizing porphyrin and light absorption at any depth. The obtained action spectrum peaks at 394 nm and mostly follows the porphyrin extinction coefficient. This is confirmed by the results for the best illumination wavelengths, which reinforces the robustness of our approach. These results can offer important indications for the synthesis of the aerosolized light source and the definition of its most effective emission spectrum, suggesting a flexible platform to be considered in further applications.
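As a rough illustration of the action-spectrum idea (not the paper's full Monte Carlo model), the sketch below weights a porphyrin-like extinction spectrum by the light fluence surviving to each depth of a thin biofilm/mucous layer and averages over depth; all optical values are invented placeholders, not the published data.

```python
import numpy as np

# Placeholder spectra: a fake Soret-like porphyrin band and a fake effective
# attenuation coefficient that decreases with wavelength.
wavelengths = np.arange(380, 700, 2)                         # nm
porphyrin_ext = np.exp(-((wavelengths - 405) / 15.0) ** 2)   # arbitrary units
mu_eff = 0.5 + 0.004 * (700 - wavelengths)                   # 1/mm, hypothetical

depths = np.linspace(0.0, 0.2, 100)                          # mm, thin layer

def action_spectrum(ext, mu, z):
    # Fluence decays roughly exponentially with depth in the layer.
    fluence = np.exp(-np.outer(mu, z))                       # (n_wavelengths, n_depths)
    # Photo-killing weight: porphyrin absorption times depth-averaged fluence.
    return ext * fluence.mean(axis=1)

spectrum = action_spectrum(porphyrin_ext, mu_eff, depths)
print("peak wavelength (nm):", wavelengths[np.argmax(spectrum)])
```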


2021 ◽  
Vol 13 (10) ◽  
pp. 2006
Author(s):  
Jun Hu ◽  
Qiaoqiao Ge ◽  
Jihong Liu ◽  
Wenyan Yang ◽  
Zhigui Du ◽  
...  

The Interferometric Synthetic Aperture Radar (InSAR) technique has been widely used to obtain the ground surface deformation of geohazards (e.g., mining subsidence and landslides). As one of the inherent errors in the interferometric phase, the digital elevation model (DEM) error is usually estimated with the help of an a priori deformation model. However, it is difficult to determine an a priori deformation model that fits the deformation time series well, leading to possible bias in the estimation of the DEM error and the deformation time series. In this paper, we propose a method that constructs an adaptive deformation model, based on a set of predefined functions and hypothesis testing theory, in the framework of the small baseline subset InSAR (SBAS-InSAR) method. Since it is difficult to fit a deformation time series over a long time span with only one function, the phase time series is first divided into several groups with overlapping regions. In each group, hypothesis testing is employed to adaptively select the optimal deformation model from the predefined functions. The parameters of the adaptive deformation models and the DEM error can be modeled with the phase time series and solved by a least squares method. Simulations and real data experiments in the Pingchuan mining area, Gansu Province, China, demonstrate that, compared to state-of-the-art deformation modeling strategies (e.g., the linear deformation model and the function group deformation model), the proposed method can significantly improve the accuracy of DEM error estimation and can benefit the estimation of the deformation time series.
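A minimal sketch of the kind of joint least-squares step described above, in which deformation model parameters and a baseline-scaled DEM error term are estimated together from unwrapped phase. The linear-plus-seasonal model, the geometry constants, and the synthetic data are assumptions for illustration, not the paper's adaptive model selection.

```python
import numpy as np

wavelength = 0.056            # m, C-band (assumed)
R = 850e3                     # slant range, m (assumed)
theta = np.deg2rad(34.0)      # incidence angle (assumed)

t = np.linspace(0.0, 2.0, 30)                               # years since first acquisition
bperp = np.random.default_rng(2).normal(0, 100, t.size)     # perpendicular baselines, m

# Design matrix columns: [linear rate, annual sine, annual cosine, DEM error].
phase_scale = -4 * np.pi / wavelength
A = np.column_stack([
    phase_scale * t,
    phase_scale * np.sin(2 * np.pi * t),
    phase_scale * np.cos(2 * np.pi * t),
    phase_scale * bperp / (R * np.sin(theta)),
])

# Synthetic "observed" phase: 10 mm/yr deformation, a 5 m DEM error, plus noise.
truth = np.array([0.010, 0.002, -0.001, 5.0])
phase = A @ truth + np.random.default_rng(3).normal(0, 0.3, t.size)

# Joint least-squares solution for model parameters and DEM error.
params, *_ = np.linalg.lstsq(A, phase, rcond=None)
print("estimated rate (m/yr):", params[0], " DEM error (m):", params[3])
```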


1944 ◽  
Vol 41 (6) ◽  
pp. 155
Author(s):  
Arthur Child
Keyword(s):  
A Priori ◽  

2009 ◽  
Vol 19 (02) ◽  
pp. 453-485 ◽  
Author(s):  
MINGHAO YANG ◽  
ZHIQIANG LIU ◽  
LI LI ◽  
YULIN XU ◽  
HONGJV LIU ◽  
...  

Some chaotic and a series of stochastic neural firings are multimodal. Stochastic multimodal firing patterns are of special importance because they indicate a possible utility of noise. A number of previous studies confused the dynamics of chaotic and stochastic multimodal firing patterns. The confusion resulted partly from inappropriate interpretations of estimates of nonlinear time series measures. With deliberately chosen examples, the present paper introduces strategies and methods for distinguishing stochastic firing patterns from chaotic ones. Aided by theoretical simulation, we show that stochastic multimodal firing patterns result from the effects of noise on neuronal systems near a bifurcation between two simpler attractors, such as a point attractor and a limit cycle attractor, or two limit cycle attractors. In contrast, multimodal chaotic firing trains are generated by the dynamics of a specific strange attractor. Three systems were carefully chosen to elucidate these two mechanisms. An experimental neural pacemaker model and the Chay mathematical model were used to show the stochastic dynamics, while the deterministic Wang model was used to show the deterministic dynamics. The usage and interpretation of nonlinear time series measures were systematically tested by applying them to firing trains generated by the three systems. We successfully identified the distinct differences between stochastic and chaotic multimodal firing patterns and showed the dynamics underlying two categories of stochastic firing patterns. The first category results from the effects of noise on a neuronal system near a Hopf bifurcation. The second category results from the effects of noise on a period-adding bifurcation between two limit cycles. Although direct application of nonlinear measures to the interspike interval series of these firing trains misleadingly implies chaotic properties, the definition of eigen events based on more appropriate judgments of the underlying dynamics leads to accurate identification of the stochastic properties.
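As a small illustration of the kind of check involved (not the paper's specific measures), the sketch below compares a simple nonlinear statistic computed on an interspike-interval series against its distribution over shuffled surrogates; a value indistinguishable from the surrogates argues for a stochastic rather than chaotic origin. The spike train and the statistic are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
spike_times = np.cumsum(rng.exponential(0.05, 2000))   # fake spike train, seconds
isi = np.diff(spike_times)                             # interspike-interval series

def nonlinear_stat(x):
    # A basic third-order, lag-1 statistic; zero in expectation for shuffled data.
    x = (x - x.mean()) / x.std()
    return np.mean(x[:-1] ** 2 * x[1:])

original = nonlinear_stat(isi)
# Shuffling destroys temporal structure while preserving the ISI distribution.
surrogates = np.array([nonlinear_stat(rng.permutation(isi)) for _ in range(200)])
z = (original - surrogates.mean()) / surrogates.std()
print(f"z-score vs shuffled surrogates: {z:.2f}")
```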


2014 ◽  
Vol 23 (2) ◽  
pp. 213-229 ◽  
Author(s):  
Cangqi Zhou ◽  
Qianchuan Zhao

Mining time series data is of great significance in various areas. To efficiently find representative patterns in these data, this article focuses on the definition of a valid dissimilarity measure and on the acceleration of partitioning clustering, a common group of techniques used to discover typical shapes of time series. A dissimilarity measure is a crucial component of clustering, and some applications require it to be invariant to specific transformations. The rationale for using the angle between two time series to define a dissimilarity is analyzed. Moreover, our proposed measure satisfies the triangle inequality under specific restrictions. This property can be employed to accelerate clustering. An integrated algorithm is proposed. The experiments show that the angle-based dissimilarity captures the essence of time series patterns that are invariant to amplitude scaling. In addition, the accelerated algorithm outperforms the standard one because redundant computations are pruned. Our approach has been applied to discover typical patterns of information diffusion in an online social network. The analyses revealed the formation mechanisms of different patterns.
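A minimal sketch of the angle-based dissimilarity and of why it is invariant to amplitude scaling; the triangle-inequality pruning mentioned in the comment is a generic acceleration idea for partitioning clustering, not the paper's exact algorithm.

```python
import numpy as np

def angle_dissimilarity(x, y):
    """Angle between two time series viewed as vectors (radians).

    Invariant to amplitude scaling: d(a*x, y) == d(x, y) for any a > 0.
    Since the angle satisfies the triangle inequality, bounds such as
    d(x, c_new) >= d(x, c_old) - d(c_old, c_new) can be used to skip
    distance computations when reassigning points to cluster centers.
    """
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))

x = np.array([1.0, 2.0, 3.0, 2.0])
print(angle_dissimilarity(x, 5.0 * x))        # 0.0: amplitude scaling is ignored
print(angle_dissimilarity(x, x[::-1]))        # > 0 for a genuinely different shape
```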


Author(s):  
D. Egorov

Adam Smith defined economics as “the science of the nature and causes of the wealth of nations” (implicitly appealing, through “wealth”, to “value”). Neo-classical theory views it as a science “which studies human behavior as a relationship between ends and scarce means which have alternative uses”. The main reason the neo-classical theory (which serves as the now prevailing economic mainstream) turns into a tool for manipulation of the public consciousness is the lack of a measure (the elimination of “value”). Even though the neo-classical definition of the subject of economics does not contain an explicit rejection of objective measures, the reference to “human behavior” inevitably implies methodological subjectivism. This makes it necessary to adopt a principle of equilibrium: if one cannot objectively (using a solid measurement) compare different states of the system, one can only postulate the existence of an equilibrium point toward which the system tends. The neo-classical postulate of equilibrium cannot explain non-equilibrium situations. As a result, the neo-classical theory fails to match microeconomics to macroeconomics. Moreover, the denial of the category “value” serves as a theoretical basis and an ideological prerequisite of the now flourishing manipulative financial technologies. The author proposes the following two principal definitions: (1) economics is a science that studies the economic system, i.e. a system that creates and recombines value; (2) value is a measure of the cost of an object. In our opinion, value is an informational measure of cost. It should be added that disclosure of the nature of this category is not an obligatory prerequisite for its introduction: methodologically, it is quite correct to postulate it a priori. The author concludes that the proposed definitions open the way not only to solving the problem of measurement in economics, but also to addressing the issue of harmonizing macro- and microeconomics.


Author(s):  
DAVID GARCIA ◽  
ANTONIO GONZALEZ ◽  
RAUL PEREZ

In the system identification process, a predetermined set of features is often used. However, in many cases it is difficult to know a priori whether the selected features are really the most appropriate ones. This is why feature construction techniques have become very interesting in many applications. The current proposal therefore introduces the use of these techniques in order to improve the description of fuzzy rule-based systems. In particular, the idea is to include feature construction in a genetic learning algorithm. The construction of attributes in this study is restricted to the inclusion of functions defined on the initial attributes of the system. Since the number of functions and the number of attributes can be very large, a filter model based on the use of information measures is introduced. In this way, the genetic algorithm only needs to explore the particular new features that may be of greater interest for the final identification of the system. To manage the knowledge provided by the new function-based attributes, we propose a new rule model that extends a basic fuzzy rule-based learning model. Finally, we present the experimental study associated with this work.
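A minimal sketch of the filter idea: candidate constructed features (functions of the original attributes) are scored with an information measure against the target, so that only the highest-scoring ones need to be explored further. The histogram-based mutual information estimate and the candidate functions are illustrative assumptions, not the paper's filter.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram-based mutual information estimate between two variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(5)
x1, x2 = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
target = x1 * x2 + 0.1 * rng.normal(size=500)

# Candidate constructed features: functions of the initial attributes.
candidates = {"x1": x1, "x2": x2, "x1*x2": x1 * x2, "x1+x2": x1 + x2, "x1**2": x1 ** 2}
scores = {name: mutual_information(f, target) for name, f in candidates.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))   # x1*x2 should rank first
```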

