Complexity Measures
Recently Published Documents

Total documents: 694 (five years: 207)
H-index: 41 (five years: 4)

Author(s):  
Wolfgang Hornfeck

Abstract: We present an illustrative analysis of the complexity of a crystal structure, based on the application of Shannon’s entropy formula in the form of Krivovichev’s complexity measures, extended by the contributions of distinct discrete probability distributions derived from the atomic numbers and from the Wyckoff multiplicities and arities of the atoms and sites constituting the crystal structure, respectively. The results of a full crystallographic complexity partition analysis for the intermetallic phase Mo3Al2C, a compound of intermediate structural complexity, are presented, with all calculations carried out in detail. In addition, a partial analysis is discussed for the crystal structures of α- and β-quartz.
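As a worked illustration of the entropy calculation underlying these measures, the sketch below computes Krivovichev-style information content per atom and per unit cell from a set of Wyckoff site multiplicities. The multiplicities used are hypothetical, and the extended atomic-number and arity partitions described in the paper are not reproduced here.

```python
import math

def shannon_complexity(multiplicities):
    """Shannon-entropy-based structural complexity from Wyckoff site
    multiplicities: information content per atom (bit/atom) and per
    unit cell (bit/cell)."""
    n_atoms = sum(multiplicities)
    probs = [m / n_atoms for m in multiplicities]
    # I_G = -sum_i p_i * log2(p_i), with p_i = m_i / N
    per_atom = -sum(p * math.log2(p) for p in probs)
    return per_atom, n_atoms * per_atom

# hypothetical structure with three crystallographic orbits of multiplicity 8, 4 and 12
i_g, i_g_total = shannon_complexity([8, 4, 12])
print(f"I_G = {i_g:.3f} bit/atom, I_G,total = {i_g_total:.2f} bit/cell")
```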


Author(s):  
Slavcho Shtrakov

In this paper, we study two classes of complexity measures induced by new data structures (abstract reduction systems) for representing [Formula: see text]-valued functions (operations), namely subfunction and minor reductions. When some variables in a function are assigned constant values, the resulting functions are called subfunctions; when some variables are identified, the resulting functions are called minors. The number of distinct objects obtained under these reductions of a function [Formula: see text] is a well-defined measure of complexity, denoted by [Formula: see text] and [Formula: see text], respectively. We examine the maxima of these complexities and construct functions that attain these upper bounds.
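A minimal sketch of the subfunction-counting idea, restricted to Boolean (2-valued) functions and to assignments of a single variable. The paper's measure ranges over more general reductions, so this only illustrates the counting, not the authors' exact definition.

```python
from itertools import product

def subfunction_complexity(f, n):
    """Count distinct subfunctions of an n-variable Boolean function f,
    obtained by fixing one variable to a constant (a simplified stand-in
    for the general subfunction reduction).

    f is given as a truth table: a dict mapping n-tuples of 0/1 to 0/1.
    """
    seen = set()
    for i in range(n):                  # variable to fix
        for c in (0, 1):                # constant assigned to it
            # truth table of the resulting (n-1)-variable subfunction
            sub = tuple(
                f[x[:i] + (c,) + x[i:]]
                for x in product((0, 1), repeat=n - 1)
            )
            seen.add(sub)
    return len(seen)

# example: majority of three variables; fixing any variable yields OR or AND,
# so there are exactly two distinct single-variable subfunctions
n = 3
maj = {x: int(sum(x) >= 2) for x in product((0, 1), repeat=n)}
print(subfunction_complexity(maj, n))   # -> 2
```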


2022 ◽  
Vol 70 (1) ◽  
pp. 53-66
Author(s):  
Julian Grothoff ◽  
Nicolas Camargo Torres ◽  
Tobias Kleinert

Abstract: Machine learning, and particularly reinforcement learning methods, may be applied to control tasks ranging from single control loops to the operation of whole production plants. However, their utilization in industrial contexts lacks understandability and requires suitable levels of operability and maintainability. In order to assess different application scenarios, a simple measure of their complexity is proposed and evaluated on four examples in a simulated palette transport system of a cold rolling mill. The measure is based on the size of the controller input and output space as determined by different granularity levels in a hierarchical process control model. The impact of these decomposition strategies on system characteristics, especially operability and maintainability, is discussed, assuming that solvability and a suitable quality of the reinforcement learning solution are given.
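The abstract only characterises the measure as being based on the size of the controller's input and output space, so the sketch below shows one plausible way to score decomposition levels by those sizes; the cardinalities and the multiplicative combination are assumptions, not the paper's definition.

```python
from math import prod

def interface_complexity(input_cardinalities, output_cardinalities):
    """Rough complexity score for a discrete RL control interface:
    size of the observation space times size of the action space."""
    return prod(input_cardinalities) * prod(output_cardinalities)

# hypothetical granularity levels of a palette transport controller:
# coarse = few aggregated sensor states and commands, fine = per-actuator control
coarse = interface_complexity([4, 3], [5])               # -> 60
fine = interface_complexity([10, 10, 8, 6], [4, 4, 4])   # -> 307200
print(coarse, fine)
```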


2021 ◽  
Vol 12 (1) ◽  
pp. 197
Author(s):  
Chunxia Zhang ◽  
Xiaoli Wei ◽  
Sang-Woon Kim

This paper empirically evaluates two kinds of features, extracted with traditional statistical methods and with convolutional neural networks (CNNs), respectively, in order to improve the performance of seismic patch image classification. In the latter case, feature vectors, named “CNN-features”, were extracted from a trained CNN model and were then used to train existing classifiers, such as support vector machines. To train the CNN model, a transfer learning technique was applied, using synthetic seismic patch data in the source domain and real-world patch data in the target domain. The experimental results show that CNN-features lead to some improvement in classification performance. An analysis of data complexity measures shows that the CNN-features have the strongest discriminative capabilities. Furthermore, the transfer learning technique alleviates the problems of long processing times and the lack of learning data.
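A rough sketch of the "CNN-features plus classical classifier" wiring described above: cut the classification head off a convolutional backbone and train an SVM on the pooled features. The backbone choice, input size, and data are placeholders, and no pretrained or transfer-learned weights are loaded, so this only illustrates the pipeline, not the paper's model.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

# feature extractor: a ResNet-18 with its final classification layer removed;
# in practice the transfer-learned weights would be loaded here (weights=None
# keeps the sketch self-contained and offline)
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

def cnn_features(batch):
    """batch: float tensor of shape (N, 3, 224, 224) -> (N, 512) feature array."""
    with torch.no_grad():
        return backbone(batch).numpy()

# hypothetical seismic patches resized to 224x224 "RGB", with binary labels
X_train = torch.rand(32, 3, 224, 224)
y_train = np.random.randint(0, 2, size=32)

clf = SVC(kernel="rbf").fit(cnn_features(X_train), y_train)
print(clf.predict(cnn_features(torch.rand(4, 3, 224, 224))))
```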


2021 ◽  
Vol 183 (45) ◽  
pp. 8-14
Author(s):  
Dilshan I. De Silva ◽  
Saluka R. Kodituwakku ◽  
Amalka J. Pinidiyaarachchi

Author(s):  
Jacopo Vanoli ◽  
Consuelo Rubina Nava ◽  
Chiara Airoldi ◽  
Andrealuna Ucciero ◽  
Virginio Salvi ◽  
...  

While state sequence analysis (SSA) has long been used in the social sciences, its use in pharmacoepidemiology is still in its infancy. Indeed, the technique is relatively easy to use, and its intrinsically visual nature may help investigators untangle the latent information within prescription data, facilitating the identification of specific patterns and of possible inappropriate use of medications. In this paper, we provide an educational primer on the most important concepts and methods of SSA, including the measurement of dissimilarities between sequences, the application of clustering methods to identify sequence patterns, the use of complexity measures for sequence patterns, the graphical visualization of sequences, and the use of SSA in predictive models. As a worked example, we present an application of SSA to opioid prescription patterns in patients with non-cancer pain, using real-world data from Italy. We show how SSA allows the identification of patterns in these prescription data that might not be evident using standard statistical approaches, and how these patterns are associated with future discontinuation of opioid therapy.
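Two of the SSA building blocks mentioned above, pairwise dissimilarity and clustering, can be sketched as follows. Plain Hamming distance stands in for the optimal-matching distances typically used in SSA, and the prescription-state sequences are invented for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# states: 'N' = no opioid, 'W' = weak opioid, 'S' = strong opioid (one letter per month)
sequences = [
    "NNWWWSSSSSSS",
    "NNWWWWSSSSSS",
    "NNNNNNNWWWNN",
    "NNNNNNNWWNNN",
]

def hamming(a, b):
    """Fraction of positions at which two equal-length state sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

n = len(sequences)
d = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d[i, j] = d[j, i] = hamming(sequences[i], sequences[j])

# hierarchical clustering of the dissimilarity matrix into two sequence patterns
clusters = fcluster(linkage(squareform(d), method="average"), t=2, criterion="maxclust")
print(clusters)   # the escalating-use pair and the sporadic-use pair separate
```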


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shreeya Jugnandan ◽  
Gizelle D. Willows

Purpose: The purpose of this paper is to investigate whether companies listed on the Johannesburg Stock Exchange use impression management techniques to obscure financial performance across the corporate reporting suite.

Design/methodology/approach: Mixed-effect linear regression models were used to examine whether there is a relationship between the financial performance of a company and the length or complexity of the reports produced.

Findings: Consistent with trends observed internationally, companies with lower financial performance tend to present lengthier disclosures throughout the reporting complement. However, there is limited evidence of a definitive relationship between report complexity and performance. Corporate reports have maintained a consistent level of complexity and are not easily readable.

Social implications: This paper is unique as it simultaneously considers multiple corporate reports, including the annual financial statements, integrated reports and market announcements. The paper contributes to the limited body of literature on impression management from emerging economies.

Originality/value: A comparison of the complexity measures to the average education level of South Africans indicates that most corporate reports are not readable by the layman investor. Thus, despite there being no definitive relationship between complexity and performance, there is impetus to simplify corporate reporting.
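As an illustration of the kind of readability-based complexity measure such studies rely on, the sketch below computes a Gunning Fog score with a deliberately crude syllable heuristic; this is not necessarily the measure used in the paper.

```python
import re

def gunning_fog(text):
    """Gunning Fog readability index: 0.4 * (words per sentence +
    100 * share of 'complex' words). Higher scores mean harder reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # "complex" words: three or more vowel groups, a rough proxy for 3+ syllables
    complex_words = [w for w in words
                     if len(re.findall(r"[aeiouy]+", w.lower())) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))

sample = ("The consolidated statement of comprehensive income reflects a "
          "deterioration in operational profitability. Revenue declined.")
print(round(gunning_fog(sample), 1))
```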


Author(s):  
Othon Michail ◽  
George Skretas ◽  
Paul G. Spirakis

Abstract: We study here systems of distributed entities that can actively modify their communication network. This gives rise to distributed algorithms that, apart from communication, can also exploit network reconfiguration to carry out a given task. Also, the distributed task itself may now require a global reconfiguration from a given initial network $G_s$ to a target network $G_f$ from a desirable family of networks. To formally capture costs associated with creating and maintaining connections, we define three edge-complexity measures: the total edge activations, the maximum activated edges per round, and the maximum activated degree of a node. We give (poly)log(n) time algorithms for the task of transforming any $G_s$ into a $G_f$ of diameter (poly)log(n), while minimizing the edge-complexity. Our main lower bound shows that $\Omega(n)$ total edge activations and $\Omega(n/\log n)$ activations per round must be paid by any algorithm (even centralized) that achieves an optimum of $\Theta(\log n)$ rounds. We give three distributed algorithms for our general task. The first runs in $O(\log n)$ time, with at most $2n$ active edges per round, a total of $O(n\log n)$ edge activations, a maximum degree $n-1$, and a target network of diameter 2. The second achieves bounded degree by paying an additional logarithmic factor in time and in total edge activations. It gives a target network of diameter $O(\log n)$ and uses $O(n)$ active edges per round. Our third algorithm shows that if we slightly increase the maximum degree to polylog(n) then we can achieve $o(\log^2 n)$ running time.
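The three edge-complexity measures defined above can be computed directly from a reconfiguration schedule. The sketch below does so for a toy schedule, interpreting "maximum activated degree" as the largest number of activations incident to any single node over the whole schedule, which is an assumption about the paper's definition.

```python
from collections import defaultdict

def edge_complexity(schedule):
    """schedule: list of rounds, each a set of undirected edges (u, v) activated
    in that round. Returns (total edge activations, max activations per round,
    max activated degree of any node over the whole schedule)."""
    total = sum(len(r) for r in schedule)
    per_round = max((len(r) for r in schedule), default=0)
    degree = defaultdict(int)
    for r in schedule:
        for u, v in r:
            degree[u] += 1
            degree[v] += 1
    max_degree = max(degree.values(), default=0)
    return total, per_round, max_degree

# toy schedule: first connect node 0 to everyone (a star), then add two shortcuts
schedule = [{(0, i) for i in range(1, 6)}, {(1, 3), (2, 4)}]
print(edge_complexity(schedule))   # -> (7, 5, 5)
```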


2021 ◽  
Author(s):  
Joaquin Gonzalez ◽  
Diego M. Mateos ◽  
Matias Cavelli ◽  
Alejandra Mondino ◽  
Claudia Pascovich ◽  
...  

Recently, the sleep-wake states have been analysed using novel complexity measures, complementing the classical analysis of EEGs by frequency bands. This new approach consistently shows a decrease in the EEG's complexity during slow-wave sleep, yet it is unclear how cortical oscillations shape these complexity variations. In this work, we analyse how the frequency content of brain signals affects complexity estimates in freely moving rats. We find that the low-frequency spectrum, including the Delta, Theta, and Sigma frequency bands, drives the complexity changes across the sleep-wake states. This happens because low-frequency oscillations emerge from neuronal population patterns, as we show by recovering the complexity variations during the sleep-wake cycle from micro-, meso-, and macroscopic recordings. Moreover, we find that the lower frequencies reveal synchronisation patterns across the neocortex, such as a sensory-motor decoupling during REM sleep. Overall, our work shows that the EEG's low frequencies are critical in shaping the complexity of the sleep-wake states across cortical scales.
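A rough sketch of this kind of analysis: band-pass a signal to keep only the low frequencies and compare a Lempel-Ziv-style complexity of the filtered and raw versions. The sampling rate, cut-offs, binarisation rule, and the toy signal itself are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lz_complexity(bits):
    """Number of distinct phrases in an LZ78-style incremental parsing of a
    0/1 string, a common proxy for Lempel-Ziv complexity."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases)

def binarise(x):
    """Threshold a signal at its median to obtain a 0/1 string."""
    return "".join("1" if v > np.median(x) else "0" for v in x)

fs = 250                                  # Hz, assumed sampling rate
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
# toy "EEG": slow 2 Hz oscillation plus broadband noise
signal = np.sin(2 * np.pi * 2 * t) + 0.5 * rng.standard_normal(t.size)

# keep roughly the delta/theta/sigma range (0.5-16 Hz)
b, a = butter(4, [0.5, 16], btype="band", fs=fs)
low = filtfilt(b, a, signal)

print(lz_complexity(binarise(signal)), lz_complexity(binarise(low)))
```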

