Information and complexity measures in molecular reactivity studies

2014 ◽  
Vol 16 (28) ◽  
pp. 14928-14946 ◽  
Author(s):  
Meressa A. Welearegay ◽  
Robert Balawender ◽  
Andrzej Holas

The usefulness of the information and complexity measure in molecular reactivity studies.

2013 ◽  
Vol 411-414 ◽  
pp. 1994-1997
Author(s):  
Yan Li ◽  
Wen Ju Zhao ◽  
Zhen Hua Zhou

This paper defines the fully connected map and the contact surface, proposes a new map complexity measure, and compares it with measurement methods based on the Hamming distance and the relative Hamming distance. We further investigate the relationship between the complexity measure and map connectivity. The complexity measures based on the Hamming distance and the contact surface are applicable to fully connected maps, while the new measure reflects the difficulty faced by a pathfinding algorithm more accurately, especially at higher complexity.
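For concreteness, here is a minimal sketch of the Hamming-distance baselines the paper compares against: the Hamming distance counts the cells in which two binary grid maps differ, and the relative Hamming distance normalizes by map size. The grid representation and function names are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def hamming_distance(map_a: np.ndarray, map_b: np.ndarray) -> int:
    """Number of cells in which two binary occupancy grids differ."""
    assert map_a.shape == map_b.shape
    return int(np.sum(map_a != map_b))

def relative_hamming_distance(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Hamming distance normalized by the total number of cells."""
    return hamming_distance(map_a, map_b) / map_a.size

# Example: two random 8x8 occupancy grids (1 = obstacle, 0 = free)
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(8, 8))
b = rng.integers(0, 2, size=(8, 8))
print(hamming_distance(a, b), relative_hamming_distance(a, b))
```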


2020 ◽  
Author(s):  
Qiaona Yu

Language complexity reflects the ability to use a wide and varied range of sophisticated structures and vocabulary. Although different languages compose complexity differently, complexity measures such as the T-unit have typically been based on clause subordination, which may underrepresent complexity and threaten the validity of studies. This study argues that an organic complexity measure should avoid the assumption of clause subordination and instead consider the typological features of the target language. It therefore proposes the TC-unit, in recognition of the topic chain as the underlying unit of Chinese complexity. It further validates TC-unit-based measures by investigating how accurately they predict proficiency group membership. Discriminant analyses of L1 and L2 Chinese speakers' spoken (N = 115) and written (N = 116) output, elicited from a purpose-designed timed online test, revealed that TC-unit-based measures classified proficiency group membership with high efficiency (61.2–75.7 per cent). Mean length of terminable TC-unit proved the most effective indicator of spoken Chinese syntactic complexity, while mean length of terminable TC-unit and single TC-units per terminable TC-unit in combination proved the most effective for written Chinese syntactic complexity.
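As an illustration of the validation step, a discriminant analysis predicting proficiency group membership from TC-unit-based measures might look like the following sketch; the feature names and the randomly generated data are placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per speaker, with TC-unit-based
# measures such as mean length of terminable TC-unit and single
# TC-units per terminable TC-unit (placeholder values).
rng = np.random.default_rng(1)
X = rng.normal(size=(115, 2))      # placeholder TC-unit measures
y = rng.integers(0, 3, size=115)   # placeholder proficiency groups

lda = LinearDiscriminantAnalysis()
# Classification accuracy estimated by cross-validation, analogous to
# the percentage of correctly classified speakers reported in the study.
print(cross_val_score(lda, X, y, cv=5).mean())
```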


2007 ◽  
Vol 20 (3) ◽  
pp. 295-308
Author(s):  
Radomir Stankovic ◽  
Jaakko Astola

It has recently been shown in [1] that elementary mathematical functions (such as trigonometric, logarithmic, square root, Gaussian, and sigmoid functions) are compactly represented by arithmetic transform expressions and the related Binary Moment Diagrams (BMDs). The complexity of the representations is estimated through the number of non-zero coefficients in the arithmetic expressions and the number of nodes in the BMDs. In this paper, we show that further optimization can be achieved when the method in [1] is combined with Fixed-Polarity Arithmetic expressions (FPARs). In addition, besides the complexity measures used in [1], we also compare the number of bits and 1-bits required to represent the arithmetic transform coefficients in zero-polarity and optimal-polarity arithmetic expressions; this complexity measure is relevant for the alternative implementations of elementary functions suggested in [1]. Experimental results confirm that exploiting FPARs can considerably reduce the complexity measures considered.
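For intuition, the zero-polarity arithmetic transform of an n-variable function can be computed with a fast butterfly built from the basic matrix [[1, 0], [-1, 1]]. The sketch below is a minimal illustration (the example function and variable ordering are assumptions); fixed-polarity variants, which apply the complemented basic matrix to selected variables, are omitted.

```python
import numpy as np

def arithmetic_transform(truth_vector: np.ndarray) -> np.ndarray:
    """Zero-polarity arithmetic (integer-valued) spectrum of a function
    given by its truth vector of length 2**n, computed as a fast
    butterfly over the basic matrix [[1, 0], [-1, 1]]."""
    f = truth_vector.astype(np.int64).copy()
    n = f.size.bit_length() - 1
    assert f.size == 1 << n
    for i in range(n):
        step = 1 << i
        for j in range(0, f.size, step << 1):
            for k in range(j, j + step):
                f[k + step] -= f[k]   # butterfly: (a, b) -> (a, b - a)
    return f

# Example: 2-input AND, truth vector [0,0,0,1] -> spectrum [0,0,0,1],
# i.e., AND(x1,x2) = x1*x2; a single non-zero coefficient.
print(arithmetic_transform(np.array([0, 0, 0, 1])))
```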


2006 ◽  
Vol 18 (12) ◽  
pp. 2994-3008 ◽  
Author(s):  
Kei Uchizawa ◽  
Rodney Douglas ◽  
Wolfgang Maass

Circuits composed of threshold gates (McCulloch-Pitts neurons, or perceptrons) are simplified models of neural circuits with the advantage that they are theoretically more tractable than their biological counterparts. However, when such threshold circuits are designed to perform a specific computational task, they usually differ in one important respect from computations in the brain: they require very high activity. On average every second threshold gate fires (sets a 1 as output) during a computation. By contrast, the activity of neurons in the brain is much sparser, with only about 1% of neurons firing. This mismatch between threshold and neuronal circuits is due to the particular complexity measures (circuit size and circuit depth) that have been minimized in previous threshold circuit constructions. In this letter, we investigate a new complexity measure for threshold circuits, energy complexity, whose minimization yields computations with sparse activity. We prove that all computations by threshold circuits of polynomial size with entropy O(log n) can be restructured so that their energy complexity is reduced to a level near the entropy of circuit states. This entropy of circuit states is a novel circuit complexity measure, which is of interest not only in the context of threshold circuits but for circuit complexity in general. As an example of how this measure can be applied, we show that any polynomial size threshold circuit with entropy O(log n) can be simulated by a polynomial size threshold circuit of depth 3. Our results demonstrate that the structure of circuits that result from a minimization of their energy complexity is quite different from the structure that results from a minimization of previously considered complexity measures, and potentially closer to the structure of neural circuits in the nervous system. In particular, different pathways are activated in these circuits for different classes of inputs. This letter shows that such circuits with sparse activity have a surprisingly large computational power.
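As a minimal illustration of the definitions (the tiny gate model and names below are assumptions, not the authors' construction), energy complexity counts how many gates fire, i.e., output a 1, during a computation:

```python
import numpy as np

def threshold_gate(weights, threshold, inputs) -> int:
    """McCulloch-Pitts neuron: fires (outputs 1) iff the weighted sum
    of its inputs reaches the threshold."""
    return int(np.dot(weights, inputs) >= threshold)

def circuit_energy(gate_outputs) -> int:
    """Energy of one computation: the number of gates that fire.
    The formal energy complexity maximizes this count over all inputs."""
    return sum(gate_outputs)

# Example: a two-gate 'circuit' on input x = (1, 0, 1)
x = np.array([1, 0, 1])
g1 = threshold_gate(np.array([1, 1, 1]), 2, x)   # fires: sum = 2 >= 2
g2 = threshold_gate(np.array([1, -1, 1]), 3, x)  # silent: sum = 2 < 3
print(circuit_energy([g1, g2]))  # energy 1: sparse activity
```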


2011 ◽  
Vol 14 (2) ◽  
pp. 443-463 ◽  
Author(s):  
Saket Pande ◽  
Luis A. Bastidas ◽  
Sandjai Bhulai ◽  
Mac McKee

We provide analytical bounds on convergence rates for a class of hydrologic models and consequently derive a complexity measure based on the Vapnik–Chervonenkis (VC) generalization theory. The class of hydrologic models is a spatially explicit interconnected set of linear reservoirs with the aim of representing globally nonlinear hydrologic behavior by locally linear models. Here, by convergence rate, we mean convergence of the empirical risk to the expected risk. The derived measure of complexity measures a model's propensity to overfit data. We explore how data finiteness can affect model selection for this class of hydrologic model and provide theoretical results on how model performance on a finite sample converges to its expected performance as data size approaches infinity. These bounds can then be used for model selection, as the bounds provide a tradeoff between model complexity and model performance on finite data. The convergence bounds for the considered hydrologic models depend on the magnitude of their parameters, which are the recession parameters of constituting linear reservoirs. Further, the complexity of hydrologic models not only varies with the magnitude of their parameters but also depends on the network structure of the models (in terms of the spatial heterogeneity of parameters and the nature of hydrologic connectivity).
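As a simple illustration of the model class (a serial cascade is assumed here for brevity; the paper considers spatially explicit interconnected networks), each linear reservoir releases outflow proportional to its storage through a recession parameter:

```python
import numpy as np

def simulate_linear_reservoirs(inflow, k, dt=1.0):
    """Serial cascade of linear reservoirs: each reservoir releases
    outflow q = k_i * storage and feeds the next one downstream.
    `k` holds the recession parameters that enter the complexity bound."""
    n_steps, n_res = len(inflow), len(k)
    storage = np.zeros(n_res)
    outflow = np.zeros(n_steps)
    for t in range(n_steps):
        upstream = inflow[t]
        for i in range(n_res):
            storage[i] += dt * upstream
            q = k[i] * storage[i]          # locally linear release
            storage[i] -= dt * q
            upstream = q                   # hydrologic connectivity
        outflow[t] = upstream
    return outflow

# Example: a pulse of rain routed through three reservoirs
print(simulate_linear_reservoirs(np.array([1.0, 0, 0, 0, 0]), k=[0.5, 0.3, 0.2]))
```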


2011 ◽  
Vol 33 (2) ◽  
pp. 157 ◽  
Author(s):  
Tomomi Sakuragi

It is important for teachers and researchers to be able to assess L2 learners' proficiency through their performance. Measures of complexity, accuracy, and fluency (CAF) have been used for over 30 years in L2 research to analyze language performance (Ellis & Barkhuizen, 2005; Housen & Kuiken, 2009; Wolfe-Quintero, Inagaki, & Kim, 1998). However, unanswered questions about CAF measures remain. For example, the length measure (number of words per syntactic unit) needs to be investigated because it has been used inconsistently: some researchers have used it as a syntactic complexity measure, while others have used it as a fluency measure. Koizumi (2005) pointed out this discrepancy as a serious problem because the interpretation of a single measure varies depending on researchers' orientations. In addition, factor analyses across several studies (e.g., Sheppard, 2004; Tavakoli & Skehan, 2005) showed that the measures of fluency could be divided into two types: speed and disfluency. These discrepancies in the construct are key issues pertaining to the measures and need to be investigated. Moreover, although CAF measures have often been investigated in Indo-European languages, they have not been sufficiently investigated in other languages; thus, it is important to determine whether CAF measures can be applied to the Japanese language in the same way.

Accordingly, this study examined the construct validity of CAF measures in Japanese as a second language (JSL) from three perspectives. Do the measures represent distinct factors for the three CAF dimensions, as expected? Does the length measure (the number of words per syntactic unit) share its construct with the syntactic complexity measures rather than with the fluency measures? Can the speed measure (the number of words per minute) and the disfluency measure (the number of disfluency markers per minute) be explained as one construct? To investigate these research questions, 10 general measures were calculated from the narrative production of 113 university-level students learning JSL, whose Japanese proficiency ranged from intermediate to advanced on the ACTFL–OPI (American Council on the Teaching of Foreign Languages–Oral Proficiency Interview). A factor analysis was then conducted to investigate the construct validity of the CAF measures: the initial solution was extracted using the principal factor method, followed by Promax rotation, and a three-factor solution was adopted using the Kaiser criterion of eigenvalues greater than one.

The results indicated the following. The validity of CAF measures was partially demonstrated for the syntactic complexity measures (number of clauses per Analysis of Speech unit [AS-unit], and number of subordinate clauses per AS-unit) and the accuracy measures (percentage of error-free AS-units, number of errors per AS-unit, and number of errors per clause), but not for the lexical complexity measures (number of word types per 100 words, and the Guiraud index) or the fluency measures (number of words per minute, and number of disfluency markers per minute). The length measure indicates syntactic complexity, given its high loading (.83) on the same factor as the general syntactic complexity measures. The speed measure and the disfluency measure did not load on the same factor as a single fluency construct, which supports the findings of previous studies (Sheppard, 2004; Tavakoli & Skehan, 2005). These results suggest that further research is needed to establish the validity of the fluency measure and of the lexical complexity measure, especially for Japanese, which has an agglutinating morphology.

Japanese abstract: In recent years, many studies have used measures that analyze language performance in terms of complexity, accuracy, and fluency (hereafter, CAF measures), but researchers have employed their own measurement methods and the validity of these measures remains unclear. To examine the construct validity of CAF measures across 10 measurement methods, this study conducted an exploratory factor analysis of speech data from 113 learners of Japanese. The results demonstrated validity through common factors for syntactic complexity and accuracy, but fluency could not be shown to represent a single construct across the two types of measurement (speed and disfluency). Furthermore, whereas previous studies were divided over the interpretation of the number of words per syntactic unit (length), the analysis showed that it indicates syntactic complexity rather than fluency. Validation of the fluency measures and of lexical measures for Japanese remains a task for future research.
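For reference, the two lexical complexity measures named above are straightforward to compute; a minimal sketch (tokenization is assumed to be given):

```python
import math

def lexical_measures(tokens: list[str]) -> dict:
    """Two lexical complexity measures named in the study:
    word types per 100 words, and the Guiraud index (types / sqrt(tokens))."""
    types = len(set(tokens))
    n = len(tokens)
    return {
        "types_per_100_words": 100 * types / n,
        "guiraud_index": types / math.sqrt(n),
    }

print(lexical_measures("the cat saw the other cat".split()))
```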


2020 ◽  
Vol 6 (1) ◽  
pp. 148-163 ◽  
Author(s):  
Stefan Buijsman ◽  
Markus Pantsar

An important paradigm in modeling the complexity of mathematical tasks relies on computational complexity theory, in which complexity is measured through the resources (time, space) taken by a Turing machine to carry out the task. These complexity measures, however, are asymptotic and as such a potentially problematic fit when descriptively modeling mathematical tasks that involve small inputs. In this paper, we argue that empirical data on human arithmetical cognition imply that a more fine-grained complexity measure is needed to accurately study mental arithmetic tasks. We propose a computational model of mental integer addition that is sensitive to the relevant aspects of human arithmetical ability. We show that this model necessitates a two-part complexity measure, since the addition task consists of two qualitatively different stages: retrieval of addition facts and the (de)composition of multidigit numbers. Finally, we argue that the two-part complexity measure can be developed into a single response-time measure with the help of empirical study of the two stages.
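A toy sketch of such a two-part measure for multidigit addition, counting fact retrievals and carry (de)compositions separately; the cost model and names are illustrative assumptions, not the authors' calibrated model:

```python
def addition_complexity(a: int, b: int) -> dict:
    """Toy two-part complexity measure for multidigit addition:
    - retrievals: single-digit addition facts looked up
    - compositions: carry/(de)composition steps on multidigit numbers"""
    retrievals, compositions = 0, 0
    carry = 0
    da, db = str(a)[::-1], str(b)[::-1]   # digits, least significant first
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        retrievals += 1            # retrieve the fact x + y
        if carry:
            retrievals += 1        # extra fact retrieval for the carry
        s = x + y + carry
        if s >= 10:
            compositions += 1      # decompose s into carry and digit
        carry = 1 if s >= 10 else 0
    return {"retrievals": retrievals, "compositions": compositions}

print(addition_complexity(47, 86))  # carries in both columns
```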


2004 ◽  
Vol 15 (01) ◽  
pp. 41-55 ◽  
Author(s):  
LUCIAN ILIE ◽  
SHENG YU ◽  
KAIZHONG ZHANG

With ideas from data compression and combinatorics on words, we introduce a complexity measure for words, called repetition complexity, which quantifies the amount of repetition in a word. The repetition complexity of w, R(w), is defined as the smallest amount of space needed to store w when reduced by repeatedly applying the following procedure: n consecutive occurrences uu…u of the same subword u of w are stored as (u,n). The repetition complexity has interesting relations with well-known complexity measures, such as subword complexity, SUB, and Lempel-Ziv complexity, LZ. We always have R(w) ≥ LZ(w), and the former can even be linear while the latter is only logarithmic; this happens, e.g., for prefixes of certain infinite words obtained by iterated morphisms. An infinite word α being ultimately periodic is equivalent to: (i) [Formula: see text], (ii) [Formula: see text], and (iii) [Formula: see text]. De Bruijn words, well known for their high subword complexity, are shown to have almost maximal repetition complexity; the precise complexity remains open. R(w) can be computed in time [Formula: see text]; finding fast algorithms is open and probably very difficult.
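As a minimal illustration of the reduction step, here is the special case where the repeated subword u is a single character, i.e., classic run-length encoding; computing R(w) itself requires minimizing over all such reductions, applied recursively, which this sketch does not attempt:

```python
def run_length_reduce(w: str):
    """Special case of the repetition-complexity reduction: n consecutive
    copies of a single character c are stored as the pair (c, n)."""
    pairs = []
    i = 0
    while i < len(w):
        j = i
        while j < len(w) and w[j] == w[i]:
            j += 1                 # scan to the end of the current run
        pairs.append((w[i], j - i))
        i = j
    return pairs

# 'aaabccccd' -> [('a', 3), ('b', 1), ('c', 4), ('d', 1)]
print(run_length_reduce("aaabccccd"))
```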


2020 ◽  
Vol 76 (4) ◽  
pp. 534-548 ◽  
Author(s):  
Wolfgang Hornfeck

An extension is proposed of the Shannon entropy-based structural complexity measure introduced by Krivovichev, taking into account the geometric coordinational degrees of freedom a crystal structure has. This allows a discrimination to be made between crystal structures which share the same number of atoms in their reduced cells, yet differ in the number of their free parameters with respect to their fractional atomic coordinates. The strong additivity property of the Shannon entropy is used to shed light on the complexity measure of Krivovichev and how it gains complexity contributions due to single Wyckoff positions. Using the same property allows for combining the proposed coordinational complexity measure with Krivovichev's combinatorial one to give a unique quantitative descriptor of a crystal structure's configurational complexity. An additional contribution of chemical degrees of freedom is discussed, yielding an even more refined scheme of complexity measures which can be obtained from a crystal structure's description: the six C's of complexity.
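As a sketch of the Shannon-entropy ingredient (a minimal reading of the combinatorial part only; the coordinational and chemical contributions proposed in the paper are not shown), complexity in bits per atom follows from the multiplicities of the occupied Wyckoff positions:

```python
import math

def shannon_complexity(multiplicities: list[int]) -> dict:
    """Krivovichev-style combinatorial complexity from the multiplicities
    of occupied Wyckoff positions (values in bits)."""
    total = sum(multiplicities)
    probs = [m / total for m in multiplicities]
    h = -sum(p * math.log2(p) for p in probs)   # bits per atom
    return {"bits_per_atom": h, "bits_per_cell": total * h}

# Example: a structure with Wyckoff multiplicities 4, 4, and 8
print(shannon_complexity([4, 4, 8]))  # 1.5 bits/atom, 24 bits/cell
```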


2010 ◽  
Vol 24 (2) ◽  
pp. 131-135 ◽  
Author(s):  
Włodzimierz Klonowski ◽  
Pawel Stepien ◽  
Robert Stepien

Over 20 years ago, Watt and Hameroff (1987) suggested that consciousness may be described as a manifestation of deterministic chaos in the brain/mind. To analyze EEG-signal complexity, we used Higuchi's fractal dimension in the time domain and symbolic analysis methods. Our analysis of EEG signals under anesthesia, during physiological sleep, and during epileptic seizures leads to a conclusion similar to that of Watt and Hameroff: brain activity, measured by the complexity of the EEG signal, diminishes (becomes less chaotic) when consciousness is being "switched off". Thus, consciousness may be described as a manifestation of deterministic chaos in the brain/mind.
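A minimal sketch of Higuchi's fractal dimension in the time domain (the choice of k_max and the regression details are conventional defaults, not the authors' exact settings):

```python
import numpy as np

def higuchi_fd(x: np.ndarray, k_max: int = 8) -> float:
    """Higuchi's fractal dimension of a 1-D signal: the slope of
    log(mean curve length L(k)) versus log(1/k)."""
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series x_m^k
            if len(idx) < 2:
                continue
            dist = np.sum(np.abs(np.diff(x[idx])))
            norm = (n - 1) / ((len(idx) - 1) * k)   # Higuchi normalization
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
    return slope

# White noise should give a dimension near 2; a smooth sine, near 1.
rng = np.random.default_rng(0)
print(higuchi_fd(rng.normal(size=1000)))                     # ~2
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1000))))   # ~1
```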

