Complexity, Emergence and Molecular Diversity via Information Theory

Author(s):  
Francisco Torrens ◽  
Gloria Castellano

Numerous definitions of complexity have been proposed, with little consensus. The definition used here is related to the Kolmogorov complexity and Shannon entropy measures. The price, however, is that context dependence is introduced into the definition of complexity. Such context dependence is an inherent property of complexity. Scientists are uncomfortable with this context dependence, which smacks of subjectivity, and this discomfort is a key reason why so little agreement exists on the meaning of the terms. In an article published in Molecules, Lin presented a novel approach for assessing molecular diversity based on Shannon information theory. A set of compounds is viewed as a static collection of microstates that can register information about their environment. The method is characterized by a strong tendency to oversample remote areas of the feature space and to produce unbalanced designs. This chapter demonstrates this limitation with some simple examples and provides a rationale for the method's failure to produce consistent results.
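The entropy-based view of diversity can be made concrete with a minimal sketch. Assuming a hypothetical library of compounds binned into structural classes (the counts below are invented for illustration, not Lin's actual data or formulation), Shannon entropy rewards an even spread over classes:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a distribution given raw counts."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical example: 8 compounds distributed over 4 structural classes.
uniform = [2, 2, 2, 2]   # maximally diverse library
skewed  = [5, 1, 1, 1]   # redundant library

print(shannon_entropy(uniform))  # 2.0 bits: the maximum for 4 classes
print(shannon_entropy(skewed))   # lower entropy: less diverse
```

A maximally diverse set attains the entropy ceiling log2(k) for k classes; any redundancy lowers the score.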

2021 ◽  
Vol 5 (1) ◽  
pp. 38
Author(s):  
Chiara Giola ◽  
Piero Danti ◽  
Sandro Magnani

In the age of AI, companies strive to extract benefits from data. In the first steps of data analysis, an arduous dilemma scientists have to cope with is the definition of the 'right' quantity of data needed for a certain task. In energy management in particular, one of the most thriving applications of AI is the optimization of energy-plant generators' consumption. When designing a strategy to improve the generators' schedule, an essential piece of information is the future energy load requested by the plant. This topic, referred to in the literature as load forecasting, has lately gained great popularity; in this paper the authors highlight the problem of estimating the correct amount of data needed to train prediction algorithms and propose a suitable methodology. At its core are learning curves, a powerful tool for tracking algorithm performance as the training-set size varies. First, a brief review of the state of the art and a short analysis of eligible machine learning techniques are offered. Then, the hypotheses and constraints of the work are explained, and the dataset and the goal of the analysis are presented. Finally, the methodology is elucidated and the results are discussed.
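A learning curve can be sketched in a few lines. The model, data, and training sizes below are illustrative assumptions (a least-squares line fit on synthetic noisy data), not the authors' actual forecasting setup; the point is the pattern of tracking held-out error as the training-set size grows:

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def mse(model, xs, ys):
    """Mean squared error of the fitted line on a held-out set."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
# Synthetic "load" series: linear trend plus noise (purely illustrative).
data = [(x, 2.0 * x + 5.0 + random.gauss(0, 3.0)) for x in range(200)]
random.shuffle(data)
train, test = data[:150], data[150:]
tx, ty = [p[0] for p in test], [p[1] for p in test]

# Learning curve: held-out error as the training-set size grows.
for size in (10, 25, 50, 100, 150):
    xs, ys = [p[0] for p in train[:size]], [p[1] for p in train[:size]]
    print(size, round(mse(fit_line(xs, ys), tx, ty), 2))
```

Once the curve flattens, adding data no longer pays; the elbow is a practical estimate of the 'right' training-set size.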


Robotica ◽  
1991 ◽  
Vol 9 (2) ◽  
pp. 203-212 ◽  
Author(s):  
Won Jang ◽  
Kyungjin Kim ◽  
Myungjin Chung ◽  
Zeungnam Bien

SUMMARY
For efficient visual servoing of an “eye-in-hand” robot, the concepts of Augmented Image Space and Transformed Feature Space are presented in this paper. A formal definition of image features as functionals is given, along with a technique for using the defined image features for visual servoing. Compared with other known methods, the proposed concepts reduce the computational burden of visual feedback and enhance the flexibility of describing vision-based tasks. Simulations and real experiments demonstrate that the proposed concepts are useful and versatile tools for industrial robot vision tasks, and thus that the visual servoing problem can be dealt with more systematically.
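As a minimal illustration of an image feature defined as a functional (not the paper's Augmented Image Space formulation), raw image moments map an image to scalars, and the derived centroid is a classic feedback signal for visual servoing:

```python
def moment(img, p, q):
    """Raw image moment m_pq: a functional mapping the whole image to a scalar."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def centroid(img):
    """Centroid feature (m10/m00, m01/m00), usable as a visual-servo error signal."""
    m00 = moment(img, 0, 0)
    return moment(img, 1, 0) / m00, moment(img, 0, 1) / m00

# Hypothetical binary image of a target blob.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(centroid(img))  # (1.5, 1.5): its offset from the image center drives the servo loop
```

Because the feature is a scalar-valued functional of the image, the controller never needs an explicit 3-D reconstruction, which is where the computational savings come from.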


2018 ◽  
Author(s):  
Caroline Fecher ◽  
Laura Trovò ◽  
Stephan A. Müller ◽  
Nicolas Snaidero ◽  
Jennifer Wettmarshausen ◽  
...  

Abstract
Mitochondria vary in morphology and function across tissues; however, little is known about their molecular diversity among cell types. To investigate mitochondrial diversity in vivo, we developed an efficient protocol to isolate cell type-specific mitochondria based on a new MitoTag mouse. We profiled the mitochondrial proteome of three major neural cell types in the cerebellum and identified a substantial number of differential mitochondrial markers for these cell types in mice and humans. Based on predictions from these proteomes, we demonstrate that astrocytic mitochondria metabolize long-chain fatty acids more efficiently than neuronal mitochondria. Moreover, we identified Rmdn3 as a major determinant of ER-mitochondria proximity in Purkinje cells. Our novel approach enables the exploration of mitochondrial diversity at the functional and molecular level in many in vivo contexts.


2021 ◽  
Author(s):  
Iñigo Apaolaza ◽  
Edurne San José-Enériz ◽  
Luis Valcarcel ◽  
Xabier Agirre ◽  
Felipe Prosper ◽  
...  

Synthetic Lethality (SL) is a promising concept in cancer research. A number of computational methods have been developed to predict SL in cancer metabolism, among them our network-based approach built on genetic Minimal Cut Sets (gMCSs). A major challenge for these approaches to SL is to systematically account for the tumor environment, which is particularly relevant in cancer metabolism. Here, we propose a novel definition of SL for cancer metabolism that integrates genetic interactions and nutrient availability in the environment. We extend our gMCS approach to determine this new family of metabolic synthetic lethal interactions. A computational and experimental proof of concept is presented for predicting the lethality of dihydrofolate reductase inhibition in different environments. Finally, our novel approach is applied to identify extracellular nutrient dependencies of tumor cells, elucidating cholesterol and myo-inositol depletion as potential vulnerabilities in different malignancies.
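The environment-dependent notion of SL can be illustrated with a toy model (invented routes and gene names, not an actual gMCS computation): a gene pair is synthetic lethal only if each single knockout survives but the double knockout blocks every biomass route usable in the given medium:

```python
from itertools import combinations

# Toy network (illustrative only): each route to "biomass" needs its gene set
# active and its required nutrients present in the medium.
routes = [
    {"genes": {"g1"}, "nutrients": {"folate"}},   # salvage route, needs uptake
    {"genes": {"g2", "g3"}, "nutrients": set()},  # de novo synthesis route
]

def viable(knockouts, medium):
    """The cell survives if at least one biomass route stays functional."""
    return any(r["genes"].isdisjoint(knockouts) and r["nutrients"] <= medium
               for r in routes)

def synthetic_lethal_pairs(medium):
    """Gene pairs lethal only in combination, given the medium composition."""
    genes = set().union(*(r["genes"] for r in routes))
    return sorted(
        pair for pair in combinations(sorted(genes), 2)
        if viable(set(), medium)                       # wild type lives
        and all(viable({g}, medium) for g in pair)     # single knockouts live
        and not viable(set(pair), medium)              # double knockout dies
    )

print(synthetic_lethal_pairs({"folate"}))  # pairs lethal only together
print(synthetic_lethal_pairs(set()))       # a poorer medium changes the answer
```

With folate in the medium, knocking out the salvage gene together with either de novo gene is lethal; without folate, the de novo genes become singly essential, so no pair qualifies as SL. This is the sense in which nutrient availability reshapes the genetic interaction map.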


2021 ◽  
Vol 9 ◽  
Author(s):  
Ted Sichelman

Many scholars have employed the term “entropy” in the context of law and legal systems to roughly refer to the amount of “uncertainty” present in a given law, doctrine, or legal system. Just a few of these scholars have attempted to formulate a quantitative definition of legal entropy, and none have provided a precise formula usable across a variety of legal contexts. Here, relying upon Claude Shannon's definition of entropy in the context of information theory, I provide a quantitative formalization of entropy in delineating, interpreting, and applying the law. In addition to offering a precise quantification of uncertainty and the information content of the law, the approach offered here provides other benefits. For example, it offers a more comprehensive account of the uses and limits of “modularity” in the law—namely, using the terminology of Henry Smith, the use of legal “boundaries” (be they spatial or intangible) that “economize on information costs” by “hiding” classes of information “behind” those boundaries. In general, much of the “work” performed by the legal system is to reduce legal entropy by delineating, interpreting, and applying the law, a process that can in principle be quantified.
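Shannon's formula makes such uncertainty concrete. In this hypothetical sketch (the distributions are invented, not drawn from the article), the entropy difference between a vague standard and a bright-line rule quantifies the reduction in legal uncertainty:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical: a vague standard leaves four readings about equally likely;
# a bright-line rule concentrates probability on one reading.
vague_standard = [0.25, 0.25, 0.25, 0.25]
bright_line    = [0.9, 0.05, 0.03, 0.02]

print(entropy(vague_standard))  # 2.0 bits of legal uncertainty
print(entropy(bright_line))
# The difference quantifies the entropy reduction achieved by clarification.
print(entropy(vague_standard) - entropy(bright_line))
```

On this view, the "work" of delineating, interpreting, and applying the law shows up as a measurable drop in bits.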


Author(s):  
Robert Mertens ◽  
Po-Sen Huang ◽  
Luke Gottlieb ◽  
Gerald Friedland ◽  
Ajay Divakaran ◽  
...  

A video’s soundtrack is usually highly correlated with its content. Hence, audio-based techniques have recently emerged as a means of video concept detection complementary to visual analysis. Most state-of-the-art approaches rely on the manual definition of predefined sound concepts such as “engine sounds” or “outdoor/indoor sounds.” These approaches come with three major drawbacks: manual definitions do not scale, as they are highly domain-dependent; manual definitions are highly subjective with respect to annotators; and a large part of the audio content is omitted, since the predefined concepts are usually found in only a fraction of the soundtrack. This paper explores how unsupervised audio segmentation systems such as speaker diarization can be adapted to automatically identify low-level sound concepts similar to annotator-defined concepts, and how these concepts can be used for audio indexing. Speaker diarization systems are designed to answer the question “who spoke when?” by finding segments in an audio stream that exhibit similar properties in feature space, i.e., that sound similar. Using a diarization system, all the content of an audio file is analyzed and similar sounds are clustered. This article provides an in-depth analysis of the statistical properties of similar acoustic segments identified by the diarization system in a predefined document set and of the theoretical fitness of this approach for discerning one document class from another. It also discusses how diarization can be tuned to better reflect the acoustic properties of general sounds, as opposed to speech, and introduces a proof-of-concept system for multimedia event classification based on diarization-based indexing.
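The clustering step at the heart of diarization can be sketched with a toy greedy agglomerative procedure (the 2-D features and threshold below are invented stand-ins for real acoustic features such as MFCCs):

```python
def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster(segments, threshold):
    """Greedy agglomerative grouping: merge a segment into the first cluster
    whose centroid lies within `threshold`, otherwise start a new cluster."""
    clusters = []
    for seg in segments:
        for c in clusters:
            if dist(c["centroid"], seg) < threshold:
                c["members"].append(seg)
                n = len(c["members"])
                c["centroid"] = [sum(v[i] for v in c["members"]) / n
                                 for i in range(len(seg))]
                break
        else:
            clusters.append({"centroid": list(seg), "members": [seg]})
    return clusters

# Hypothetical 2-D acoustic features for six audio segments.
segments = [(0.1, 0.2), (0.15, 0.22), (5.0, 5.1),
            (5.2, 4.9), (0.12, 0.18), (5.1, 5.0)]
labels = cluster(segments, threshold=1.0)
print(len(labels))  # 2 clusters of similar-sounding segments
```

Each resulting cluster plays the role of an automatically discovered low-level sound concept, with no annotator-defined vocabulary required.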


Author(s):  
Valentina Dragos

Supporting anomaly analysis in the maritime field is a challenging problem because of the dynamic nature of the task: the definition of abnormal or suspicious behaviour is subject to change and depends on user interests. This paper provides a novel approach to supporting anomaly analysis in the maritime domain through the exploration of large collections of interpretation reports. Based on observables or more sophisticated patterns, the approach provides information retrieval strategies ranging from basic fact retrieval, which guides short-term corrective actions, to more complex networks of related concepts that help domain experts understand or explain abnormal vessel behaviours. Semantic integration is used to link the various information sources through a commonly adopted standard. The paper seeks to explore different aspects of using information retrieval to support the analysis and interpretation of abnormal vessel behaviours for maritime surveillance.


Author(s):  
Ioannis N. Kouris

Research in association rule mining initially concentrated on solving the obvious problem of finding positive association rules, that is, rules among items that appear in the stored transactions. Only several years later did the possibility of also finding negative association rules become especially appealing and begin to be investigated. Nevertheless, researchers based their assumptions regarding negative association rules on the absence of items from transactions. This assumption, besides being dubious, since it equates the absence of an item with a conflict or negative effect on the remaining items, also brought with it a series of computational problems owing to the number of possible patterns that had to be examined and analyzed. In this work we give an overview of the research on the subject to date and present a novel view of the definition of negative influence among items.
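The absence-based definition that this work critiques can be made concrete with a small sketch (the baskets and item names are hypothetical): a negative rule a → ¬b is scored by how often transactions containing a lack b:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def negative_confidence(a, b, transactions):
    """Confidence of the negative rule a -> NOT b: among transactions
    containing `a`, the fraction that do NOT contain `b`."""
    with_a = [t for t in transactions if a in t]
    return sum(1 for t in with_a if b not in t) / len(with_a)

# Hypothetical market baskets: tea buyers tend not to buy coffee.
baskets = [{"tea", "milk"}, {"tea", "sugar"}, {"tea", "milk", "sugar"},
           {"coffee", "sugar"}, {"tea", "coffee"}, {"coffee", "milk"}]

print(support({"tea"}, baskets))                      # 4/6 of baskets contain tea
print(negative_confidence("tea", "coffee", baskets))  # 0.75
```

The computational blow-up mentioned above follows directly: every absent item is a candidate for the right-hand side, so the space of patterns to score grows with the complement of each transaction rather than with its contents.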

