Shannon Information Entropy
Recently Published Documents


TOTAL DOCUMENTS: 57 (five years: 17)

H-INDEX: 17 (five years: 2)

2021 ◽  
Vol 22 (24) ◽  
pp. 13404
Author(s):  
Csaba Magyar ◽  
Anikó Mentes ◽  
Miklós Cserző ◽  
István Simon

Mutual Synergetic Folding (MSF) proteins belong to a recently discovered class of proteins. These proteins are disordered in their monomeric forms but ordered in their oligomeric forms. Their amino acid composition is more similar to that of globular proteins than to that of disordered ones. Our preceding work shed light on important aspects of the structural organization of these proteins, but the origin of this behavior is still unknown. We suggest that solvent accessibility is an important factor; in particular, the solvent accessibility of the peptide bonds may account for this phenomenon. The side chains of the amino acids that form a peptide bond make a large local contribution to shielding the peptide bond from the solvent. During the oligomerization step, other, non-local residues contribute to the shielding. We investigated these local and non-local shielding effects using Shannon information entropy calculations. We found that MSF and globular homodimeric proteins have different local contributions resulting from different amino acid pair frequencies. Their non-local distributions also differ because of distinctive inter-subunit contacts.
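The entropy calculation behind this kind of comparison is compact; a minimal sketch, assuming residue-pair frequencies are simply counted along a sequence (a toy construction, not the authors' exact procedure):

```python
import math
from collections import Counter

def shannon_entropy(frequencies):
    """Shannon information entropy, in bits, of a frequency distribution."""
    total = sum(frequencies)
    return sum(-(f / total) * math.log2(f / total) for f in frequencies if f > 0)

# Toy example: count the residue pairs flanking each peptide bond in a
# short sequence, then measure the entropy of the pair distribution.
sequence = "MKTAYIAKQR"
pair_counts = Counter(sequence[i:i + 2] for i in range(len(sequence) - 1))
print(shannon_entropy(pair_counts.values()))
```

A distribution concentrated on a few frequent pairs gives a low entropy; a flat distribution over many distinct pairs approaches the maximum log2 of the number of pair types.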


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Yasuhiro Miyazawa ◽  
Hiromi Yasuda ◽  
Hyungkyu Kim ◽  
James H. Lynch ◽  
Kosei Tsujikawa ◽  
...  

Origami, the ancient art of paper folding, has shown its potential as a versatile platform to design various reconfigurable structures. The designs of most origami-inspired architected materials rely on a periodic arrangement of identical unit cells repeated throughout the whole system. It is challenging to alter the arrangement once the design is fixed, which may limit the reconfigurable nature of origami-based structures. Inspired by phase transformations in natural materials, here we study origami tessellations that can transform between homogeneous configurations and highly heterogeneous configurations composed of different phases of origami unit cells. We find that extremely localized and reprogrammable heterogeneity can be achieved in our origami tessellation, which enables the control of mechanical stiffness and in-situ tunable locking behavior. To analyze this high reconfigurability and variable stiffness systematically, we employ Shannon information entropy. Our design and analysis strategy can pave the way for designing new types of transformable mechanical devices.
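Shannon entropy is a natural measure for this kind of phase heterogeneity. A minimal sketch, assuming each unit cell is simply labeled by its folded phase (the labeling is illustrative, not the authors' exact formulation):

```python
import math
from collections import Counter

def configuration_entropy(cell_phases):
    """Shannon entropy (bits) of the distribution of unit-cell phases:
    0 for a homogeneous tessellation, log2(k) for k equally mixed phases."""
    n = len(cell_phases)
    counts = Counter(cell_phases)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

homogeneous = ["A"] * 16               # every cell folded into the same phase
heterogeneous = ["A"] * 8 + ["B"] * 8  # 50/50 mix of two phases
print(configuration_entropy(homogeneous))    # 0.0 bits
print(configuration_entropy(heterogeneous))  # 1.0 bit
```

Tracking this scalar as cells switch phase gives a single systematic measure of how far a tessellation has moved from its homogeneous configuration.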


Symmetry ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1399
Author(s):  
Alexander M. Banaru ◽  
Sergey M. Aksenov ◽  
Sergey V. Krivovichev

Structural complexity measures based on Shannon information entropy are widely used for inorganic crystal structures. However, the application of these parameters for molecular crystals requires essential modification since atoms in inorganic compounds usually possess more degrees of freedom. In this work, a novel scheme for the calculation of complexity parameters (HmolNet, HmolNet,tot) for molecular crystals is proposed as a sum of the complexity of each molecule, the complexity of intermolecular contacts, and the combined complexity of both. This scheme is tested for several molecular crystal structures.
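The base quantity in these complexity measures is the Shannon entropy over crystallographic orbit multiplicities; a minimal sketch with made-up multiplicities (the additive HmolNet scheme itself combines several such terms, which is only hinted at here):

```python
import math

def structural_complexity(multiplicities):
    """Entropy per atom (bits) and total complexity (bits per cell),
    computed over the multiplicities of crystallographic orbits."""
    m = sum(multiplicities)
    h = sum(-(mi / m) * math.log2(mi / m) for mi in multiplicities)
    return h, m * h

# Illustrative orbit multiplicities for one molecule and for its
# intermolecular contact net (hypothetical values):
h_mol, h_mol_tot = structural_complexity([1, 2, 2, 4])
h_net, h_net_tot = structural_complexity([2, 2, 4])
print(h_mol, h_net)
```

Under the additive scheme described above, per-molecule and per-contact terms of this form would be summed, together with a combined term, to give the crystal-level parameters.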


2021 ◽  
Author(s):  
Orion Dollar ◽  
Nisarg Joshi ◽  
David A. C. Beck ◽  
Jim Pfaendtner

We explore the impact of adding attention to generative VAE models for molecular design. Four model types are compared: a simple recurrent VAE (RNN), a recurrent VAE with an added attention layer (RNNAttn), a transformer VAE (TransVAE) and the previous state-of-the-art (MosesVAE). The models are assessed based on their effect on the organization of the latent space (i.e. latent memory) and their ability to generate samples that are valid and novel. Additionally, the Shannon information entropy is used to measure the complexity of the latent memory in an information bottleneck theoretical framework, and we define a novel metric to assess the extent to which models explore chemical phase space. All models are trained on millions of molecules from either the ZINC or PubChem datasets. We find that both RNNAttn and TransVAE models perform substantially better when tasked with accurately reconstructing input SMILES strings than the MosesVAE or RNN models, particularly for larger molecules up to ~700 Da. The TransVAE learns a complex “molecular grammar” that includes detailed molecular substructures and high-level structural and atomic relationships. The RNNAttn models learn the most efficient compression of the input data while still maintaining good performance. The complexity of the compressed representation learned by each model type increases in the order of MosesVAE < RNNAttn < RNN < TransVAE. We find that there is an unavoidable tradeoff between model exploration and validity that is a function of the complexity of the latent memory. However, novel sampling schemes may be used that optimize this tradeoff and allow us to utilize the information-dense representations learned by the transformer in spite of their complexity.
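The information-bottleneck view of the latent memory can be illustrated with a crude entropy estimate; a sketch assuming latent values are simply histogrammed into equal-width bins (an illustrative estimator, not the paper's exact method):

```python
import math
from collections import Counter

def latent_entropy(values, bins=10, lo=-3.0, hi=3.0):
    """Rough Shannon entropy (bits) of one latent dimension: histogram the
    values into equal-width bins and measure the discrete entropy."""
    width = (hi - lo) / bins
    idx = [min(bins - 1, max(0, int((v - lo) / width))) for v in values]
    n = len(values)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(idx).values())

collapsed = [0.1] * 100                       # dimension carries no information
spread = [-2.7 + 0.6 * i for i in range(10)]  # one value per bin
print(latent_entropy(collapsed), latent_entropy(spread))
```

A collapsed dimension contributes no bits, while a dimension spread evenly across the bins approaches log2(bins) bits; summing over dimensions gives one crude measure of latent-memory complexity.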


2020 ◽  
Vol 2 (3) ◽  
pp. 1-5
Author(s):  
Mostafa Allameh Zade ◽  
Iman Amiri

Localization and quantification of structural damage, together with estimation of the failure probability, are key steps in the reliability assessment of structures. In this study, a Self-Organizing Neural Network (SONN) combined with Shannon information entropy simulation is used to reduce the computational effort required for reliability analysis and damage detection. To this end, a demonstrative structure is modeled and several damage scenarios are defined. These scenarios serve as training datasets for establishing the SONN model, which maps structural responses (input) to structural stiffness (output). The established SONN is economical and achieves reasonable accuracy in detecting structural damage under ground motion. Furthermore, to assess the reliability of the structure, five random variables are considered: the column areas of the first, second, and third floors, the elasticity modulus, and the gravity loads. The SONN is trained using a Shannon information entropy simulation technique. Finally, the trained network estimates the failure probability of the proposed structure. Although Monte Carlo simulation (MCS) can predict the failure probability of a given structure, the SONN model allows simulation techniques to reach acceptable accuracy with reduced computational effort.
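For comparison, the baseline Monte Carlo estimate of a failure probability can be sketched directly; the limit-state function and distributions below are hypothetical, chosen only to illustrate the computational cost that a trained surrogate is meant to reduce:

```python
import random

def mc_failure_probability(limit_state, sample, n=100_000, seed=0):
    """Crude Monte Carlo estimate: the fraction of random samples for
    which the limit-state function g(x) is negative (i.e. failure)."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if limit_state(sample(rng)) < 0) / n

# Hypothetical limit state g = R - S with resistance R ~ N(10, 1)
# and load S ~ N(7, 1); failure occurs when the load exceeds the resistance.
g = lambda x: x[0] - x[1]
draw = lambda rng: (rng.gauss(10, 1), rng.gauss(7, 1))
print(mc_failure_probability(g, draw))
```

Each estimate costs n evaluations of the limit state; when every evaluation requires a structural analysis, replacing it with a cheap surrogate such as a trained SONN is what makes the reliability assessment tractable.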

