Applications of Knowledge Based Mean Fields in the Determination of Protein Structures

Author(s):  
Manfred J. Sippl ◽  
Markus Jaritz ◽  
Manfred Hendlich ◽  
Maria Ortner ◽  
Peter Lackner
2008 ◽  
Vol 21 (22) ◽  
pp. 5887-5903 ◽  
Author(s):  
P. R. Field ◽  
A. Gettelman ◽  
R. B. Neale ◽  
R. Wood ◽  
P. J. Rasch ◽  
...  

Abstract: Identical composite analyses of midlatitude cyclones over oceanic regions have been carried out on both output from the NCAR Community Atmosphere Model, version 3 (CAM3) and multisensor satellite data. By focusing on mean fields associated with a single phenomenon, the ability of CAM3 to reproduce realistic midlatitude cyclones is critically appraised. A number of perturbations to the control model were tested against observations, including a candidate new microphysics package for the CAM. The new microphysics removes the temperature-dependent phase determination of the old scheme and introduces representations of the microphysical processes that convert condensate from one phase to another and from cloud to precipitation species. By subsampling composite cyclones based on systemwide mean strength (mean wind speed) and systemwide mean moisture, the authors believe they can make meaningful like-with-like comparisons between observations and model output. All variations of the CAM tested overestimate the optical thickness of high-topped clouds in regions of precipitation. Over a system as a whole, the model can both over- and underestimate total high-topped cloud amounts. However, systemwide mean rainfall rates and composite structure appear to be in broad agreement with satellite estimates. When cyclone strength is taken into account, changes in moisture and rainfall rates as a function of sea surface temperature, from both satellite-derived observations and model output, are in accordance with the Clausius–Clapeyron equation. The authors find that the proposed new microphysics package shows improvement in composite liquid water path fields and cloud amounts.
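For context on the Clausius–Clapeyron comparison above, here is a minimal sketch (not from the paper; it uses Bolton's 1980 approximation for saturation vapor pressure) of the scaling being tested: saturation vapor pressure rises by roughly 6-7% per kelvin of warming at typical midlatitude sea surface temperatures.

```python
import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Saturation vapor pressure (hPa) via Bolton's (1980) approximation."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def cc_scaling_percent_per_kelvin(t_celsius: float) -> float:
    """Fractional increase in saturation vapor pressure per kelvin, in percent."""
    e0 = saturation_vapor_pressure_hpa(t_celsius)
    e1 = saturation_vapor_pressure_hpa(t_celsius + 1.0)
    return 100.0 * (e1 - e0) / e0

# The Clausius-Clapeyron scaling is the benchmark against which the
# composite moisture and rainfall changes with SST are compared.
for sst in (5.0, 15.0, 25.0):
    print(f"SST {sst:4.1f} degC: {cc_scaling_percent_per_kelvin(sst):.1f} %/K")
```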


Author(s):  
A. L. Semenov ◽  
V. I. Ershov ◽  
D. A. Gusarov

This paper develops a translation-studies approach to the interaction of language and culture, in terms of the determination of translation solutions by linguoethnic factors. The authors focus on the analysis of the notion of culture. Their concept proceeds from the views on culture and its role in shaping personal identity introduced by Federico Mayor, honorary doctor (doctor honoris causa) of MGIMO-University, in his book «The New Page». Sharing F. Mayor's point of view, the authors conclude that culture is knowledge on the basis of which an individual perceives and evaluates his own performance and behavior. Projecting this position onto verbal behavior, the authors highlight the leading role culture plays in the production of a speech act, when individual models of behavior are chosen on the basis of knowledge of the communicative situation. Proceeding from F. Mayor's view that culture unites rather than divides people, the authors note the presence of both universal and unique linguoethnic elements in the cultural knowledge of representatives of various ethnic groups, which determine the degree of similarity and difference in the ways knowledge is expressed in different languages. The authors argue for the term «linguoethnic» to describe the cultural-cognitive peculiarities inherent to individuals as representatives of different ethnic groups, and compare the terms «linguoethnic» and «linguocultural».


2017 ◽  
Author(s):  
Iyanar Vetrivel ◽  
Swapnil Mahajan ◽  
Manoj Tyagi ◽  
Lionel Hoffmann ◽  
Yves-Henri Sanejouand ◽  
...  

Abstract: Libraries of structural prototypes that abstract protein local structures are known as structural alphabets and have proven very useful in various aspects of protein structure analysis and prediction. One such library, Protein Blocks (PBs), is composed of 16 standard five-residue-long structural prototypes. Analyzing a protein in this framework involves encoding its structure as a string of PBs. Predicting the local structure of a protein in terms of protein blocks is thus a step toward the objective of predicting its 3-D structure. Here a new approach, kPred, is proposed toward this aim that is independent of available evolutionary information. It involves (i) organizing the structural knowledge in the form of a database of pentapeptide fragments extracted from all protein structures in the PDB and (ii) applying a purely knowledge-based algorithm, relying on neither secondary structure predictions nor sequence alignment profiles, to scan this database and predict the most probable backbone conformations for the protein's local structures. Depending on the strategy used for scanning the database, the method achieved mean Q16 accuracies between 40.8% and 66.3% for a non-redundant subset of the PDB filtered at a 30% sequence identity cut-off. The impact of these scanning strategies on the prediction was evaluated and is discussed. A scoring function that gives a good estimate of the accuracy of prediction was further developed; this score estimates the accuracy of the algorithm very well (R² of 0.82). An online version of the tool is provided freely for non-commercial usage at http://www.bo-protscience.fr/kpred/.
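The database-scanning step can be pictured with a minimal sketch (this is not the authors' implementation; the fragment table, PB labels, and tie-breaking are illustrative assumptions): each five-residue window of the query sequence is looked up in a pentapeptide-to-PB database, and the most frequent PB among the matching fragments is assigned to the window's central residue.

```python
from collections import Counter

# Hypothetical fragment database: pentapeptide sequence -> PB labels
# ('a'..'p') observed for the central residue of that fragment in the
# PDB. A real database would hold millions of entries.
FRAGMENT_DB = {
    "ALDKV": ["m", "m", "l"],
    "LDKVE": ["m", "m", "m"],
    "DKVEG": ["m", "k", "m"],
}

def predict_pbs(sequence: str, default: str = "-") -> str:
    """Assign the most frequent PB among matching fragments to the
    central residue of each 5-residue window; '-' where no matching
    fragment (or no full window) is available."""
    pbs = [default] * len(sequence)
    for i in range(len(sequence) - 4):
        labels = FRAGMENT_DB.get(sequence[i:i + 5])
        if labels:
            pbs[i + 2] = Counter(labels).most_common(1)[0][0]
    return "".join(pbs)

print(predict_pbs("ALDKVEG"))  # -> "--mmm--"
```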


1999 ◽  
Vol 55 (2) ◽  
pp. 305-313 ◽  
Author(s):  
Steven Chang ◽  
Teresa Head-Gordon ◽  
Robert M. Glaeser ◽  
Kenneth H. Downing

Scattering of electrons is affected by the distribution of valence electrons that participate in chemical bonding and thus change the electrostatic shielding of the nucleus. This effect is particularly significant for low-angle scattering. Thus, while chemical bonding effects are difficult to measure with small-unit-cell materials, they can be substantial in the study of proteins by electron crystallography. This work investigates the magnitude of chemical bonding effects for a representative collection of protein fragments and a model ligand for nucleotide-binding proteins within the resolution range generally used in determining protein structures by electron crystallography. Electrostatic potentials were calculated by ab initio methods both for the test molecules and for superpositions of their free atoms. Differences in scattering amplitudes can be well over 10% in the resolution range below 5 Å and are especially large in the case of ionized side chains and ligands. We conclude that the use of molecule-based scattering factors can provide a much more accurate representation of the low-resolution data obtained in electron crystallographic studies. The comparison of neutral and ionic structure factors at resolutions below 5 Å can also provide a sensitive determination of charge states, important for biological function, that is not accessible from X-ray crystallographic measurements.
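To make the low-angle sensitivity concrete (an illustration, not the paper's calculation): the Mott-Bethe formula gives the electron scattering factor as f_e(s) = 0.023934 (Z - f_x(s)) / s^2, with s = sin(theta)/lambda in 1/Å and f_e in Å. Because the X-ray form factor f_x(s) approaches Z as s -> 0, a small bonding-induced change in f_x produces a large relative change in f_e at low angles. The Cromer-Mann coefficients below are the standard tabulated values for neutral carbon (they sum to Z at s = 0).

```python
import math

# Cromer-Mann fit for the X-ray form factor of neutral carbon:
# f_x(s) = sum_i a_i * exp(-b_i * s^2) + c, with s = sin(theta)/lambda in 1/A.
A = (2.3100, 1.0200, 1.5886, 0.8650)
B = (20.8439, 10.2075, 0.5687, 51.6512)
C = 0.2156
Z = 6  # atomic number of carbon

def f_xray(s: float) -> float:
    return sum(a * math.exp(-b * s * s) for a, b in zip(A, B)) + C

def f_electron(s: float, delta_fx: float = 0.0) -> float:
    """Mott-Bethe formula; delta_fx mimics a bonding-induced
    redistribution of valence charge in the X-ray form factor."""
    return 0.023934 * (Z - (f_xray(s) + delta_fx)) / (s * s)

# A 1% shift in f_x is a large relative change in f_e at low angles,
# where Z - f_x(s) is itself small, but is negligible at high angles.
for s in (0.05, 0.1, 0.2, 0.5):
    base = f_electron(s)
    perturbed = f_electron(s, delta_fx=0.01 * f_xray(s))
    print(f"s = {s:4.2f} 1/A: f_e = {base:6.3f} A, "
          f"relative change = {100 * (perturbed - base) / base:6.1f}%")
```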


2021 ◽  
Author(s):  
Shalmali Bapat ◽  
Stefan O. Kilian ◽  
Hartmut Wiggers ◽  
Doris Segets

A thorough understanding of the complex interactions within particulate systems is key to knowledge-based formulations. Hansen solubility parameters (HSP) are widely used to assess the compatibility of the dispersed phase with the continuous phase. At present, the determination of HSP is often based on a liquid ranking list obtained by evaluating a pertinent dispersion parameter using only one pre-selected characterization method. Furthermore, one cannot rule out subjective judgment, especially for liquids for which the compatibility or underlying interactions are difficult to decipher. As a result, the end value of HSP might carry little or no information. To overcome these issues, we introduce a generalized, technology-agnostic, combinatorics-based approach. We discuss the principles of the approach and the implications of evaluating and reporting particle HSP values. We demonstrate the approach using SiNx particles, leveraging analytical centrifugation data to evaluate stability trajectories of SiNx dispersions in various liquids and thereby deduce particle-liquid compatibility.
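For readers unfamiliar with the convention (background, not part of the paper's method): compatibility in HSP space is usually judged by the Hansen distance Ra and the relative energy difference RED = Ra/R0, where R0 is the interaction radius of the dispersed material. A minimal sketch with made-up parameter values:

```python
import math
from typing import NamedTuple

class HSP(NamedTuple):
    """Hansen solubility parameters in MPa^0.5."""
    dD: float  # dispersion
    dP: float  # polar
    dH: float  # hydrogen bonding

def hansen_distance(a: HSP, b: HSP) -> float:
    """Conventional Hansen distance Ra between two points in HSP space."""
    return math.sqrt(4 * (a.dD - b.dD) ** 2
                     + (a.dP - b.dP) ** 2
                     + (a.dH - b.dH) ** 2)

# Illustrative, made-up values: a hypothetical particle HSP with
# interaction radius R0, screened against one candidate liquid.
particle, r0 = HSP(18.0, 9.0, 7.0), 6.0
liquid = HSP(15.8, 8.8, 19.4)

ra = hansen_distance(particle, liquid)
red = ra / r0  # relative energy difference; RED < 1 suggests compatibility
print(f"Ra = {ra:.2f} MPa^0.5, RED = {red:.2f} -> "
      f"{'compatible' if red < 1 else 'incompatible'}")
```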


Author(s):  
Majid Masso

A computational mutagenesis is detailed whereby each single-residue substitution in a protein chain of primary sequence length N is represented as a sparse N-dimensional feature vector, whose M << N nonzero components locally quantify environmental perturbations occurring at the mutated position and its neighbors in the protein structure. The methodology makes use of both the Delaunay tessellation of protein structures and a four-body, knowledge-based statistical contact potential. Feature vectors for each subset of mutants generated by all possible residue substitutions at a particular position cohabit the same M-dimensional subspace, where the value of M and the identities of the M nonzero components are likewise position dependent. The approach is used to characterize a large experimental dataset of single-residue substitutions in bacteriophage T4 lysozyme, each categorized as either unaffected or affected based on the measured level of mutant activity relative to that of the native protein. The performance of a single classifier trained with the collective set of mutants in N-space is compared to that of an ensemble of position-specific classifiers trained using disjoint mutant subsets residing in significantly smaller subspaces. Results suggest that significant improvements can be achieved through subspace modeling.
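A rough sketch of the geometric step (under stated assumptions, not the author's code; the coordinates are random stand-ins and no potential is evaluated): a Delaunay tessellation of Cα positions yields tetrahedra whose four vertices define the four-body contacts, and a substitution at position i perturbs only the simplices containing i, which is why each mutant's feature vector has only M << N nonzero components.

```python
import numpy as np
from scipy.spatial import Delaunay

# Stand-in C-alpha coordinates for a 50-residue chain; a real
# application would read these from a PDB structure.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 30, size=(50, 3))

tess = Delaunay(coords)  # each simplex is a 4-residue contact

def residues_in_contact_with(i: int, simplices: np.ndarray) -> set[int]:
    """Residues sharing at least one Delaunay simplex with residue i."""
    neighbors = set()
    for simplex in simplices:
        if i in simplex:
            neighbors.update(int(j) for j in simplex)
    neighbors.discard(i)
    return neighbors

# Mutating residue i changes only the four-body terms of the simplices
# containing i, so its feature vector is nonzero only at i and these
# tessellation neighbors -- the position-dependent M-dimensional subspace.
i = 10
print(sorted(residues_in_contact_with(i, tess.simplices)))
```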

