Comparison of automated crystallographic model-building pipelines

2019 ◽  
Vol 75 (12) ◽  
pp. 1119-1128 ◽  
Author(s):  
Emad Alharbi ◽  
Paul S. Bond ◽  
Radu Calinescu ◽  
Kevin Cowtan

A comparison of four protein model-building pipelines (ARP/wARP, Buccaneer, PHENIX AutoBuild and SHELXE) was performed using data sets from 202 experimentally phased cases, both with the data as observed and truncated to simulate lower resolutions. All pipelines were run using default parameters. Additionally, an ARP/wARP run was completed using models from Buccaneer. All pipelines achieved nearly complete protein structures and low Rwork/Rfree at resolutions between 1.2 and 1.9 Å, with PHENIX AutoBuild and ARP/wARP producing slightly lower R factors. At lower resolutions, Buccaneer produced significantly more complete models.

2020 ◽  
Vol 76 (9) ◽  
pp. 814-823 ◽  
Author(s):  
Emad Alharbi ◽  
Radu Calinescu ◽  
Kevin Cowtan

For the last two decades, researchers have worked independently to automate protein model building, and four widely used software pipelines have been developed for this purpose: ARP/wARP, Buccaneer, Phenix AutoBuild and SHELXE. Here, the usefulness of combining these pipelines to improve the built protein structures by running them in pairwise combinations is examined. The results show that integrating these pipelines can lead to significant improvements in structure completeness and R free. In particular, running Phenix AutoBuild after Buccaneer improved structure completeness for 29% and 75% of the data sets that were examined at the original resolution and at a simulated lower resolution, respectively, compared with running Phenix AutoBuild on its own. In contrast, Phenix AutoBuild alone produced better structure completeness than the two pipelines combined for only 7% and 3% of these data sets.
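The pairwise strategy described above amounts to a search over ordered pipeline pairs: the first pipeline builds an initial model, the second rebuilds from it, and the best-scoring combination wins. The sketch below is only an illustration of that control flow; the wrapper function and the completeness numbers are invented placeholders, not the paper's benchmark harness or results.

```python
from itertools import permutations

# Hypothetical wrappers around the four external pipelines. The base
# scores and the +0.05 bonus for starting from an existing model are
# illustrative stand-ins, not real benchmark numbers.
def run_pipeline(name, data, start_model=None):
    base = {"ARP/wARP": 0.80, "Buccaneer": 0.85,
            "Phenix AutoBuild": 0.82, "SHELXE": 0.70}[name]
    completeness = min(1.0, base + (0.05 if start_model else 0.0))
    return {"completeness": completeness, "model": name}

def best_pairwise_combination(data):
    """Try every ordered pair (first pipeline builds, second rebuilds
    from the first's model) and return the pair with the highest final
    completeness."""
    scores = {}
    for first, second in permutations(
            ["ARP/wARP", "Buccaneer", "Phenix AutoBuild", "SHELXE"], 2):
        model = run_pipeline(first, data)
        scores[(first, second)] = run_pipeline(
            second, data, start_model=model)["completeness"]
    return max(scores, key=scores.get)
```

In a real setting each wrapper would invoke the external program, parse its output and return the measured completeness and Rfree, so the same loop would select the best combination per data set.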


2014 ◽  
Vol 70 (7) ◽  
pp. 1994-2006 ◽  
Author(s):  
Rocco Caliandro ◽  
Benedetta Carrozzini ◽  
Giovanni Luca Cascarano ◽  
Giuliana Comunale ◽  
Carmelo Giacovazzo ◽  
...  

Phasing proteins at non-atomic resolution is still a challenge for any ab initio method. A variety of algorithms [Patterson deconvolution, superposition techniques, a cross-correlation function (Cmap), the VLD (vive la difference) approach, the FF function, a nonlinear iterative peak-clipping algorithm (SNIP) for defining the background of a map and the free lunch extrapolation method] have been combined to overcome the lack of experimental information at non-atomic resolution. The method has been applied to a large number of protein diffraction data sets with resolutions varying from atomic to 2.1 Å, with the condition that S or heavier atoms are present in the protein structure. The applications include the use of ARP/wARP to check the quality of the final electron-density maps in an objective way. The results show that resolution is still the maximum obstacle to protein phasing, but also suggest that the solution of protein structures at 2.1 Å resolution is a feasible, even if still an exceptional, task for the combined set of algorithms implemented in the phasing program. The approach described here is more efficient than the previously described procedures: e.g. the combined use of the algorithms mentioned above is frequently able to provide phases of sufficiently high quality to allow automatic model building. The method is implemented in the current version of SIR2014.


IUCrJ ◽  
2018 ◽  
Vol 5 (5) ◽  
pp. 585-594 ◽  
Author(s):  
Bart van Beusekom ◽  
Krista Joosten ◽  
Maarten L. Hekkelman ◽  
Robbie P. Joosten ◽  
Anastassis Perrakis

Inherent protein flexibility, poor or low-resolution diffraction data or poorly defined electron-density maps often inhibit the building of complete structural models during X-ray structure determination. However, recent advances in crystallographic refinement and model building often allow completion of previously missing parts. This paper presents algorithms that identify regions missing in a certain model but present in homologous structures in the Protein Data Bank (PDB), and `graft' these regions of interest. These new regions are refined and validated in a fully automated procedure. Including these developments in the PDB-REDO pipeline has enabled the building of 24 962 missing loops in the PDB. The models and the automated procedures are publicly available through the PDB-REDO databank and webserver. More complete protein structure models enable a higher quality public archive but also a better understanding of protein function, better comparison between homologous structures and more complete data mining in structural bioinformatics projects.
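At its core, the first step described above is a bookkeeping problem: find residue ranges absent from a model but present in a homologue. The toy sketch below shows only that idea on residue numbers; the actual PDB-REDO grafting operates on aligned three-dimensional structures, and the function name here is invented for illustration.

```python
def missing_regions(model_residues, homolog_residues):
    """Return contiguous runs (first, last) of residue numbers that are
    absent from the model but present in the homologous structure."""
    candidates = sorted(set(homolog_residues) - set(model_residues))
    runs, run = [], []
    for r in candidates:
        if run and r != run[-1] + 1:
            # A gap in the candidate list closes the current run.
            runs.append((run[0], run[-1]))
            run = []
        run.append(r)
    if run:
        runs.append((run[0], run[-1]))
    return runs
```

Each reported run would then be a candidate loop to graft, refine and validate against the density.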


2020 ◽  
Vol 76 (6) ◽  
pp. 531-541
Author(s):  
Soon Wen Hoh ◽  
Tom Burnley ◽  
Kevin Cowtan

This work focuses on the use of the existing protein-model-building software Buccaneer to provide structural interpretation of electron cryo-microscopy (cryo-EM) maps. Buccaneer was originally developed for application to X-ray crystallography; the steps necessary to optimise its usage with cryo-EM maps are shown here. This approach has been applied to the data sets of 208 cryo-EM maps with resolutions of better than 4 Å. The results obtained also show an evident improvement in the sequencing step when the initial reference map and model used for crystallographic cases are replaced by a cryo-EM reference. All other necessary changes to settings in Buccaneer are implemented in the model-building pipeline from within the CCP-EM interface (as of version 1.4.0).


Author(s):  
Emad Alharbi ◽  
Paul Bond ◽  
Radu Calinescu ◽  
Kevin Cowtan

Proteins are macromolecules that perform essential biological functions which depend on their three-dimensional structure. Determining this structure involves complex laboratory and computational work. For the computational work, multiple software pipelines have been developed to build models of the protein structure from crystallographic data. Each of these pipelines performs differently depending on the characteristics of the electron-density map received as input. Identifying the best pipeline to use for a protein structure is difficult, as the pipeline performance differs significantly from one protein structure to another. As such, researchers often select pipelines that do not produce the best possible protein models from the available data. Here, a software tool is introduced which predicts key quality measures of the protein structures that a range of pipelines would generate if supplied with a given crystallographic data set. These measures are crystallographic quality-of-fit indicators based on included and withheld observations, and structure completeness. Extensive experiments carried out using over 2500 data sets show that the tool yields accurate predictions for both experimental phasing data sets (at resolutions between 1.2 and 4.0 Å) and molecular-replacement data sets (at resolutions between 1.0 and 3.5 Å). The tool can therefore provide a recommendation to the user concerning the pipelines that should be run in order to proceed most efficiently to a depositable model.
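The recommendation idea can be illustrated with a deliberately tiny sketch: predict, from simple data-set features, the quality measures each pipeline would achieve, then recommend the pipeline with the best prediction. Everything below (the single resolution feature, the numbers, the nearest-neighbour predictor) is an invented stand-in; the actual tool is trained on over 2500 data sets with richer features.

```python
# Illustrative "training" rows: (resolution in Å, pipeline,
# predicted (completeness, Rfree)). All values are placeholders.
TRAINING = [
    (1.5, "Buccaneer", (0.95, 0.22)),
    (1.5, "Phenix AutoBuild", (0.94, 0.21)),
    (3.0, "Buccaneer", (0.80, 0.30)),
    (3.0, "Phenix AutoBuild", (0.65, 0.33)),
]

def predict(resolution, pipeline):
    """Nearest-neighbour lookup by resolution for one pipeline."""
    rows = [r for r in TRAINING if r[1] == pipeline]
    return min(rows, key=lambda r: abs(r[0] - resolution))[2]

def recommend(resolution):
    """Recommend the pipeline with the highest predicted completeness,
    without having to run any pipeline first."""
    pipelines = {r[1] for r in TRAINING}
    return max(pipelines, key=lambda p: predict(resolution, p)[0])
```

The point of the sketch is the workflow: because the predictions are cheap relative to running a pipeline, the user can be pointed directly at the most promising pipeline for a given data set.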


2018 ◽  
Author(s):  
Bart van Beusekom ◽  
Krista Joosten ◽  
Maarten L. Hekkelman ◽  
Robbie P. Joosten ◽  
Anastassis Perrakis

Abstract
Inherent protein flexibility, poor or low-resolution diffraction data, or poor electron-density maps often inhibit the building of complete structural models during X-ray structure determination. However, advances in crystallographic refinement and model building nowadays often allow completion of previously missing parts. Here, we present algorithms that identify regions missing in a certain model but present in homologous structures in the Protein Data Bank (PDB), and "graft" these regions of interest. These new regions are refined and validated in a fully automated procedure. Including these developments in our PDB-REDO pipeline allowed the building of 24,962 missing loops in the PDB. The models and the automated procedures are publicly available through the PDB-REDO databank and web server (https://pdb-redo.eu). More complete protein structure models enable a higher quality public archive, but also a better understanding of protein function, better comparison between homologous structures, and more complete data mining in structural bioinformatics projects.
Synopsis
Thousands of missing regions in existing protein structure models are completed using new methods based on homology.


2010 ◽  
Vol 66 (9) ◽  
pp. 1012-1023 ◽  
Author(s):  
František Pavelčík ◽  
Jiří Václavík

The automatic building of protein structures with tripeptidic and tetrapeptidic fragments was investigated. The oligopeptidic conformers were positioned in the electron-density map by a phased rotation, conformation and translation function and refined by a real-space refinement. The number of successfully located fragments lay within the interval 75–95% depending on the resolution and phase quality. The overlaps of partially located fragments were analyzed. The correctly positioned fragments were connected into chains. Chains formed in this way were extended directly into the electron density and a sequence was assigned. In the initial stage of the model building the number of located fragments was between 60% and 95%, but this number could be increased by several cycles of reciprocal-space refinement and automatic model rebuilding. A nearly complete structure can be obtained on the condition that the resolution is reasonable. Computer graphics will only be needed for a final check and small corrections.


2020 ◽  
Author(s):  
Lim Heo ◽  
Collin Arbour ◽  
Michael Feig

Protein structures provide valuable information for understanding biological processes. Protein structures can be determined by experimental methods such as X-ray crystallography, nuclear magnetic resonance (NMR) spectroscopy or cryogenic electron microscopy. As an alternative, in silico methods can be used to predict protein structures. Those methods utilize protein structure databases for structure prediction via template-based modeling or for training machine-learning models to generate predictions. Structure prediction for proteins distant from proteins with known structures often results in lower accuracy with respect to the true physiological structures. Physics-based protein model refinement methods can be applied to improve model accuracy in the predicted models. Refinement methods rely on conformational sampling around the predicted structures, and if structures closer to the native states are sampled, improvements in the model quality become possible. Molecular dynamics simulations have been especially successful for improving model quality: consistent refinement can be achieved, but the improvements are still moderate. To extend the refinement performance of a simulation-based protocol, we explored new schemes that focus on an optimized use of biasing functions and the application of increased simulation temperatures. In addition, we tested the use of alternative initial models so that the simulations can explore conformational space more broadly. Based on the insights from this analysis, we propose a new refinement protocol that significantly outperformed previous state-of-the-art molecular dynamics simulation-based protocols in the benchmark tests described here.


2012 ◽  
Author(s):  
Kate C. Miller ◽  
Lindsay L. Worthington ◽  
Steven Harder ◽  
Scott Phillips ◽  
Hans Hartse ◽  
...  

2021 ◽  
Vol 13 (13) ◽  
pp. 2433
Author(s):  
Shu Yang ◽  
Fengchao Peng ◽  
Sibylle von Löwis ◽  
Guðrún Nína Petersen ◽  
David Christian Finger

Doppler lidars are used worldwide for wind monitoring and, recently, also for the detection of aerosols. Automatic algorithms that classify the signals retrieved from lidar measurements are very useful to users. In this study, we explore the value of machine learning for classifying backscattered signals from Doppler lidars using data from Iceland. We combined supervised and unsupervised machine learning algorithms with conventional lidar data processing methods and trained two models to filter noise signals and classify Doppler lidar observations into different classes, including clouds, aerosols and rain. The results reveal a high accuracy for noise identification and for aerosol and cloud classification; however, the method underestimates precipitation. The method was tested on data sets from two instruments during different weather conditions, including three dust storms during the summer of 2019. Our results reveal that this method can provide efficient, accurate and real-time classification of lidar measurements. Accordingly, we conclude that machine learning can open new opportunities for lidar data end-users, such as aviation safety operators, to monitor dust in the vicinity of airports.
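The two-stage structure described above (filter noise first, then classify the remaining signal) can be caricatured with a hand-written rule set. The feature names and thresholds below are invented for the sketch; the study itself trains machine-learning models rather than fixed rules.

```python
# Toy two-stage classification of a single Doppler-lidar range gate.
# Stage 1 discards noise-dominated gates; stage 2 assigns a class from
# signal strength and depolarization. All thresholds are illustrative.
def classify_gate(snr, backscatter, depol):
    if snr < 1.0:                 # stage 1: noise filter
        return "noise"
    if backscatter > 1e-4:        # strong return: hydrometeors
        return "rain" if depol > 0.3 else "cloud"
    return "aerosol"              # weak but coherent return
```

A trained model replaces these hard-coded thresholds with decision boundaries learned from labelled and clustered observations, but the pipeline shape (noise filter, then multi-class assignment) is the same.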

