IceBear: an intuitive and versatile web application for research-data tracking from crystallization experiment to PDB deposition

2021 ◽  
Vol 77 (2) ◽  
pp. 151-163
Author(s):  
Ed Daniel ◽  
Mirko M. Maksimainen ◽  
Neil Smith ◽  
Ville Ratas ◽  
Ekaterina Biterova ◽  
...  

The web-based IceBear software is a versatile tool to monitor the results of crystallization experiments and is designed to facilitate communication between supervisors and students. It also records and tracks all relevant information from crystallization setup to PDB deposition in protein crystallography projects. Fully automated data collection is now possible at several synchrotrons, and the number of samples tested at the synchrotron is consequently increasing rapidly. Therefore, the protein crystallography research communities at the University of Oulu, the Weizmann Institute of Science and Diamond Light Source have joined forces to automate the uploading of sample metadata to the synchrotron. In IceBear, each crystal selected for data collection is given a unique sample name and a crystal page is generated. Subsequently, the metadata required for data collection are uploaded directly to the ISPyB synchrotron database by a shipment module, and for each sample a link to the relevant ISPyB page is stored. IceBear allows notes to be made for each sample during cryocooling treatment and data collection, as well as in later steps of the structure determination. Protocols are also available to aid the recycling of pins, pucks and dewars when the dewar returns from the synchrotron. The IceBear database is organized around projects, and project members can easily access the crystallization and diffraction metadata for each sample, as well as any additional information provided via the notes. The crystal page for each sample connects the crystallization, diffraction and structural information by providing links to the IceBear drop-viewer page and to the ISPyB data-collection page, as well as to the structure deposited in the Protein Data Bank.

Author(s):  
Weiping Liu ◽  
Jennifer Fung ◽  
W.J. de Ruijter ◽  
Hans Chen ◽  
John W. Sedat ◽  
...  

Electron tomography is a technique in which many projections of an object are collected in the transmission electron microscope (TEM) and then used to reconstruct the object in its entirety, allowing its internal structure to be viewed. Despite the value of this 3-D structural information, and with no other 3-D imaging technique competing in its resolution range, electron tomography of amorphous structures has been practised only sporadically over the last ten years. Its general lack of popularity can be attributed to the tediousness of the entire process, from data collection through image processing for reconstruction to 3-D image analysis. We have been investing effort to automate all aspects of electron tomography. Our systems for data collection and tomographic image processing are briefly described. To date, we have developed a second-generation automated data-collection system based on an SGI workstation (Fig. 1) (the previous version used a MicroVAX). The computer takes full control of the microscope operations through a graphical menu-driven environment, made possible by direct digital recording of images with a CCD camera.


Author(s):  
Andrés Baena-Raya ◽  
Manuel A. Rodríguez-Pérez ◽  
Pedro Jiménez-Reyes ◽  
Alberto Soriano-Maldonado

Sprint running and change of direction (COD) present similar mechanical demands, involving an acceleration phase in which athletes need to produce and apply substantial horizontal external force. Assessing the mechanical properties underpinning individual sprint acceleration might add relevant information about COD performance in addition to that obtained through sprint time alone. The present technical report uses a case series of three athletes with nearly identical 20 m sprint times but with different mechanical properties and COD performances. This makes it possible to illustrate, for the first time, a potential rationale for why the sprint force-velocity (FV) profile (i.e., theoretical maximal force (F0), velocity (V0), maximal power output (Pmax), ratio of effective horizontal component (RFpeak) and index of force application technique (DRF)) provides key information about COD performance (i.e., further to that derived from simple sprint time), which can be used to individualize training. This technical report provides practitioners with a justification to assess the FV profile in addition to sprint time when the aim is to enhance sprint acceleration and COD performance; practical interpretations and advice on how training interventions could be individualized based on the athletes’ differential sprint mechanical properties are also specified.
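For the linear sprint FV relation underlying these variables, F0 and V0 fully determine maximal power (Pmax = F0·V0/4). A minimal sketch of that relation, using illustrative values that are not taken from the case series:

```python
def fv_profile(F0, V0):
    """Linear sprint force-velocity relation: F(v) = F0 * (1 - v / V0).

    F0: theoretical maximal horizontal force (N/kg)
    V0: theoretical maximal running velocity (m/s)
    """
    def force_at(v):
        return F0 * (1.0 - v / V0)

    # For a linear F-v relation, power F(v) * v peaks at v = V0 / 2.
    Pmax = F0 * V0 / 4.0
    return force_at, Pmax

# Hypothetical force-oriented profile (values for illustration only).
force_at, Pmax = fv_profile(F0=8.0, V0=9.0)
```

Two athletes with identical 20 m times can nevertheless differ in F0 and V0, and hence in Pmax and in the horizontal force still available at low velocities, which is precisely the regime that matters for re-acceleration out of a COD.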


Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

Abstract According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance with and without perceptually relevant but non-contrastive modifiers in the search instruction. Participants (NExp. 1 = 48, NExp. 2 = 48) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision and engage in rational reference comprehension.
Significance statement This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, what color the object was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.


Structure ◽  
2000 ◽  
Vol 8 (12) ◽  
pp. R243-R246 ◽  
Author(s):  
Steven W Muchmore ◽  
Jeffrey Olson ◽  
Ronald Jones ◽  
Jeff Pan ◽  
Michael Blum ◽  
...  

2012 ◽  
Vol 03 (02) ◽  
pp. 1250007 ◽  
Author(s):  
Jürgen Eichberger ◽  
Ani Guerdjikova

We present a model of technological adaptation in response to a change in climate conditions. The main feature of the model is that new technologies are not just risky, but also ambiguous. Pessimistic agents are thus averse to adopting a new technology. Learning is induced by optimists, who are willing to try out technologies about which little evidence is available. We show that both optimists and pessimists are crucial for successful adaptation. While optimists provide the public good of information, which gives pessimists an incentive to innovate, pessimists choose the new technology persistently in the long run, which increases the average returns for society. Hence, the optimal share of optimists in the society is strictly positive. When the share of optimists in the society is too low, innovation is slow and the resulting steady state is inefficient. We discuss two policies which can potentially alleviate this inefficiency: subsidies and the provision of additional information. We show that if precise and relevant information is available, pessimists would be willing to pay for it and consequently adopt the new technology. Hence, providing information might be the more efficient policy, as it is both self-financing and results in better social outcomes.
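The mechanism — optimists experiment and generate public information, pessimists adopt once the worst plausible return beats the old technology — can be illustrated with a toy simulation. This is not the authors' formal ambiguity model; all parameters and the interval-shrinking rule below are hypothetical simplifications:

```python
import random

def simulate(n_optimists, n_pessimists, true_mean=1.2, old_return=1.0,
             ambiguity=(0.5, 1.5), periods=50, seed=0):
    """Toy illustration: optimists always try the ambiguous technology,
    producing public observations; pessimists adopt only once the worst
    plausible mean return exceeds the old technology's known return."""
    rng = random.Random(seed)
    lo, hi = ambiguity            # interval of plausible mean returns
    observations = []
    adopters_path = []
    for _ in range(periods):
        # Optimists evaluate ambiguity by its best case, so they experiment.
        for _ in range(n_optimists):
            observations.append(rng.gauss(true_mean, 0.2))
        # Accumulated evidence shrinks the ambiguity interval (ad hoc rule).
        if observations:
            m = sum(observations) / len(observations)
            width = (hi - lo) / (1 + len(observations)) ** 0.5
            lo_t = m - width
        else:
            lo_t = lo
        # Pessimists evaluate ambiguity by its worst case.
        pessimists_in = n_pessimists if lo_t > old_return else 0
        adopters_path.append(n_optimists + pessimists_in)
    return adopters_path
```

With no optimists the interval never shrinks and pessimists never adopt; with a few optimists, the information they generate eventually brings the whole society onto the new technology — the public-good role the abstract describes.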


2015 ◽  
Vol 12 (2) ◽  
pp. 104-118 ◽  
Author(s):  
Frank T. Bergmann ◽  
Nicolas Rodriguez ◽  
Nicolas Le Novère

Summary Several standard formats have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. The Open Modeling EXchange format (OMEX) supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, an optional metadata file, and the files describing the model. The manifest is an XML file listing all files included in the archive and their type. The metadata file provides additional information about the archive and its content. Although any format can be used, we recommend an XML serialization of the Resource Description Framework. Together with the other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails.
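The ZIP-plus-manifest layout is simple enough to sketch. The following builds a minimal OMEX-style archive; the manifest namespace and format URIs follow the published OMEX convention, but this is an illustrative sketch, not a reference implementation:

```python
import zipfile
from xml.sax.saxutils import quoteattr

def write_omex(archive_path, files):
    """Write a minimal OMEX/COMBINE archive: a ZIP containing manifest.xml
    plus the listed files. `files` maps archive location -> (content, format URI)."""
    ns = "http://identifiers.org/combine.specifications/omex-manifest"
    # The manifest lists itself as the first content entry.
    entries = ['<content location="./manifest.xml" format=%s/>' % quoteattr(ns)]
    for loc, (_, fmt) in files.items():
        entries.append('<content location=%s format=%s/>'
                       % (quoteattr("./" + loc), quoteattr(fmt)))
    manifest = ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<omexManifest xmlns=%s>\n  %s\n</omexManifest>\n'
                % (quoteattr(ns), "\n  ".join(entries)))
    with zipfile.ZipFile(archive_path, "w") as z:
        z.writestr("manifest.xml", manifest)
        for loc, (content, _) in files.items():
            z.writestr(loc, content)
```

For example, `write_omex("model.omex", {"model.xml": ("<sbml/>", "http://identifiers.org/combine.specifications/sbml")})` produces a two-entry archive whose manifest declares the model file's format.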


2016 ◽  
Vol 49 (1) ◽  
pp. 302-310 ◽  
Author(s):  
Michael Kachala ◽  
John Westbrook ◽  
Dmitri Svergun

Recent advances in small-angle scattering (SAS) experimental facilities and data analysis methods have prompted a dramatic increase in the number of users and of projects conducted, causing an upsurge in the number of objects studied, experimental data available and structural models generated. To organize the data and models and make them accessible to the community, the Task Forces on SAS and hybrid methods for the International Union of Crystallography and the Worldwide Protein Data Bank envisage developing a federated approach to SAS data and model archiving. Within the framework of this approach, the existing databases may exchange information and provide independent but synchronized entries to users. At present, ways of exchanging information between the various SAS databases are not established, leading to possible duplication and incompatibility of entries, and limiting the opportunities for data-driven research for SAS users. In this work, a solution is developed to resolve these issues and provide a universal exchange format for the community, based on the use of the widely adopted crystallographic information framework (CIF). The previous version of the sasCIF format, implemented as an extension of the core CIF dictionary, has been available since 2000 to facilitate SAS data exchange between laboratories. The sasCIF format has now been extended to describe comprehensively the necessary experimental information, results and models, including relevant metadata for SAS data analysis and for deposition into a database. Processing tools for these files (sasCIFtools) have been developed, and these are available both as standalone open-source programs and integrated into the SAS Biological Data Bank, allowing the export and import of data entries as sasCIF files. Software modules that save the relevant information directly from beamline data-processing pipelines in sasCIF format have also been developed.
This update of sasCIF and the associated tools is an important step in standardizing the way SAS data are presented and exchanged, making the results easily accessible to users and further promoting the application of SAS in the structural biology community.
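CIF's tag-value syntax makes such files straightforward to read without heavy tooling. A minimal sketch of parsing a sasCIF-style data block — the item names below are hypothetical simplifications for illustration; the real sasCIF dictionary defines the authoritative tags, and this parser ignores loops and multi-line values:

```python
def parse_cif_block(text):
    """Extract simple `_tag value` pairs from one CIF data block.
    (Simplified: no loop_ constructs, no multi-line semicolon values.)"""
    items = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("_"):
            tag, _, value = line.partition(" ")
            items[tag] = value.strip().strip("'\"")
    return items

# Hypothetical, simplified sasCIF-style entry (tags for illustration only).
block = """\
data_sas_example
_sample.name           'lysozyme'
_sample.concentration  4.5
_instrument.name       'beamline P12'
"""
items = parse_cif_block(block)
```

A real tool would use a CIF library to honor the full syntax, but the flat tag-value core is what lets beamline pipelines emit sasCIF directly, as the abstract describes.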


2013 ◽  
Vol 203-204 ◽  
pp. 42-47
Author(s):  
Albert Prodan ◽  
Herman J.P. van Midden ◽  
Erik Zupanič ◽  
Rok Žitko

Charge density wave (CDW) ordering in NbSe3 and the structurally related quasi-one-dimensional compounds is reconsidered. Since the modulated ground state is characterized by unstable nano-domains, the structural information obtained from diffraction experiments must be supplemented by additional information from a method able to reveal details at the unit-cell level. Low-temperature (LT) scanning tunneling microscopy (STM) can resolve both the local atomic structure and the superimposed charge density modulation. It is shown that the established model for NbSe3, with two incommensurate (IC) modes q1 = (0, 0.241, 0) and q2 = (0.5, 0.260, 0.5), locked in at T1 = 144 K and T2 = 59 K and separately confined to two of the three available types of bi-capped trigonal prismatic (BCTP) columns, must be modified. The alternative explanation is based on the existence of modulated layered nano-domains and is in good accord with the available LT STM results. These confirm, inter alia, the presence of both IC modes above the lower CDW transition temperature. Two BCTP columns belonging to a symmetry-related pair are as a rule alternately modulated by the two modes. Such pairs of columns are ordered into unstable layered nano-domains, whose q1 and q2 sub-layers are easily interchanged. The mutually interchangeable sections of the two unstable IC modes maintain a temperature-dependent long-range ordering. Both modes can formally be replaced by a single highly anharmonic long-period commensurate CDW.


2014 ◽  
Vol 70 (a1) ◽  
pp. C491-C491
Author(s):  
Jürgen Haas ◽  
Alessandro Barbato ◽  
Tobias Schmidt ◽  
Steven Roth ◽  
Andrew Waterhouse ◽  
...  

Computational modeling and prediction of three-dimensional macromolecular structures and complexes from their sequence has been a long-standing goal in structural biology. Over the last two decades, a paradigm shift has occurred: starting from a large "knowledge gap" between the huge number of protein sequences and the small number of experimentally known structures, today some form of structural information – either experimental or computational – is available for the majority of amino acids encoded by common model organism genomes. Methods for structure modeling and prediction have made substantial progress over the last decades, and template-based homology modeling techniques have matured to a point where they are now routinely used to complement experimental techniques. However, computational modeling and prediction techniques often fall short in accuracy compared to high-resolution experimental structures, and it is often difficult to convey the expected accuracy and structural variability of a specific model. Retrospectively assessing the quality of blind structure predictions against experimental reference structures allows benchmarking the state of the art in structure prediction and identifying areas that need further development. The Critical Assessment of Structure Prediction (CASP) experiment has for the last 20 years assessed progress in the field of protein structure modeling, based on predictions for ca. 100 blind prediction targets per experiment that are carefully evaluated by human experts. The "Continuous Model EvaluatiOn" (CAMEO) project aims to provide a fully automated blind assessment for prediction servers based on weekly pre-released sequences of the Protein Data Bank (PDB). CAMEO has been made possible by the development of novel scoring methods such as lDDT, which are robust against domain movements and allow automated continuous structure comparison without human intervention.
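The robustness of lDDT to domain movements comes from it being superposition-free: it scores preserved local inter-atomic distances rather than coordinates after alignment. A simplified sketch of the idea (the published method adds an inclusion radius per atom, stereochemical checks and symmetry handling that are omitted here):

```python
import math

def lddt(ref, model, radius=15.0, thresholds=(0.5, 1.0, 2.0, 4.0)):
    """Simplified lDDT. Atoms are (residue_id, x, y, z) tuples, with ref and
    model in the same order. No superposition is performed, so rigid domain
    movements that preserve local distances are not penalized."""
    def dist(a, b):
        return math.dist(a[1:], b[1:])

    scores = []
    for t in thresholds:
        kept = total = 0
        for i in range(len(ref)):
            for j in range(i + 1, len(ref)):
                if ref[i][0] == ref[j][0]:
                    continue                      # skip intra-residue pairs
                d_ref = dist(ref[i], ref[j])
                if d_ref > radius:
                    continue                      # only local reference contacts
                total += 1
                if abs(dist(model[i], model[j]) - d_ref) < t:
                    kept += 1
        scores.append(kept / total if total else 1.0)
    # lDDT is the fraction of preserved distances, averaged over thresholds.
    return sum(scores) / len(scores)
```

A perfect model scores 1.0; a model whose local contacts are all distorted beyond 4 Å scores 0.0, regardless of any global alignment.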

