High throughput experiments: Recently Published Documents

Total documents: 161 (last five years: 76)
H-index: 18 (last five years: 4)

Biomolecules, 2022, Vol. 12(1), pp. 140
Authors: Georgios N. Dimitrakopoulos, Maria I. Klapa, Nicholas K. Moschonas

More than fifteen years after the first high-throughput experiments for human protein–protein interaction (PPI) detection, we are still asking how close we are to completing the reconstruction of the genome-scale human PPI network, what needs to be explored further, and whether the biological insights gained from the holistic investigation of the current network are valid and useful. The unique structure of PICKLE, a meta-database of the human experimentally determined direct PPI network developed by our group, presently covering ~80% of the UniProtKB/Swiss-Prot reviewed human complete proteome, enables the evaluation of the interactome's expansion by comparing the successive PICKLE releases since 2013. We observe a gradual overall increase of 39%, 182%, and 67% in protein nodes, PPIs, and supporting references, respectively. Our results indicate that, in recent years, (a) the PPI addition rate has decreased, (b) the new PPIs are largely determined by high-throughput experiments and mainly concern existing protein nodes, and (c) as we had predicted earlier, most of the newly added protein nodes have a low degree. These observations, combined with a largely overlapping k-core between PICKLE releases and an increase in network density, imply that an almost complete picture of a structurally defined network has been reached. The comparative unsupervised application of two clustering algorithms indicated that exploring the full interactome topology can reveal protein neighborhoods involved in closely related biological processes, such as transcriptional regulation and cell signaling, and in multiprotein complexes, such as the connexon complex associated with cancers. A well-reconstructed human protein interactome is a powerful tool for network biology and medicine research, forming the basis for multi-omic and dynamic analyses.
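
The release-over-release comparison above rests on standard graph metrics (node and edge counts, density, degree distribution, k-core). Below is a minimal sketch, not the PICKLE pipeline itself, of how such metrics can be computed on a toy undirected PPI network with networkx; the edge list is a small illustrative example, not data taken from PICKLE.

```python
# Toy topological analysis of a small PPI network (illustrative only).
import networkx as nx

# Small example edge list (protein A interacts with protein B)
edges = [
    ("TP53", "MDM2"), ("TP53", "EP300"), ("EP300", "CREBBP"),
    ("MDM2", "MDM4"), ("CREBBP", "TP53"), ("MDM4", "EP300"),
]

G = nx.Graph()
G.add_edges_from(edges)

print("nodes:", G.number_of_nodes())
print("PPIs:", G.number_of_edges())
print("density:", nx.density(G))

# Degree distribution: newly added nodes in recent releases were mostly low-degree
print("degrees:", dict(G.degree()))

# k-core: the maximal subgraph in which every node has degree >= k;
# a largely overlapping k-core across releases suggests a settled network backbone
core_numbers = nx.core_number(G)
k = max(core_numbers.values())
kcore = nx.k_core(G, k=k)
print(f"{k}-core nodes:", sorted(kcore.nodes()))
```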


2021
Authors: Benjamin L. de Bivort, Sean M. Buchanan, Kyobi J. Skutt-Kakaria, Erika Gajda, Chelsea J. O'Leary, ...

Individual animals behave differently from each other. This variability is a component of personality and arises even when genetics and environment are held constant. Discovering the biological mechanisms underlying behavioral variability depends on efficiently measuring individual behavioral bias, a requirement that is facilitated by automated, high-throughput experiments. We compiled a large data set of individual locomotor behavior measures, acquired from over 183,000 fruit flies walking in Y-shaped mazes. With this data set, we first conducted a "computational ethology natural history" study to quantify the distribution of individual behavioral biases with unprecedented precision and to examine correlations between behavioral measures with high power. We discovered a slight, but highly significant, left bias in spontaneous locomotor decision-making. We then used the data to evaluate standing hypotheses about biological mechanisms affecting behavioral variability, specifically the neuromodulator serotonin and its precursor transporter, heterogametic sex, and temperature. We found a variety of significant effects associated with each of these mechanisms that were behavior-dependent. This indicates that the relationship between biological mechanisms and behavioral variability may be highly context-dependent. Going forward, automation of behavioral experiments will likely be essential in teasing out the complex causality of individuality.
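
As an illustration of the kind of individual bias measurement described above, the sketch below scores a per-fly left-turn bias from Y-maze turn counts and tests for a slight population-level left bias. It is not the authors' analysis pipeline, and all data here are simulated.

```python
# Per-individual turn-bias estimation and a population-level bias test (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_flies, n_turns = 1000, 200
# Simulate heterogeneous individual biases centred slightly above 0.5 (slight left bias)
true_bias = rng.beta(40, 39, size=n_flies)           # per-fly P(turn left)
left_turns = rng.binomial(n_turns, true_bias)         # observed left-turn counts

bias = left_turns / n_turns                           # per-fly behavioral bias estimate

# Population-level test: is the mean bias different from 0.5?
t, p = stats.ttest_1samp(bias, popmean=0.5)
print(f"mean bias = {bias.mean():.3f}, t = {t:.2f}, p = {p:.2g}")

# Individual variability: spread of biases beyond pure binomial sampling noise
expected_sampling_sd = np.sqrt(0.25 / n_turns)
print(f"observed SD = {bias.std():.3f}, sampling-only SD ~ {expected_sampling_sd:.3f}")
```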


2021
Authors: Stephanie Eugenie Brigitte McArdle, Kinana Habra, Joshua R. D. Pearson

Monolayer cell cultures, while useful for basic in vitro studies, are not physiologically relevant. Spheroids, on the other hand, provide a more complex, three-dimensional (3D) structure that more closely resembles in vivo tumour growth, so results obtained with them on proliferation, cell death, differentiation, metabolism, and various anti-tumour therapies are more predictive of in vivo outcomes. However, the cost associated with their generation, which often involves expensive plates, media, and growth supplements, has limited their use in high-throughput experiments. The protocol herein presents a novel and rapid method for generating single spheroids from various cancer cell lines (the brain cancer cells U87 MG, SEBTA-027, and SF188, and the prostate cancer cells DU-145 and TRAMP-C1) in round-bottom 96-well plates. Cells are washed with anti-adherent solution, and a homogeneous, compact spheroid morphology is evident as early as 24 hours after a 10-minute centrifugation of the seeded cells. Using confocal microscopy, proliferating cells were traced to the rim and dead cells to the core region of the spheroid. H&E staining of spheroid slices and western blotting were used to investigate the tightness of cell packing by adhesion proteins. Carnosine was used as an example treatment for U87 single spheroids. The protocol allows the rapid generation of spheroids, which will help reduce the number of tests performed on animals.


Pharmaceutics, 2021, Vol. 13(12), pp. 2135
Authors: Elena Lietta, Alessandro Pieri, Elisa Innocenti, Roberto Pisano, Marco Vanni, ...

Chromatography is a widely used separation process for the purification of biopharmaceuticals that can achieve high purities and concentrations. The phenomena that occur during separation, namely mass transfer and adsorption, are quite complex. To better understand these phenomena and their mechanisms, multi-component adsorption isotherms must be investigated. High-throughput methodologies are a very powerful tool for determining adsorption isotherms and consume only small amounts of sample and chemicals, but the quantification of component concentrations is a real bottleneck in multi-component isotherm determination. The behavior of bovine serum albumin, Corynebacterium diphtheriae CRM197 protein, and lysozyme, selected as model proteins, in binary mixtures on a hydrophobic resin is investigated here. In this work, we propose a new method for determining multi-component adsorption isotherms using high-throughput experiments with filter plates, exploiting microfluidic capillary electrophoresis for quantification. The precision and accuracy of the microfluidic capillary electrophoresis platform were evaluated in order to assess the procedure; both were found to be high, so the procedure is reliable for determining adsorption isotherms of binary mixtures. Multi-component adsorption isotherms were determined with a fully high-throughput procedure that proved to be a very fast and powerful tool. The same procedure can be applied to any kind of high-throughput screening.
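
For readers unfamiliar with multi-component isotherms, the sketch below fits a competitive Langmuir model to synthetic binary equilibrium data of the kind a filter-plate experiment would produce. The model choice, parameter values, and data are illustrative assumptions, not the isotherm formulation or measurements used in the paper.

```python
# Fit a two-component competitive Langmuir isotherm to synthetic equilibrium data.
import numpy as np
from scipy.optimize import least_squares

def competitive_langmuir(params, c1, c2):
    qmax1, k1, qmax2, k2 = params
    denom = 1.0 + k1 * c1 + k2 * c2
    return qmax1 * k1 * c1 / denom, qmax2 * k2 * c2 / denom

# Synthetic "measured" liquid-phase concentrations and bound amounts (arbitrary units)
c1 = np.array([0.5, 1.0, 2.0, 4.0, 1.0, 2.0])
c2 = np.array([0.5, 1.0, 2.0, 4.0, 4.0, 0.5])
q1_obs, q2_obs = competitive_langmuir([60.0, 1.2, 45.0, 0.8], c1, c2)
q1_obs = q1_obs + np.random.default_rng(1).normal(0, 0.5, c1.size)  # add noise

def residuals(params):
    q1, q2 = competitive_langmuir(params, c1, c2)
    return np.concatenate([q1 - q1_obs, q2 - q2_obs])

fit = least_squares(residuals, x0=[50.0, 1.0, 50.0, 1.0], bounds=(0, np.inf))
print("fitted [qmax1, K1, qmax2, K2]:", fit.x.round(2))
```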


2021, Vol. 7(1)
Authors: Qiaohao Liang, Aldair E. Gongora, Zekun Ren, Armi Tiihonen, Zhe Liu, ...

Bayesian optimization (BO) has been leveraged for guiding autonomous and high-throughput experiments in materials science. However, few studies have evaluated the efficiency of BO across a broad range of experimental materials domains. In this work, we quantify the performance of BO with a collection of surrogate model and acquisition function pairs across five diverse experimental materials systems. By defining acceleration and enhancement metrics for materials optimization objectives, we find that surrogate models such as a Gaussian process (GP) with anisotropic kernels and a random forest (RF) have comparable performance in BO, and both outperform the commonly used GP with isotropic kernels. The GP with anisotropic kernels demonstrated the greatest robustness, yet the RF is a close alternative and warrants more consideration because it is free from distributional assumptions, has lower time complexity, and requires less effort in initial hyperparameter selection. We also raise awareness about the benefits of using a GP with anisotropic kernels in future materials optimization campaigns.
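
The sketch below contrasts the two surrogates discussed above, a GP with an anisotropic (ARD) Matern kernel and a random forest, inside a small expected-improvement BO loop on a synthetic two-dimensional objective. It assumes scikit-learn and is only an illustration of the setup, not the paper's benchmarking code or its materials datasets.

```python
# Minimal BO loop comparing a GP (anisotropic kernel) and an RF surrogate.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def objective(x):  # toy "materials property" to maximize
    return -(x[:, 0] - 0.3) ** 2 - 5 * (x[:, 1] - 0.7) ** 2

def expected_improvement(mu, sigma, best):
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def run_bo(surrogate, n_init=5, n_iter=20):
    X = rng.uniform(size=(n_init, 2))
    y = objective(X)
    for _ in range(n_iter):
        surrogate.fit(X, y)
        cand = rng.uniform(size=(512, 2))            # random candidate pool
        if isinstance(surrogate, GaussianProcessRegressor):
            mu, sigma = surrogate.predict(cand, return_std=True)
        else:                                         # RF: spread across trees as uncertainty
            per_tree = np.stack([t.predict(cand) for t in surrogate.estimators_])
            mu, sigma = per_tree.mean(0), per_tree.std(0)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[None, :]))
    return y.max()

gp = GaussianProcessRegressor(kernel=Matern(length_scale=[1.0, 1.0], nu=2.5),
                              normalize_y=True)       # list length_scale -> anisotropic (ARD)
rf = RandomForestRegressor(n_estimators=100, random_state=0)
print("best (GP, anisotropic kernel):", run_bo(gp))
print("best (random forest):         ", run_bo(rf))
```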


2021
Authors: Angel Fernando Cisneros Caballero, Francois D. Rouleau, Carla Bautista, Pascale Lemieux, Nathan Dumont-Leblond

Synthetic biology aims to engineer biological circuits, which often involve gene expression. Riboswitches are a particularly promising group of regulatory elements because of their versatility with respect to their targets, but early synthetic designs were less attractive because of their reduced dynamic range relative to protein regulators. Only recently has the creation of toehold switches helped overcome this obstacle, while also providing an unprecedented degree of orthogonality. However, a lack of automated design and optimization tools prevents the widespread and effective use of toehold switches in high-throughput experiments. To address this, we developed Toeholder, a comprehensive open-source software tool for toehold switch design and in silico benchmarking. Toeholder takes into consideration sequence constraints as well as data derived from molecular dynamics simulations of a toehold switch. We describe the software and its in silico validation results, as well as its potential applications and impact on the management and design of toehold switches.
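
To make the notion of sequence constraints concrete, the sketch below shows two generic checks a toehold-switch designer typically performs: deriving the sensing region as the reverse complement of a candidate trigger window, and verifying that a downstream reading frame is free of stop codons. It is not Toeholder's actual algorithm, and all sequences are made up.

```python
# Two generic sequence checks for a toehold-switch candidate (illustrative only).
COMPLEMENT = str.maketrans("AUGC", "UACG")

def reverse_complement(rna: str) -> str:
    return rna.translate(COMPLEMENT)[::-1]

def has_stop_codon(frame: str) -> bool:
    stops = {"UAA", "UAG", "UGA"}
    return any(frame[i:i + 3] in stops for i in range(0, len(frame) - 2, 3))

trigger_mrna = "AUGGCUAAGCGAUUACGGAUCCUAGCAUGGCCA"   # hypothetical target transcript
window = trigger_mrna[5:29]                          # candidate 24-nt trigger window

sensing_region = reverse_complement(window)          # goes into the toehold + stem
linker_frame = "AACCUGGCGGCAGCGCAAAAG"               # hypothetical downstream reading frame

print("sensing region:", sensing_region)
print("frame clear of stops:", not has_stop_codon(linker_frame))
```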


2021, Vol. 92(11), pp. 114104
Authors: Anandvinod Dalmiya, Jai M. Mehta, Robert S. Tranter, Patrick T. Lynch

2021
Authors: Ho Yin Yuen, Jesper Jansson

Background: Protein-protein interaction (PPI) data is an important type of data used in functional genomics. However, inaccuracies in high-throughput experiments often result in incomplete PPI data. Computational techniques are thus used to infer missing data and to evaluate confidence scores, with link prediction being one such approach that uses the structure of the network of PPIs known so far to find good candidates for missing PPIs. Recently, a new idea called the L3 principle introduced biological motivation into PPI link prediction, yielding predictors that are superior to general-purpose link predictors for complex networks. However, the previously developed L3-principle-based link predictors are only an approximate implementation of the L3 principle. As such, not only is the full potential of the L3 principle left unrealized, but these predictors may even penalize candidate PPIs that otherwise fit the L3 principle. Results: In this article, we propose a formulation of link predictors without approximation, which we call ExactL3 (L3E), by addressing elements missing from existing L3 predictors from a network-modeling perspective. Through statistical and biological metrics, we show that, in general, L3E predictors perform better than the previously proposed methods on seven datasets across two organisms (human and yeast) using a reasonable amount of computation time. In addition to L3E ranking the PPIs more accurately, we also found that L3-based predictors, including L3E, predicted a different pool of real PPIs than the general-purpose link predictors. This suggests that different types of PPIs can be predicted based on different topological assumptions, and that even better PPI link predictors may be obtained in the future through improved network modeling.
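
For context, the sketch below implements the degree-normalized L3 path count from the earlier literature that L3E refines: for a candidate pair (x, y), it sums 1/sqrt(k_u * k_v) over all length-3 paths x-u-v-y. This is the baseline idea only, not the exact L3E formulation proposed in the article; networkx is assumed and the toy network is hypothetical.

```python
# Baseline L3 (degree-normalized length-3 path count) link-prediction score.
import math
import networkx as nx

def l3_score(G: nx.Graph, x, y) -> float:
    """Sum 1/sqrt(k_u * k_v) over all length-3 paths x - u - v - y."""
    score = 0.0
    for u in G[x]:
        for v in G[u]:
            if v != x and v != y and u != y and y in G[v]:
                score += 1.0 / math.sqrt(G.degree(u) * G.degree(v))
    return score

# Toy PPI network with hypothetical protein names
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("A", "E"),
              ("E", "F"), ("F", "D"), ("B", "F")])

# Rank all non-adjacent pairs as candidate missing PPIs
candidates = [(x, y) for x in G for y in G if x < y and not G.has_edge(x, y)]
ranked = sorted(candidates, key=lambda p: l3_score(G, *p), reverse=True)
for pair in ranked[:3]:
    print(pair, round(l3_score(G, *pair), 3))
```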


2021
Authors: Francois Charih, Kyle K. Biggar, James R. Green

Engineering peptides to achieve a desired therapeutic effect through the inhibition of a specific target activity or protein interaction is a non-trivial task. Few of the existing in silico peptide design algorithms generate target-specific peptides; instead, many methods produce peptides that achieve a desired effect through an unknown mechanism. In contrast with resource-intensive high-throughput experiments, in silico screening is a cost-effective alternative that can prune the space of candidates when engineering target-specific peptides. Using a set of FDA-approved peptides that we curated specifically for this task, we assess the applicability of several sequence-based protein-protein interaction predictors as a screening tool within the context of peptide therapeutic engineering. We show that similarity-based protein-protein interaction predictors are more suitable for this purpose than the current state-of-the-art deep learning methods. We also show that this approach is mostly useful when designing new peptides against targets for which naturally occurring interactors are already known, and that deploying it for de novo peptide engineering tasks may require gathering additional target-specific training data. Taken together, this work offers evidence supporting the use of similarity-based protein-protein interaction predictors for peptide therapeutic engineering, especially for peptide analogs.
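
A minimal sketch of the similarity-based screening idea the abstract argues for: rank candidate peptides against a target by their best alignment score to the target's known interactors. Biopython's pairwise aligner is assumed, all sequences and names are hypothetical, and this is not the specific predictor evaluated by the authors.

```python
# Rank candidate peptides by similarity to a target's known interactors (illustrative).
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score, aligner.extend_gap_score = -10.0, -0.5

# Known (hypothetical) interactor sequences of the target protein
known_interactors = ["MKTAYIAKQRQISFVK", "MKPAYLAKQRNISFTK"]

# Candidate therapeutic peptides to screen
candidates = {"pep1": "MKTAYIAKQRQ", "pep2": "GGGGSGGGGS", "pep3": "AKQRNISFT"}

def similarity_score(peptide: str) -> float:
    # Best global alignment score against any known interactor
    return max(aligner.score(peptide, ref) for ref in known_interactors)

for name, seq in sorted(candidates.items(), key=lambda kv: -similarity_score(kv[1])):
    print(name, round(similarity_score(seq), 1))
```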

