A system for multiplexed selection of aptamers with exquisite specificity without counter-selection

2021
Author(s):  
Alex Yoshikawa ◽  
Leighton Wan ◽  
Liwei Zheng ◽  
Michael Eisenstein ◽  
Hyongsok Tom Soh

Aptamers have proven to be valuable tools for the detection of small molecules due to their remarkable ability to discriminate between structurally similar molecules. Most aptamer selection efforts have relied on counter-selection to eliminate aptamers that exhibit unwanted cross-reactivity to interferents or to structurally similar relatives of the target of interest. However, because the affinity and specificity characteristics of an aptamer library are fundamentally unknowable a priori, it is not possible to determine the optimal counter-selection parameters. As a result, counter-selection experiments require trial-and-error approaches that are inherently inefficient and may not yield aptamers with the best combination of affinity and specificity. In this work, we describe a high-throughput screening process for generating high-specificity aptamers to multiple targets in parallel while eliminating the need for counter-selection. We employ a platform based on a modified benchtop sequencer to conduct a massively parallel aptamer screening process that enables the selection of highly specific aptamers against multiple structurally similar molecules in a single experiment, without any counter-selection. As a demonstration, we selected, in a single experiment, aptamers with high affinity and exquisite specificity for three structurally similar kynurenine metabolites that differ by a single hydroxyl group. This process can easily be adapted to other small-molecule analytes and should greatly accelerate the development of aptamer reagents that achieve exquisite specificity for their target analytes.

2021
Author(s):  
Zuzana Svobodová ◽  
Jakub Novotny ◽  
Barbora Ospalkova ◽  
Marcela Slovakova ◽  
Zuzana Bilkova ◽  
...  

The key factor in the development of antibody-based assays is to find an antibody that has an appropriate affinity, high specificity, and low cross-reactivity. However, this task is not easy...


2019
Author(s):  
Guillaume A Rousselet

Most statistical inferences in psychology are based on frequentist statistics, which rely on sampling distributions: the long-run outcomes of multiple experiments, given a certain model. Yet, sampling distributions are poorly understood and rarely explicitly considered when making inferences. In this article, I demonstrate how to use simulations to illustrate sampling distributions to answer simple practical questions: for instance, if we could run thousands of experiments, what would the outcome look like? What do these simulations tell us about the results from a single experiment? Such simulations can be run a priori, given expected results, or a posteriori, using existing datasets. Both approaches can help make explicit the data generating process and the sources of variability; they also reveal the large variability in our experimental estimation and lead to the sobering realisation that, in most situations, we should not make a big deal out of results from a single experiment. Simulations can also help demonstrate how the selection of effect sizes conditional on some arbitrary cut-off (p≤0.05) leads to a literature crammed with false positives, a powerful illustration of the damage done in part by researchers’ over-confidence in their statistical tools. The article focuses on graphical descriptions and covers examples using correlation analyses, percent correct data and response latency data.
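The simulation workflow described in this abstract can be sketched in a few lines: repeatedly draw samples from an assumed data-generating process and inspect the resulting sampling distribution of the estimate. The effect size, sample size, and number of simulated experiments below are illustrative assumptions, not values from the article.

```python
# Simulate the sampling distribution of the sample mean under an assumed
# data-generating process (here, a normal distribution).
import random
import statistics

def simulate_mean_estimates(mu=0.5, sigma=1.0, n=20, n_experiments=10000, seed=1):
    """Run many hypothetical experiments; return one mean estimate per experiment."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_experiments):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        estimates.append(statistics.mean(sample))
    return estimates

estimates = simulate_mean_estimates()
# The spread of the estimates is the point of the exercise: even with the
# model known exactly, single-experiment results vary substantially.
print(round(statistics.mean(estimates), 2))   # should be close to the true mean 0.5
print(round(statistics.stdev(estimates), 2))  # should be close to sigma/sqrt(n) ~ 0.22
```

Plotting a histogram of `estimates` makes the a priori variability visible before any real data are collected, which is the article's central recommendation.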


Author(s):  
Maria A. Milkova

Nowadays, information accumulates so rapidly that the conventional notion of iterative search needs revision. In a world oversaturated with information, comprehensively covering and analyzing a research problem places high demands on search methods. An innovative approach to search should flexibly account for the large body of already accumulated knowledge and for a priori requirements on the results. The results, in turn, should immediately provide a roadmap of the field under study, with the option of drilling down into as much detail as needed. Search based on topic modeling, so-called topic search, satisfies these requirements and thereby streamlines work with information, increases the efficiency of knowledge production, and helps avoid cognitive biases in the perception of information, which matters at both the micro and macro level. To demonstrate topic search in practice, the article considers the task of analyzing an import-substitution program using patent data. The program includes plans for 22 industries and covers more than 1,500 products and technologies proposed for import substitution. Patent search based on topic modeling makes it possible to search directly by blocks of a priori information, the terms of the industrial import-substitution plans, and to obtain a selection of relevant documents for each industry. This approach not only provides a comprehensive picture of the effectiveness of the program as a whole, but also yields more detailed information about which groups of products and technologies have been patented.
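The core idea of searching by blocks of a priori terms can be illustrated with a deliberately simplified sketch. This is a toy keyword-block scorer, not the article's actual topic-modeling pipeline; the industry term blocks and patent snippets are invented placeholders.

```python
# Toy "topic search": each industry plan contributes a block of a priori
# terms, and every document is scored against all blocks at once.
industry_terms = {
    "machine tools": {"lathe", "milling", "spindle"},
    "pharma": {"antibiotic", "vaccine", "synthesis"},
}

documents = [
    "high speed spindle assembly for a milling machine",
    "method for large scale vaccine antigen synthesis",
]

def topic_search(docs, term_blocks):
    """Assign each document to the term block with the most word overlaps."""
    results = {}
    for doc in docs:
        words = set(doc.lower().split())
        scores = {name: len(words & terms) for name, terms in term_blocks.items()}
        best = max(scores, key=scores.get)
        results[doc] = (best, scores[best])
    return results

for doc, (industry, hits) in topic_search(documents, industry_terms).items():
    print(f"{industry} ({hits} hits): {doc}")
```

A real topic-search system would replace the hand-written term sets with topics learned by a model such as LDA, but the retrieval logic, scoring documents against blocks of a priori terms rather than a single query, is the same.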


2017
Vol 6 (2)
pp. 5256
Author(s):  
Daryoush Shafiei ◽  
Prof. Basavaiah*

In mulberry (Morus spp.), selecting promising hybrids from an F1 population requires screening a large number of progenies over a long period. To develop a simpler and faster screening approach, studies were conducted using F1 seeds of two crosses. The screening studies on seed size and seedling size are reported separately in two parts. In this part, the F1 seeds were size-graded into small, medium and large classes; their progenies were raised separately and screened in the nursery. Seed size varied considerably, and medium-size seeds formed the largest percentage in both crosses. Seed length, width and weight also differed significantly between the size classes in both crosses. The seed size classes differed highly significantly in shoot length and root collar diameter, and differed significantly in root length and seedling weight. The positive correlations between seed size and seedling growth, seed size and germination, and seed size and seedling survival in the nursery indicate that size-grading the seeds and rejecting small seeds at the start of screening may increase screening efficiency by raising the chances of obtaining superior hybrids from a limited number of progenies. However, the performance of large seedlings from the small seed-size class needs to be confirmed before drawing conclusions. Hence, the studies continue with size-grading of seedlings in the next part of the screening study.


Author(s):  
Laure Fournier ◽  
Lena Costaridou ◽  
Luc Bidaut ◽  
Nicolas Michoux ◽  
Frederic E. Lecouvet ◽  
...  

Abstract Existing quantitative imaging biomarkers (QIBs) are associated with known biological tissue characteristics and follow a well-understood path of technical, biological and clinical validation before incorporation into clinical trials. In radiomics, novel data-driven processes extract numerous visually imperceptible statistical features from the imaging data with no a priori assumptions on their correlation with biological processes. The selection of relevant features (radiomic signature) and incorporation into clinical trials therefore requires additional considerations to ensure meaningful imaging endpoints. Also, the number of radiomic features tested means that power calculations would result in sample sizes impossible to achieve within clinical trials. This article examines how the process of standardising and validating data-driven imaging biomarkers differs from those based on biological associations. Radiomic signatures are best developed initially on datasets that represent diversity of acquisition protocols as well as diversity of disease and of normal findings, rather than within clinical trials with standardised and optimised protocols as this would risk the selection of radiomic features being linked to the imaging process rather than the pathology. Normalisation through discretisation and feature harmonisation are essential pre-processing steps. Biological correlation may be performed after the technical and clinical validity of a radiomic signature is established, but is not mandatory. Feature selection may be part of discovery within a radiomics-specific trial or represent exploratory endpoints within an established trial; a previously validated radiomic signature may even be used as a primary/secondary endpoint, particularly if associations are demonstrated with specific biological processes and pathways being targeted within clinical trials. 
Key Points
• Data-driven processes like radiomics risk false discoveries due to the high dimensionality of the dataset compared to the sample size, making adequate diversity of the data, cross-validation and external validation essential to mitigate the risks of spurious associations and overfitting.
• Use of radiomic signatures within clinical trials requires multistep standardisation of image acquisition, image analysis and data mining processes.
• Biological correlation may be established after clinical validation but is not mandatory.
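Discretisation, one of the pre-processing steps the abstract names, can be sketched as fixed-bin quantisation of continuous intensities. The bin count and intensity range below are illustrative assumptions, not parameters from the article.

```python
# Fixed-bin intensity discretisation: map continuous values to integer
# bin indices so that features become comparable across acquisitions.
def discretise(intensities, n_bins=8, lo=None, hi=None):
    """Map each intensity to a bin index in [0, n_bins - 1].

    lo/hi default to the per-image range; passing a fixed range instead
    is one way to harmonise features across scanners.
    """
    lo = min(intensities) if lo is None else lo
    hi = max(intensities) if hi is None else hi
    width = (hi - lo) / n_bins
    return [min(int((x - lo) / width), n_bins - 1) for x in intensities]

print(discretise([0.0, 0.3, 0.5, 1.0], n_bins=4))  # -> [0, 1, 2, 3]
```

Texture features computed on the binned values are far less sensitive to scanner-dependent intensity scaling, which is why discretisation precedes feature extraction.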


Plant Methods
2021
Vol 17 (1)
Author(s):  
Mohamed Ramadan ◽  
Muna Alariqi ◽  
Yizan Ma ◽  
Yanlong Li ◽  
Zhenping Liu ◽  
...  

Abstract
Background: Upland cotton (Gossypium hirsutum) harbors a complex allotetraploid genome consisting of A and D sub-genomes. Every gene has multiple copies with high sequence similarity, which makes genetic, genomic and functional analyses extremely challenging. The recently accessible CRISPR/Cas9 tool makes it possible to modify a targeted locus efficiently in various complicated plant genomes. However, the current cotton transformation method targets one gene at a time and requires a complicated, long and laborious regeneration process. Hence, an optimized strategy for targeting multiple genes is of great value in cotton functional genomics and genetic engineering.
Results: To target multiple genes in a single experiment, 112 plant development-related genes were knocked out with an optimized CRISPR/Cas9 system. We optimized the key steps of a pooled sgRNA assembly method in which 116 sgRNAs were pooled into 4 groups (each group consisting of 29 sgRNAs). Each group of sgRNAs was compiled in one PCR reaction, which then went through a single round of vector construction, transformation and sgRNA identification, and a single round of genetic transformation. Through Agrobacterium-mediated genetic transformation, we generated more than 800 plants. Mutants were identified by next-generation sequencing, which showed that all generated plants were positive and all targeted genes were covered. Interestingly, among the transgenic plants, 85% harbored a single sgRNA insertion, 9% two insertions, 3% three different sgRNA insertions, and 2.5% mutated sgRNAs. Plants carrying different targeted sgRNAs exhibited numerous combinations of phenotypes in flowering tissues.
Conclusion: All targeted genes were successfully edited with high specificity. Our pooled sgRNA assembly offers a simple, fast and efficient method for targeting multiple genes at one time, and should accelerate the study of gene function in cotton.


2020
Vol 10 (1)
Author(s):  
Aliaksei Vasilevich ◽  
Aurélie Carlier ◽  
David A. Winkler ◽  
Shantanu Singh ◽  
Jan de Boer

Abstract
Natural evolution tackles optimization by producing many genetic variants and exposing these variants to selective pressure, resulting in the survival of the fittest. We use high-throughput screening of large libraries of materials with differing surface topographies to probe the interactions of implantable device coatings with cells and tissues. However, the vast size of the possible parameter design space precludes a brute-force approach to screening all topographical possibilities. Here, we took inspiration from Nature to optimize material surface topographies using evolutionary algorithms. We show that successive cycles of material design, production, fitness assessment, selection, and mutation result in the optimization of biomaterial designs. Starting from a small selection of topographically designed surfaces that upregulate expression of an osteogenic marker, we used genetic crossover and random mutagenesis to generate new generations of topographies.
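The evolutionary loop the abstract describes (design, fitness assessment, selection, crossover, mutation) can be sketched with a toy bit-string "topography" and a synthetic fitness function standing in for the real osteogenic-marker readout; every parameter below is an illustrative assumption.

```python
# Minimal evolutionary-algorithm loop: selection of the fitter half,
# crossover between surviving parents, and per-bit random mutation.
import random

rng = random.Random(0)
GENOME_LEN = 16  # toy encoding of a surface topography as 16 design bits

def fitness(genome):
    # Placeholder fitness: count of 1-bits (stand-in for marker expression).
    return sum(genome)

def evolve(pop_size=20, generations=30, mutation_rate=0.05):
    population = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population (elitism).
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover + mutation refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # best fitness found; the maximum possible here is 16
```

In the study itself, "fitness assessment" is an experimental measurement (marker expression on fabricated surfaces), so each generation costs a fabrication-and-screening cycle; the algorithmic skeleton is unchanged.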

