Bayesian machine learning analysis of single-molecule fluorescence colocalization images

2021
Author(s):
Yerdos A. Ordabayev
Larry J. Friedman
Jeff Gelles
Douglas L. Theobald

Abstract Multi-wavelength single-molecule fluorescence colocalization (CoSMoS) methods allow elucidation of complex biochemical reaction mechanisms. However, analysis of CoSMoS data is intrinsically challenging because of low image signal-to-noise ratios, non-specific surface binding of the fluorescent molecules, and analysis methods that require subjective inputs to achieve accurate results. Here, we use Bayesian probabilistic programming to implement Tapqir, an unsupervised machine learning method based on a holistic, physics-based causal model of CoSMoS data. This method accounts for uncertainties in image analysis due to photon and camera noise, optical non-uniformities, non-specific binding, and spot detection. Rather than merely producing a binary “spot/no spot” classification of unspecified reliability, Tapqir objectively assigns spot classification probabilities that allow accurate downstream analysis of molecular dynamics, thermodynamics, and kinetics. We quantitatively validate Tapqir's performance against simulated CoSMoS image data with known properties, and we demonstrate that it implements fully objective, automated analysis of experiment-derived data sets with a wide range of signal, noise, and non-specific binding characteristics.
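The abstract describes Tapqir as a Bayesian probabilistic-programming model that assigns each frame a spot-classification probability rather than a hard label. As a rough illustration of that general idea only (not the published Tapqir model), the sketch below fits a two-component mixture of "background only" versus "background plus bound spot" to per-frame integrated intensities with Pyro; the priors, distributions, and variable names are simplifying assumptions chosen for brevity.

```python
# Hypothetical, highly simplified sketch of Bayesian spot classification.
# The real Tapqir model additionally accounts for camera noise, optical
# non-uniformity, and non-specific binding, as described in the abstract.
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoNormal

def mixture_model(intensities):
    pi = pyro.sample("pi", dist.Beta(1.0, 1.0))              # fraction of frames with a real spot
    mu_bg = pyro.sample("mu_bg", dist.LogNormal(3.0, 1.0))   # background-only mean intensity
    delta = pyro.sample("delta", dist.LogNormal(3.0, 1.0))   # extra intensity from a bound spot
    sigma = pyro.sample("sigma", dist.HalfNormal(10.0))
    weights = torch.stack([1.0 - pi, pi])
    locs = torch.stack([mu_bg, mu_bg + delta])
    components = dist.Normal(locs, sigma)
    with pyro.plate("frames", len(intensities)):
        pyro.sample("obs",
                    dist.MixtureSameFamily(dist.Categorical(weights), components),
                    obs=intensities)

guide = AutoNormal(mixture_model)
svi = SVI(mixture_model, guide, pyro.optim.Adam({"lr": 0.05}), Trace_ELBO())
data = torch.tensor([20.0, 22.0, 150.0, 19.0, 160.0, 21.0])  # toy integrated intensities
for _ in range(2000):
    svi.step(data)
```

Per-frame spot probabilities would then follow from the posterior responsibility of the "spot" component under the fitted parameters, which is the kind of probabilistic output the abstract contrasts with a binary spot/no-spot call.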

2019
Vol 9 (1)
Author(s):
Thomas Kurmann
Siqing Yu
Pablo Márquez-Neila
Andreas Ebneter
Martin Zinkernagel
...  

Abstract In ophthalmology, retinal biological markers, or biomarkers, play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies in use today can visualize these markers, Optical Coherence Tomography (OCT) is often the tool of choice due to its ability to image retinal structures in three dimensions at micrometer resolution. However, with widespread use in clinical routine and the growing prevalence of chronic retinal conditions, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Instead, automated analysis of scans using machine learning algorithms provides a cost-effective and reliable alternative to assist ophthalmologists in clinical routine and research. We present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. Our approach avoids the need for costly segmentation annotations and allows scans to be characterized by biomarker distributions. These can then be used to classify scans based on their underlying pathology in a device-independent way.
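Characterizing whole scans by which biomarkers they contain, without pixel-level segmentation labels, maps naturally onto multi-label image classification. A minimal sketch of that generic setup is shown below; the backbone, label count, and hyperparameters are illustrative assumptions, not the authors' architecture.

```python
# Generic multi-label biomarker detection from OCT B-scans: a CNN backbone
# with one sigmoid output per biomarker, trained with binary cross-entropy.
import torch
import torch.nn as nn
from torchvision import models

NUM_BIOMARKERS = 11                              # assumed number of biomarker labels

backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_BIOMARKERS)

criterion = nn.BCEWithLogitsLoss()               # independent per-label probabilities
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(scans, labels):
    """scans: (B, 3, H, W) float tensor; labels: (B, NUM_BIOMARKERS) 0/1 tensor."""
    optimizer.zero_grad()
    logits = backbone(scans)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference, per-scan biomarker probabilities characterize the scan:
# probs = torch.sigmoid(backbone(scans))
```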


2016
Vol 6 (1)
Author(s):
Daniela M. Borgmann
Sandra Mayr
Helene Polin
Susanne Schaller
Viktoria Dorfer
...  

2015
Vol 43 (2)
pp. 172-178
Author(s):
Namita Bisaria
Daniel Herschlag

Structured RNA molecules play roles in central biological processes, and understanding the basic forces and features that govern RNA folding kinetics and thermodynamics can help elucidate principles that underlie biological function. Here we investigate one such feature, the specific interaction of monovalent cations with a structured RNA, the P4–P6 domain of the Tetrahymena ribozyme. We employ single-molecule FRET (smFRET) approaches, as these allow determination of folding equilibrium and rate constants over a wide range of stabilities and thus allow direct comparisons without the need for extrapolation. These experiments provide additional evidence for specific binding of monovalent cations, Na+ and K+, to the RNA tetraloop–tetraloop receptor (TL–TLR) tertiary motif. These ions facilitate both folding and unfolding, consistent with an ability to help order the TLR for binding and to further stabilize the tertiary contact subsequent to attainment of the folding transition state.
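Because smFRET yields folding and unfolding rate constants directly, the folding equilibrium constant and free energy follow without extrapolation via K_fold = k_fold / k_unfold and ΔG°_fold = −RT ln K_fold. The toy calculation below illustrates that relationship; the rate constants are made up for the example and are not values from this study.

```python
# Illustrative thermodynamics from smFRET-style rate constants (hypothetical numbers).
import math

R = 0.0019872   # gas constant, kcal/(mol*K)
T = 298.15      # temperature, K

k_fold = 2.0    # s^-1, folding rate constant (assumed)
k_unfold = 0.5  # s^-1, unfolding rate constant (assumed)

K_fold = k_fold / k_unfold               # folding equilibrium constant
dG_fold = -R * T * math.log(K_fold)      # folding free energy, kcal/mol
print(f"K_fold = {K_fold:.2f}, dG_fold = {dG_fold:.2f} kcal/mol")
```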


2020
Author(s):
Sviatlana Shashkova
Thomas Nyström
Mark C Leake
Adam JM Wollman

Abstract Most cells adapt to their environment by switching combinations of genes on and off through a complex interplay of transcription factor proteins (TFs). The mechanisms by which TFs respond to signals, move into the nucleus, and find specific binding sites in target genes are still largely unknown. Single-molecule fluorescence microscopes, which can image single TFs in live cells, have begun to elucidate the problem. Here, we show that different environmental signals, in this case carbon sources, yield a unique single-molecule fluorescence pattern of foci of a key metabolic regulating transcription factor, Mig1, in the nucleus of the budding yeast Saccharomyces cerevisiae. This pattern serves as a ‘barcode’ of the gene regulatory state of the cells, which can be correlated with cell growth characteristics and other biological functions.

Highlights
- Single-molecule microscopy of transcription factors in live yeast
- Barcoding single-molecule nuclear fluorescence
- Correlation with cell growth characteristics
- Growth in different carbon sources


2020
Author(s):
Moritz Lürig
Seth Donoughe
Erik Svensson
Arthur Porto
Masahito Tsuboi

For centuries, ecologists and evolutionary biologists have used images such as drawings, paintings, and photographs to record and quantify the shapes and patterns of life. With the advent of digital imaging, biologists continue to collect image data at an ever-increasing rate. This immense body of data provides insight into a wide range of biological phenomena, including phenotypic trait diversity, population dynamics, mechanisms of divergence and adaptation, and evolutionary change. However, the rate of image acquisition frequently outpaces our capacity to manually extract meaningful information from the images. Moreover, manual image analysis is low-throughput, difficult to reproduce, and typically measures only a few traits at a time. This has proven to be an impediment to the growing field of phenomics - the study of many phenotypic dimensions together. Computer vision (CV), the automated extraction and processing of information from digital images, is a way to alleviate this longstanding analytical bottleneck. In this review, we illustrate the capabilities of CV for fast, comprehensive, and reproducible image analysis in ecology and evolution. First, we briefly review phenomics, arguing that ecologists and evolutionary biologists can most effectively capture phenomic-level data by using CV. Next, we describe the primary types of image-based data, and review CV approaches for extracting them (including techniques that entail machine learning and others that do not). We identify common hurdles and pitfalls, and then highlight recent successful implementations of CV in the study of ecology and evolution. Finally, we outline promising future applications for CV in biology. We anticipate that CV will become a basic component of the biologist’s toolkit, further enhancing data quality and quantity, and sparking changes in how empirical ecological and evolutionary research will be conducted.
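As a concrete, hypothetical illustration of the kind of automated trait extraction this review surveys, the sketch below segments a specimen photograph and measures a handful of shape and intensity traits with scikit-image; the file name, threshold choice, and trait list are assumptions for the example rather than a prescribed pipeline.

```python
# Minimal phenotypic trait extraction with classical computer vision (scikit-image).
import numpy as np
from skimage import io, color, filters, measure

image = io.imread("specimen.jpg")               # hypothetical RGB specimen photo
gray = color.rgb2gray(image)
mask = gray < filters.threshold_otsu(gray)      # assumes dark specimen on light background
labels = measure.label(mask)

# One row of traits per detected object, ready for phenomic analysis.
props = measure.regionprops_table(
    labels,
    intensity_image=gray,
    properties=("area", "perimeter", "eccentricity", "mean_intensity"),
)
print({k: np.round(v, 3) for k, v in props.items()})
```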


2020
Author(s):
Victor Anton
Jannes Germishuys
Matthias Obst

This paper describes a data system to analyse large amounts of subsea movie data for marine ecological research. The system consists of three distinct modules for data management and archiving, citizen science, and machine learning in a high-performance computation environment. It allows scientists to upload underwater footage to a customised citizen science website hosted by Zooniverse, where volunteers from the public classify the footage. Classifications with high agreement among citizen scientists are then used to train machine learning algorithms. An application programming interface allows researchers to test the algorithms and track biological objects in new footage. We tested our system using recordings from remotely operated vehicles (ROVs) in a Marine Protected Area, the Kosterhavet National Park in Sweden. Results indicate a strong decline of cold-water corals in the park over a period of 15 years, showing that our system makes it possible to effectively extract valuable occurrence and abundance data for key ecological species from underwater footage. We argue that the combination of citizen science tools, machine learning, and high-performance computers is key to successfully analysing large amounts of image data in the future, and we suggest that these services should be consolidated and interlinked by national and international research infrastructures.
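A key step in a pipeline like this is keeping only subjects on which citizen scientists agree strongly before using them as machine-learning training labels. The following is a minimal, hypothetical sketch of that aggregation step; the data layout, agreement threshold, and minimum vote count are assumptions rather than the project's actual configuration.

```python
# Filter citizen-science classifications by inter-annotator agreement.
from collections import Counter

def filter_by_agreement(classifications, threshold=0.8, min_votes=5):
    """classifications: dict mapping subject_id -> list of label strings."""
    training_labels = {}
    for subject_id, votes in classifications.items():
        if len(votes) < min_votes:
            continue                                  # too few votes to trust
        label, count = Counter(votes).most_common(1)[0]
        if count / len(votes) >= threshold:
            training_labels[subject_id] = label       # high-agreement training label
    return training_labels

votes = {
    "clip_001": ["coral"] * 9 + ["other"],                           # 90% agreement -> kept
    "clip_002": ["coral", "fish", "other", "coral", "fish"],         # ambiguous -> dropped
}
print(filter_by_agreement(votes))
```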


2020
Author(s):
Yujing Song
Jingyang Zhao
Tao Cai
Shiuan-Haur Su
Erin Sandford
...  

Abstract Serial measurement of a large panel of protein biomarkers near the bedside could provide a promising pathway to transform the critical care of acutely ill patients. However, attaining the combination of high sensitivity and multiplexity with a short assay turnaround poses a formidable technological challenge. Here, we developed a rapid, accurate, and highly multiplexed microfluidic digital immunoassay by incorporating machine learning-based autonomous image analysis. The assay achieved 14-plexed biomarker detection at concentrations < 10 pg/mL with a sample volume < 10 μL, completing all processes from sampling to analyzed data delivery within 30 min while requiring only a 5-min assay incubation. The assay procedure combined a spatial-spectral microfluidic encoding scheme with an image data analysis algorithm based on machine learning with a convolutional neural network (CNN) for pre-equilibrated single-molecule protein digital counting. This approach markedly reduced the errors that otherwise hamper high-capacity multiplexing of digital immunoassays at low protein concentrations. Longitudinal data obtained for a panel of 14 serum cytokines in human patients receiving chimeric antigen receptor T-cell (CAR-T) therapy demonstrated the assay's powerful biomarker profiling capability and its great potential for translation to near-real-time bedside immune status monitoring.
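At its core, a digital immunoassay counts the fraction of signal units that are "on", and the abstract describes a CNN performing that counting from images. The sketch below shows a generic patch classifier of that kind; the architecture, patch size, and decision threshold are illustrative assumptions, not the published network.

```python
# Generic CNN-based digital counting: classify per-bead image patches as on/off
# and report the on-fraction, which relates to analyte concentration.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, patches):                     # patches: (N, 1, 16, 16)
        return self.net(patches).squeeze(-1)        # one logit per patch

model = PatchClassifier()
patches = torch.randn(1000, 1, 16, 16)              # toy bead/well patches
on_prob = torch.sigmoid(model(patches))
on_fraction = (on_prob > 0.5).float().mean()        # digital "on" fraction
print(f"on-fraction = {on_fraction:.3f}")
```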


2021
Author(s):
Andrew Imrie

Cement bond log interpretation traditionally relies on human pattern recognition to evaluate the quality of downhole isolation. Typically, a log interpreter compares acquisition data to their predefined classifications of cement bond quality. This paper outlines a complementary technique of intelligent cement evaluation and the implementation of the analysis of cement evaluation data by utilizing automatic pattern matching and machine learning. The proposed method is capable of defining bond quality across multiple distinct subclassifications through analysis of image data using pattern recognition. Libraries of real log responses are used as comparisons to input data, and may additionally be supplemented with synthetic data. Using machine learning and image-based pattern recognition, the bond quality is classified into succinct categories to determine the presence of channeling. Successful classifications of the input data can then be added to the libraries, thus improving future analysis through an iterative process. The system uses the outputs of a conventional azimuthal ultrasonic scanning cement evaluation log and a 5-ft CBL waveform to conclude a cement bond interpretation. The 5-ft CBL waveform is an optional addition to the process and improves the interpretation. The system searches for similarities between the acquisition data and that contained in the library; these similarities are compared to evaluate the bonding. The process is described in two parts: i) image collection and library classification, and ii) pattern recognition and interpretation. The former is the process of generating a readable library of reference data from historical cement evaluation logs and laboratory measurements, and the latter is the machine learning and comparison method. Example results are shown with good correlations between automated analysis and interpreter analysis. The system is shown to be particularly capable of automated identification of channeling of varying sizes, something that would be a challenge when using only the scalar curve representation of azimuthal data. Previously published methodologies for automated classification of bond quality typically utilize scalar data, whereas this approach utilizes image-based pattern recognition for automated, learning and intelligent cement evaluation (ALICE). A discussion is presented on the limitations and merits of the ALICE process, which include quality control, the removal of analyst bias during interpretation, and the fact that such a system will continually improve in accuracy through supervised training.
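The described workflow compares acquisition images against a labelled reference library to assign a bond-quality class. The following is a minimal, hypothetical illustration of such library matching using normalized cross-correlation as the similarity score; the similarity measure, image dimensions, and labels are assumptions for the example and are not the ALICE implementation.

```python
# Nearest-neighbour matching of a cement-evaluation image against a labelled library.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized image arrays."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def classify(image, library):
    """library: list of (reference_image, bond_quality_label) pairs."""
    scores = [(ncc(image, ref), label) for ref, label in library]
    best_score, best_label = max(scores)
    return best_label, best_score

# Toy example: two 64 x 180 (depth x azimuth) reference maps with known labels.
rng = np.random.default_rng(0)
library = [(rng.normal(size=(64, 180)), "good bond"),
           (rng.normal(size=(64, 180)), "channeling")]
new_log = library[1][0] + 0.1 * rng.normal(size=(64, 180))   # resembles the second reference
print(classify(new_log, library))
```

In the iterative scheme the paper describes, a confirmed classification of the new image could then be appended to the library to sharpen future comparisons.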


Sensors
2020
Vol 20 (22)
pp. 6667
Author(s):
Seungah Lee
Indra Batjikh
Seong Ho Kang

The natural characteristics of deoxyribonucleic acid (DNA) enable its advanced applications in nanotechnology as a special tool that can be detected by high-resolution imaging with precise localization. Super-resolution (SR) microscopy enables the examination of nanoscale molecules beyond the diffraction limit. With the development of SR microscopy methods, DNA nanostructures can now be optically assessed. Using the specific binding of fluorophores to their target molecules, advanced single-molecule localization microscopy (SMLM) has been expanded into different fields, allowing wide-range detection at the single-molecule level. This review discusses the recent progress in the SR imaging of DNA nano-objects using SMLM techniques, such as direct stochastic optical reconstruction microscopy, binding-activated localization microscopy, and point accumulation for imaging in nanoscale topography. Furthermore, we discuss their advantages and limitations, current applications, and future perspectives.
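The common thread of the SMLM techniques reviewed here is localizing individual emitters with sub-diffraction precision, typically by fitting a model point-spread function to each diffraction-limited spot. The self-contained toy example below shows that core step on synthetic data; all parameters are arbitrary, and real SMLM pipelines add filtering, drift correction, and rendering.

```python
# Sub-pixel localization of a single emitter by fitting a 2D Gaussian to its spot.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset).ravel()

# Synthetic 11 x 11 pixel spot centred at (5.3, 4.7) with Poisson noise.
yy, xx = np.mgrid[0:11, 0:11]
true = gauss2d((xx, yy), 200, 5.3, 4.7, 1.5, 10).reshape(11, 11)
noisy = np.random.poisson(true).astype(float)

p0 = (noisy.max(), 5.0, 5.0, 1.5, noisy.min())              # rough initial guess
popt, _ = curve_fit(gauss2d, (xx, yy), noisy.ravel(), p0=p0)
print(f"fitted centre: x = {popt[1]:.2f} px, y = {popt[2]:.2f} px")
```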


2021
Vol 9
Author(s):
Moritz D. Lürig
Seth Donoughe
Erik I. Svensson
Arthur Porto
Masahito Tsuboi

For centuries, ecologists and evolutionary biologists have used images such as drawings, paintings and photographs to record and quantify the shapes and patterns of life. With the advent of digital imaging, biologists continue to collect image data at an ever-increasing rate. This immense body of data provides insight into a wide range of biological phenomena, including phenotypic diversity, population dynamics, mechanisms of divergence and adaptation, and evolutionary change. However, the rate of image acquisition frequently outpaces our capacity to manually extract meaningful information from images. Moreover, manual image analysis is low-throughput, difficult to reproduce, and typically measures only a few traits at a time. This has proven to be an impediment to the growing field of phenomics – the study of many phenotypic dimensions together. Computer vision (CV), the automated extraction and processing of information from digital images, provides the opportunity to alleviate this longstanding analytical bottleneck. In this review, we illustrate the capabilities of CV as an efficient and comprehensive method to collect phenomic data in ecological and evolutionary research. First, we briefly review phenomics, arguing that ecologists and evolutionary biologists can effectively capture phenomic-level data by taking pictures and analyzing them using CV. Next, we describe the primary types of image-based data, review CV approaches for extracting them (including techniques that entail machine learning and others that do not), and identify the most common hurdles and pitfalls. Finally, we highlight recent successful implementations and promising future applications of CV in the study of phenotypes. In anticipation that CV will become a basic component of the biologist’s toolkit, our review is intended as an entry point for ecologists and evolutionary biologists who are interested in extracting phenotypic information from digital images.

