Computer vision, machine learning, and the promise of phenomics in ecology and evolutionary biology

2020 ◽  
Author(s):  
Moritz Lürig ◽  
Seth Donoughe ◽  
Erik Svensson ◽  
Arthur Porto ◽  
Masahito Tsuboi

For centuries, ecologists and evolutionary biologists have used images such as drawings, paintings, and photographs to record and quantify the shapes and patterns of life. With the advent of digital imaging, biologists continue to collect image data at an ever-increasing rate. This immense body of data provides insight into a wide range of biological phenomena, including phenotypic trait diversity, population dynamics, mechanisms of divergence and adaptation, and evolutionary change. However, the rate of image acquisition frequently outpaces our capacity to manually extract meaningful information from the images. Moreover, manual image analysis is low-throughput, difficult to reproduce, and typically measures only a few traits at a time. This has proven to be an impediment to the growing field of phenomics – the study of many phenotypic dimensions together. Computer vision (CV), the automated extraction and processing of information from digital images, is a way to alleviate this longstanding analytical bottleneck. In this review, we illustrate the capabilities of CV for fast, comprehensive, and reproducible image analysis in ecology and evolution. First, we briefly review phenomics, arguing that ecologists and evolutionary biologists can most effectively capture phenomic-level data by using CV. Next, we describe the primary types of image-based data, and review CV approaches for extracting them (including techniques that entail machine learning and others that do not). We identify common hurdles and pitfalls, and then highlight recent successful implementations of CV in the study of ecology and evolution. Finally, we outline promising future applications for CV in biology. We anticipate that CV will become a basic component of the biologist’s toolkit, further enhancing data quality and quantity, and sparking changes in how empirical ecological and evolutionary research will be conducted.

2021 ◽  
Vol 9 ◽  
Author(s):  
Moritz D. Lürig ◽  
Seth Donoughe ◽  
Erik I. Svensson ◽  
Arthur Porto ◽  
Masahito Tsuboi

For centuries, ecologists and evolutionary biologists have used images such as drawings, paintings and photographs to record and quantify the shapes and patterns of life. With the advent of digital imaging, biologists continue to collect image data at an ever-increasing rate. This immense body of data provides insight into a wide range of biological phenomena, including phenotypic diversity, population dynamics, mechanisms of divergence and adaptation, and evolutionary change. However, the rate of image acquisition frequently outpaces our capacity to manually extract meaningful information from images. Moreover, manual image analysis is low-throughput, difficult to reproduce, and typically measures only a few traits at a time. This has proven to be an impediment to the growing field of phenomics – the study of many phenotypic dimensions together. Computer vision (CV), the automated extraction and processing of information from digital images, provides the opportunity to alleviate this longstanding analytical bottleneck. In this review, we illustrate the capabilities of CV as an efficient and comprehensive method to collect phenomic data in ecological and evolutionary research. First, we briefly review phenomics, arguing that ecologists and evolutionary biologists can effectively capture phenomic-level data by taking pictures and analyzing them using CV. Next we describe the primary types of image-based data, review CV approaches for extracting them (including techniques that entail machine learning and others that do not), and identify the most common hurdles and pitfalls. Finally, we highlight recent successful implementations and promising future applications of CV in the study of phenotypes. In anticipation that CV will become a basic component of the biologist’s toolkit, our review is intended as an entry point for ecologists and evolutionary biologists that are interested in extracting phenotypic information from digital images.


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 930
Author(s):  
Fahimeh Hadavimoghaddam ◽  
Mehdi Ostadhassan ◽  
Ehsan Heidaryan ◽  
Mohammad Ali Sadri ◽  
Inna Chapanova ◽  
...  

Dead oil viscosity is a critical parameter for solving numerous reservoir engineering problems and one of the most unreliable properties to predict with classical black oil correlations. Determining dead oil viscosity experimentally is expensive and time-consuming, so an accurate and fast prediction model is needed. This paper implements six machine learning models: random forest (RF), LightGBM, XGBoost, multilayer perceptron (MLP) neural network, stochastic real-valued (SRV), and SuperLearner to predict dead oil viscosity. More than 2000 pressure–volume–temperature (PVT) data points were used for developing and testing these models. A wide range of viscosity data was used, spanning light, intermediate, and heavy oils. In this study, we give insight into the performance of different functional forms that have been used in the literature to formulate dead oil viscosity. The results show that the functional form f(γAPI,T) has the best performance, and additional correlating parameters might be unnecessary. Furthermore, SuperLearner outperformed the other machine learning (ML) algorithms as well as common correlations, based on the error-metric analysis. The SuperLearner model can potentially replace empirical models for viscosity prediction across a wide range of viscosities (any oil type). Ultimately, the proposed model is capable of reproducing the true physical trend of dead oil viscosity with variations of oil API gravity, temperature, and shear rate.
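The f(γAPI,T) functional form that the authors find sufficient is the same one used by classical correlations. As a hedged illustration of that form (this is the well-known Beggs–Robinson correlation, not the paper's ML model), dead oil viscosity can be computed from API gravity and temperature alone:

```python
import math

def dead_oil_viscosity_beggs_robinson(api_gravity: float, temp_f: float) -> float:
    """Beggs-Robinson (1975) dead oil viscosity correlation.

    Returns viscosity in centipoise from only API gravity and
    temperature (deg F) -- i.e., the functional form f(gamma_API, T).
    """
    z = 3.0324 - 0.02023 * api_gravity
    x = (10.0 ** z) * temp_f ** -1.163
    return 10.0 ** x - 1.0

# Viscosity decreases for lighter oil (higher API) and higher temperature,
# the physical trend the paper's ML models are expected to reproduce.
mu_heavy = dead_oil_viscosity_beggs_robinson(20.0, 150.0)
mu_light = dead_oil_viscosity_beggs_robinson(40.0, 150.0)
mu_hot = dead_oil_viscosity_beggs_robinson(20.0, 250.0)
```

An ML regressor trained on (γAPI, T) pairs replaces this fixed algebraic form with a learned one, which is why additional correlating parameters may be unnecessary.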


2019 ◽  
Vol 11 (10) ◽  
pp. 1181 ◽  
Author(s):  
Norman Kerle ◽  
Markus Gerke ◽  
Sébastien Lefèvre

The 6th biennial conference on object-based image analysis—GEOBIA 2016—took place in September 2016 at the University of Twente in Enschede, The Netherlands (see www [...]


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e4088 ◽  
Author(s):  
Malia A. Gehan ◽  
Noah Fahlgren ◽  
Arash Abbasi ◽  
Jeffrey C. Berry ◽  
Steven T. Callen ◽  
...  

Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
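The multi-plant support mentioned above hinges on separating a foreground mask into individual plants. The sketch below is not PlantCV's actual API; it only illustrates, under that assumption, the underlying idea of splitting a binary mask into plants via connected-component labeling:

```python
from collections import deque

def label_plants(mask):
    """Count 4-connected foreground regions in a binary mask.

    Each connected region of 1s is treated as one plant.
    (Illustrative only -- PlantCV's multi-plant tools are
    more sophisticated than this.)
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1  # new plant: flood-fill its whole region
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# A toy tray mask containing two separate plants.
tray = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 0],
]
```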


2016 ◽  
Vol 2016 ◽  
pp. 1-13
Author(s):  
Lei Ye ◽  
Can Wang ◽  
Xin Xu ◽  
Hui Qian

Sparse models have a wide range of applications in machine learning and computer vision. Using a learned dictionary instead of an “off-the-shelf” one can dramatically improve performance on a particular dataset. However, learning a new dictionary for each subdataset (subject) at fine granularity may be unwarranted or impractical, due to the restricted availability of subdataset samples and the tremendous number of subjects. To remedy this, we consider the dictionary customization problem: specializing an existing global dictionary, learned on the total dataset, with the aid of auxiliary samples obtained from the target subdataset. Motivated by empirical observation and supported by theoretical analysis, we employ a regularizer that penalizes the difference between the global and the customized dictionary. By minimizing the sum of the reconstruction errors and the above regularizer under sparsity constraints, we exploit the characteristics of the target subdataset contained in the auxiliary samples while preserving the basic sketches stored in the global dictionary. An efficient algorithm is presented and validated with experiments on real-world data.
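In symbols, the customization objective described above can be sketched as follows (the notation here is assumed for illustration, not taken from the paper): given a global dictionary $D_0$ learned on the total dataset and auxiliary samples $X$ from the target subdataset, solve

```latex
\min_{D,\,A}\; \|X - DA\|_F^2 \;+\; \lambda\,\|D - D_0\|_F^2
\quad \text{s.t.} \quad \|\alpha_i\|_0 \le k \;\; \forall i
```

where $A = [\alpha_1,\dots,\alpha_n]$ are the sparse codes, the first term is the reconstruction error on the auxiliary samples, and the second term is the regularizer that keeps the customized dictionary $D$ close to the global one; $\lambda$ trades off subdataset fit against the sketches retained from $D_0$.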


2020 ◽  
Author(s):  
BRUCE HARDY ◽  
ANNA D'ENTREMONT ◽  
MICHAEL MARTINEZ-RODRIGUEZ ◽  
BRENDA GARCIA-DIAZ ◽  
LINDSAY ROY ◽  
...  

2021 ◽  
Author(s):  
Yerdos A. Ordabayev ◽  
Larry J. Friedman ◽  
Jeff Gelles ◽  
Douglas L. Theobald

Multi-wavelength single-molecule fluorescence colocalization (CoSMoS) methods allow elucidation of complex biochemical reaction mechanisms. However, analysis of CoSMoS data is intrinsically challenging because of low image signal-to-noise ratios, non-specific surface binding of the fluorescent molecules, and analysis methods that require subjective inputs to achieve accurate results. Here, we use Bayesian probabilistic programming to implement Tapqir, an unsupervised machine learning method based on a holistic, physics-based causal model of CoSMoS data. This method accounts for uncertainties in image analysis due to photon and camera noise, optical non-uniformities, non-specific binding, and spot detection. Rather than merely producing a binary “spot/no spot” classification of unspecified reliability, Tapqir objectively assigns spot classification probabilities that allow accurate downstream analysis of molecular dynamics, thermodynamics, and kinetics. We quantitatively validate Tapqir’s performance against simulated CoSMoS image data with known properties and demonstrate that it implements fully objective, automated analysis of experiment-derived data sets with a wide range of signal, noise, and non-specific binding characteristics.
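The key idea of assigning a spot probability rather than a hard "spot/no spot" call can be pictured with a much-simplified toy: a two-Gaussian mixture and Bayes' rule. This is only a hedged sketch with made-up parameters, not Tapqir's actual physics-based causal model:

```python
import math

def gaussian_pdf(x, mean, sd):
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def spot_probability(intensity, p_spot=0.3,
                     bg_mean=100.0, bg_sd=10.0,
                     spot_mean=160.0, spot_sd=20.0):
    """Posterior probability that an observed intensity comes from a
    real spot rather than background, via Bayes' rule. All means,
    noise levels, and the prior are illustrative toy values."""
    like_spot = gaussian_pdf(intensity, spot_mean, spot_sd)
    like_bg = gaussian_pdf(intensity, bg_mean, bg_sd)
    num = p_spot * like_spot
    return num / (num + (1.0 - p_spot) * like_bg)
```

A downstream kinetics analysis can then propagate these probabilities instead of committing to a binary classification of unknown reliability.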


2020 ◽  
Vol 10 (14) ◽  
pp. 4806 ◽  
Author(s):  
Ho-Hyoung Choi ◽  
Hyun-Soo Kang ◽  
Byoung-Ju Yun

For more than a decade, both academia and industry have focused attention on computer vision and, in particular, computational color constancy (CVCC). CVCC is used as a fundamental preprocessing task in a wide range of computer vision applications. While the human visual system (HVS) has the innate ability to perceive constant surface colors of objects under varying illumination spectra, color constancy remains a challenge for computer vision. Accordingly, this article proposes a novel convolutional neural network (CNN) architecture based on the residual neural network, which consists of pre-activation, atrous (dilated) convolution, and batch normalization. The proposed network can automatically decide what to learn from input image data and how to pool without supervision. When receiving input image data, the proposed network crops each image into image patches prior to training. Once the network begins learning, local semantic information is automatically extracted from the image patches and fed to its novel pooling layer. As a result of the semantic pooling, a weighted map, or mask, is generated. Simultaneously, the extracted information is estimated and combined to form global information during training. The novel pooling layer enables the proposed network to distinguish between useful and noisy data, and thus to efficiently discard noisy data during training and evaluation. The main contribution of the proposed network is taking CVCC to higher accuracy and efficiency by adopting the novel pooling method. The experimental results demonstrate that the proposed network outperforms its conventional counterparts in estimation accuracy.
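The semantic pooling step can be pictured as a confidence-weighted combination of per-patch estimates: each patch yields a local illuminant estimate, the learned mask supplies a weight, and noisy patches contribute little to the global estimate. The following is only a hedged sketch of that idea with toy numbers, not the paper's network:

```python
def weighted_pool(patch_estimates, weights):
    """Combine per-patch RGB illuminant estimates into one global
    estimate using mask weights (a toy stand-in for the learned
    semantic pooling layer)."""
    total = sum(weights)
    return tuple(
        sum(w * est[ch] for est, w in zip(patch_estimates, weights)) / total
        for ch in range(3)
    )

# Three patch estimates; the last is noisy and gets a near-zero weight,
# so it barely affects the pooled global illuminant.
patches = [(0.80, 0.60, 0.40), (0.82, 0.58, 0.42), (0.10, 0.90, 0.90)]
weights = [1.0, 1.0, 0.01]
illuminant = weighted_pool(patches, weights)
```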


Author(s):  
Ramgopal Kashyap

The rapidly expanding field of big data analytics has begun to play a vital role in the advancement of healthcare practices and research. In this chapter, challenges such as gathering data from complex heterogeneous patient sources, utilizing patient/data correlations in longitudinal records, understanding unstructured clinical notes in the correct context, efficiently handling large volumes of medical imaging data, and extracting potentially useful information are presented. Healthcare and IoT, machine learning, and data mining are also discussed. A comparative study of image analysis and segmentation methods is given, since research in computer vision, image processing, and pattern recognition has made considerable progress during the past few decades. Researchers have published a wealth of basic science and data documenting this progress and its healthcare applications in medical imaging.


2011 ◽  
Vol 2011 ◽  
pp. 1-7 ◽  
Author(s):  
José M. Eirín-López ◽  
Juan Ausió

The evolution of sex remains a hotly debated topic in evolutionary biology. In particular, studying the origins of the molecular mechanisms underlying sexual reproduction and gametogenesis (its fundamental component) in multicellular eukaryotes has been difficult due to the rapid divergence of many reproductive proteins, pleiotropy, and the fact that only a very small number of proteins specifically involved in reproduction are conserved across lineages. Consequently, during the last decade, much effort has been devoted to answering the following question: did gametogenesis evolve independently in different animal lineages, or does it share a common evolutionary origin in a single ancestral prototype? Among the various approaches taken to answer this question, characterizing the evolution of the DAZ gene family holds much promise because these genes encode reproductive proteins that are conserved across a wide range of animal phyla. Within this family, BOULE is of special interest because it is the most ancestral member of the family (the “grandfather” of DAZ). Furthermore, BOULE has attracted most of the attention because it represents an ancient male gametogenic factor with an essential, reproduction-exclusive requirement in urbilaterians, constituting a core component of the reproductive prototype. In this context, the aim of the present work is to provide an up-to-date insight into the studies that led to the characterization of the DAZ family members and their implications for deciphering the evolutionary origin of gametogenesis in metazoans.

