MAPPER: A new image analysis pipeline unmasks differential regulation of Drosophila wing features

2020 ◽  
Author(s):  
Nilay Kumar ◽  
Francisco Huizar ◽  
Trent Robinett ◽  
Keity J. Farfán-Pira ◽  
Dharsan Soundarrajan ◽  
...  

Summary
Phenomics requires quantification of large volumes of image data, necessitating high-throughput image processing approaches. Existing image processing pipelines for Drosophila wings, a powerful model for studying morphogenesis, are limited in speed, versatility, and precision. To overcome these limitations, we developed MAPPER, a fully automated, machine learning-based pipeline that quantifies high-dimensional phenotypic signatures, with each dimension representing a unique morphological feature. MAPPER magnifies the power of Drosophila genetics by rapidly identifying subtle phenotypic differences in sample populations. To demonstrate its widespread utility, we used MAPPER to reveal new insights connecting patterning and growth across Drosophila genotypes and species. The morphological features extracted by MAPPER revealed uniform scaling of proximal-distal axis length across four Drosophila species. Features extracted from wings in which insulin signaling pathway activity was modulated revealed a scaling gradient across the anterior-posterior axis. Additionally, batch processing of samples with MAPPER revealed a key function for the mechanosensitive calcium channel Piezo in regulating bilateral symmetry and robust organ growth. MAPPER is an open-source tool for rapid analysis of large volumes of imaging data. Overall, MAPPER provides new capabilities to rigorously and systematically identify genotype-to-phenotype relationships in an automated, high-throughput fashion.
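The summary does not spell out MAPPER's internal metrics, but one standard way to quantify the bilateral symmetry it investigates is a normalized left-right difference over paired wing measurements. The sketch below is illustrative only; the function name and the use of wing areas are assumptions, not MAPPER's actual code.

```python
import numpy as np

def bilateral_asymmetry(left_areas, right_areas):
    """Normalized left-right difference for paired wing measurements.

    Values near zero indicate symmetric growth; larger values indicate
    loss of bilateral symmetry across the sample population.
    """
    left = np.asarray(left_areas, dtype=float)
    right = np.asarray(right_areas, dtype=float)
    return np.abs(left - right) / ((left + right) / 2.0)

# e.g. bilateral_asymmetry([1.02, 0.98], [1.00, 1.05]) gives per-fly asymmetry scores
```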

Satellite observing systems are producing image observations of the Earth’s surface and atmosphere with spectral and spatial resolutions that result in data rates that current general-purpose computing systems are incapable of processing and analysing. As a result, current processing systems have been able to analyse only limited amounts of image data, with less than optimal algorithms for generating high-quality geophysical parameters. A massively parallel processor (MPP) is operationally available at NASA/GSFC for routine image-analysis applications. Research studies with the MPP are being pursued in the areas of interactive spatial contextual classification of Thematic Mapper land data, automatic SIR-B stereo terrain mapping, ice-motion detection, faint-object image restoration, and other general-purpose ocean and land image-processing systems. Several applications are presented comparing MPP products with enhancements of imaging data produced by standard image-processing methods. Finally, a workstation parallel processor for on-board image processing on the space station is described.


2018 ◽  
Vol 170 ◽  
pp. 01018 ◽  
Author(s):  
Van Esch Patrick ◽  
Mutti Paolo ◽  
Ruiz-Martinez Emilio ◽  
Abad Garcia Estefania ◽  
Mosconi Marita ◽  
...  

It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera viewing the scintillator screen, combined with off-line image processing. Preliminary results indicated that the recognition efficiency could be improved by optimizing both the light collection and the image processing. We report on this ongoing work, a collaboration between ESS Bilbao and the ILL. The main progress concerns the on-line treatment of the imaging data: if this technology is to work on a genuine scientific instrument, all processing must happen on line, to avoid accumulating large amounts of image data for off-line analysis. An FPGA-based, real-time, full-deca-mode, VME-compatible CameraLink board has been developed at the SCI of the ILL; it manages the data flow from the camera and converts it into a “neutron impact” data stream comparable to that of a conventional neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator: although the light yield of a ZnS scintillator is a priori rather high, the amount of light collected with a photographic objective is small. Different scintillators and light collection techniques have been tried, and results are shown for several setups that improve light collection on the camera sensor. Improvements on the algorithm side are also presented. The algorithms must be efficient at recognizing neutron signals and at rejecting noise (internal and external to the camera), yet simple enough to be implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we give an overview of the part of the road that has already been walked.
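The abstract describes recognizing individual flashes in camera frames; below is a minimal off-line software sketch of that basic idea (background subtraction, thresholding, and blob grouping). The threshold values and the scipy-based implementation are illustrative assumptions, not the FPGA algorithm developed at the ILL.

```python
import numpy as np
from scipy import ndimage

def detect_flashes(frame, background, threshold=20, min_pixels=2):
    """Locate candidate neutron-impact flashes in a single camera frame.

    Subtract a background estimate, threshold, and group bright pixels
    into connected blobs; each blob's centroid is reported as one
    candidate impact. The parameters are placeholders to be tuned
    against the camera noise and the scintillator light yield.
    """
    diff = frame.astype(float) - background.astype(float)
    mask = diff > threshold
    labels, n = ndimage.label(mask)
    centroids = []
    for i in range(1, n + 1):
        if np.count_nonzero(labels == i) >= min_pixels:   # reject isolated hot pixels
            centroids.append(ndimage.center_of_mass(diff, labels, i))
    return centroids
```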


2017 ◽  
Author(s):  
Jose C. Tovar ◽  
J. Steen Hoyer ◽  
Andy Lin ◽  
Allison Tielking ◽  
Monica Tessman ◽  
...  

Abstract
Premise of the study: Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data.
Methods and Results: We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (shape, area, height, color) en masse using open-source image processing software such as PlantCV.
Conclusion: This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
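As a rough illustration of how such a platform can be driven, the sketch below captures a timestamped image series with the Raspberry Pi picamera library for later analysis in PlantCV. The output directory, resolution, and interval are placeholder assumptions, not the published protocol's settings.

```python
# Minimal time-lapse capture sketch for a Raspberry Pi camera (picamera library).
from datetime import datetime
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (2592, 1944)   # full still resolution of the v1 camera module

def capture_timelapse(out_dir="/home/pi/plant_images", interval_s=3600, n_images=24):
    """Capture one image per interval, named by timestamp for downstream PlantCV analysis."""
    for _ in range(n_images):
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        camera.capture(f"{out_dir}/plant_{stamp}.jpg")
        sleep(interval_s)

capture_timelapse()
```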


Author(s):  
Klaus-Ruediger Peters

Differential hysteresis processing is a new image processing technology that provides a tool for displaying image information at any level of differential contrast resolution. This includes the maximum contrast resolution of the acquisition system, which may be 1,000 times higher than that of the visual system (16 bit versus 6 bit). All microscopes acquire high-precision contrasts at a level of <0.01-25% of the acquisition range in 16-bit to 8-bit data, but these contrasts are mostly invisible, or only partially visible, even in conventionally enhanced images. The processing principle of the differential hysteresis tool is based on hysteresis properties of intensity variations within an image.
Differential hysteresis image processing moves a cursor of selected intensity range (the hysteresis range) along lines through the image data, reading each successive pixel intensity. The midpoint of the cursor provides the output value. If the intensity of the next pixel falls outside the current cursor endpoint values, the cursor follows the data with either its top or its bottom; if the pixel's intensity falls within the cursor range, the cursor keeps its current position.
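A minimal sketch of the cursor logic as described above, applied row by row to a 2-D image; the function name, the row-wise sweep, and the floating-point arithmetic are assumptions rather than the original implementation.

```python
import numpy as np

def differential_hysteresis(image, hysteresis_range):
    """Hysteresis 'cursor' sweep along each image row (illustrative sketch).

    A window (cursor) of fixed intensity width slides along every row.
    If the next pixel lies outside the window, the window is dragged so
    its nearest edge reaches that pixel; if the pixel lies inside, the
    window stays put. The window midpoint is recorded as the output, so
    only intensity excursions larger than the window width move it.
    """
    image = np.asarray(image, dtype=float)
    output = np.empty_like(image)
    half = hysteresis_range / 2.0

    for r, row in enumerate(image):
        lo = row[0] - half              # cursor bottom, centered on first pixel
        hi = row[0] + half              # cursor top
        for c, value in enumerate(row):
            if value > hi:              # pixel above cursor: follow with the top
                hi = value
                lo = hi - hysteresis_range
            elif value < lo:            # pixel below cursor: follow with the bottom
                lo = value
                hi = lo + hysteresis_range
            # otherwise the pixel is inside the cursor: cursor unchanged
            output[r, c] = (lo + hi) / 2.0   # midpoint is the output datum
    return output
```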


Author(s):  
B. Roy Frieden

Despite the skill and determination of electro-optical system designers, the images acquired using their best designs often suffer from blur and noise. The aim of an “image enhancer” such as myself is to improve these poor images, usually by digital means, so that they better resemble the true “optical object” input to the system. This problem is notoriously “ill-posed”: any direct attempt to invert the image data suffers strongly from the presence of even a small amount of noise in the data. In fact, the fluctuations engendered in neighboring output values tend to be strongly negatively correlated, so that the output spatially oscillates up and down, with large amplitude, about the true object. What can be done about this situation? As we shall see, various concepts taken from statistical communication theory have proven to be of real use in attacking this problem. We offer below a brief summary of these concepts.
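One classical concept from that statistical toolbox is regularized (Wiener-style) inversion, which damps the noise-dominated frequencies that a direct inverse filter would amplify. The sketch below is a generic frequency-domain illustration, not Frieden's specific estimator; the constant nsr stands in for an assumed noise-to-signal power ratio.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Regularized (Wiener-style) inversion of a blurred image.

    A naive inverse filter divides by the blur transfer function H and
    explodes wherever |H| is small and noise dominates. Adding the
    noise-to-signal ratio `nsr` to the denominator damps those
    frequencies instead of amplifying the noise.
    """
    H = np.fft.fft2(psf, s=blurred.shape)        # blur transfer function
    G = np.fft.fft2(blurred)                     # observed image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # regularized inverse filter
    return np.real(np.fft.ifft2(W * G))          # restored estimate of the object
```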


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 356
Author(s):  
Shubham Mahajan ◽  
Akshay Raina ◽  
Xiao-Zhi Gao ◽  
Amit Kant Pandit

Plant species recognition from visual data has always been a challenging task for Artificial Intelligence (AI) researchers, owing to complications such as the enormous volume of data to be processed for the vast number of plant species. Many parts of a plant can supply features for an AI-based model, but leaf-related features are generally considered more useful than those from flowers, stems, and other parts, primarily because leaves are easy to access. With this in mind, we propose a plant species recognition model based on morphological features extracted from leaf images, using a support vector machine (SVM) with the adaptive boosting (AdaBoost) technique. The proposed framework includes pre-processing, feature extraction, and classification into one of the species. Morphological features such as centroid, major axis length, minor axis length, solidity, perimeter, and orientation are extracted from digital images of various categories of leaves. In addition, transfer learning, as suggested by some previous studies, is used in the feature extraction process. Classifiers such as kNN, decision trees, and the multilayer perceptron (with and without AdaBoost) are evaluated on the open-source FLAVIA dataset to benchmark the robustness of our approach against other classifier frameworks. Our study also demonstrates the advantage of 10-fold cross-validation over other dataset partitioning strategies, achieving a precision of 95.85%.
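The sketch below illustrates the named morphological features being measured from a binary leaf mask with scikit-image and fed to an SVM under 10-fold cross-validation. It is a simplified stand-in for the paper's pipeline: the plain SVM (without AdaBoost), the RBF kernel, and the mask-based input are assumptions.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def leaf_features(binary_mask):
    """Morphological descriptors of the largest connected region in a leaf mask."""
    leaf = max(regionprops(label(binary_mask)), key=lambda r: r.area)
    cy, cx = leaf.centroid
    return [cx, cy,
            leaf.major_axis_length,
            leaf.minor_axis_length,
            leaf.solidity,
            leaf.perimeter,
            leaf.orientation]

def evaluate(masks, labels):
    """10-fold cross-validated accuracy of an SVM on the extracted features."""
    X = np.array([leaf_features(m) for m in masks])   # masks: binary leaf images
    y = np.array(labels)                              # labels: species index per mask
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, y, cv=10).mean()
```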


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 816
Author(s):  
Pingping Liu ◽  
Xiaokang Yang ◽  
Baixin Jin ◽  
Qiuzhan Zhou

Diabetic retinopathy (DR) is a common complication of diabetes mellitus (DM), and it is necessary to diagnose DR in its early stages. With the rapid development of convolutional neural networks for image processing, deep learning methods have achieved great success in medical image processing, and various lesion detection systems have been proposed to detect fundus lesions. At present, however, the image classification process for diabetic retinopathy ignores the fine-grained properties of diseased images, and most retinopathy image datasets suffer from serious class imbalance, which greatly limits a network's ability to classify lesions. We propose a new non-homologous bilinear pooling convolutional neural network model and combine it with an attention mechanism to further improve the network's ability to extract specific features of the image. The experimental results show that, compared with the most popular fundus image classification models, our network model greatly improves prediction accuracy while maintaining computational efficiency.
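The sketch below illustrates the generic bilinear pooling step on two different (non-homologous) feature maps, with the usual signed square root and L2 normalization; the class name is invented here, and the authors' actual architecture and attention mechanism are not reproduced.

```python
import torch
import torch.nn as nn

class NonHomologousBilinearPooling(nn.Module):
    """Bilinear pooling of two different ('non-homologous') feature maps.

    The outer product of the two feature vectors at every spatial location
    is averaged over the image, capturing pairwise channel interactions
    that plain global pooling discards.
    """
    def forward(self, feat_a, feat_b):
        n, c1, h, w = feat_a.shape
        c2 = feat_b.shape[1]
        a = feat_a.reshape(n, c1, h * w)
        b = feat_b.reshape(n, c2, h * w)
        pooled = torch.bmm(a, b.transpose(1, 2)) / (h * w)             # (n, c1, c2)
        pooled = pooled.reshape(n, c1 * c2)
        pooled = torch.sign(pooled) * torch.sqrt(pooled.abs() + 1e-8)  # signed sqrt
        return nn.functional.normalize(pooled)                          # L2 normalization

# Example: fuse feature maps from two CNN branches before a classifier head.
# pooled = NonHomologousBilinearPooling()(branch_a_out, branch_b_out)
```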


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
◽  
Elmar Kotter ◽  
Luis Marti-Bonmati ◽  
Adrian P. Brady ◽  
Nandita M. Desouza

Abstract
Blockchain can be thought of as a distributed database that allows tracing the origin of data and identifying who has manipulated a given data set in the past. Medical applications of blockchain technology are emerging. Blockchain has many potential applications in medical imaging, typically making use of the tracking of radiological or clinical data. Clinical applications include documenting the contributions of different “authors”, including AI algorithms, to multipart reports; documenting the use of AI algorithms in reaching a diagnosis; enhancing the accessibility of relevant information in electronic medical records; and giving users better control over their personal health records. Applications in research include better traceability of image data within clinical trials and of the image and annotation data contributed to the training of AI algorithms, thus enhancing privacy and fairness and potentially making imaging data available for AI in larger quantities. Blockchain also allows for dynamic consenting and has the potential to empower patients by giving them better control over who has accessed their health data. There are also many potential administrative applications, such as keeping track of learning achievements or the surveillance of medical devices. This article gives a brief introduction to the basic technology and terminology of blockchain and concentrates on its potential applications in medical imaging.
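The traceability argument rests on hash chaining: each record stores the hash of its predecessor, so any retroactive change to earlier data invalidates everything that follows. A minimal illustrative sketch is shown below (not a production blockchain, and not tied to any specific medical imaging system; the record fields are hypothetical).

```python
import hashlib, json, time

def add_block(chain, payload):
    """Append a record whose hash covers its payload and the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

def verify(chain):
    """Any tampering with an earlier block invalidates all later hashes."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["prev_hash"] != expected_prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
    return True

# Hypothetical audit trail for an imaging study and its reuse for AI training.
chain = add_block([], {"study": "CT-0001", "annotated_by": "radiologist A"})
add_block(chain, {"study": "CT-0001", "used_for": "AI training set v2"})
assert verify(chain)
```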


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Dominik Jens Elias Waibel ◽  
Sayedali Shetab Boushehri ◽  
Carsten Marr

Abstract
Background: Deep learning contributes to uncovering molecular and cellular processes with highly performant algorithms. Convolutional neural networks have become the state-of-the-art tool for accurate and fast image data processing. However, published algorithms mostly solve only one specific problem, and they typically require considerable coding effort and a machine learning background to apply.
Results: We have therefore developed InstantDL, a deep learning pipeline for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression, and classification. InstantDL enables researchers with a basic computational background to apply debugged and benchmarked state-of-the-art deep learning algorithms to their own data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows assessing the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible and well documented.
Conclusions: With InstantDL, we hope to empower biomedical researchers to conduct reproducible image processing with a convenient and easy-to-use pipeline.

