Raspberry Pi Powered Imaging for Plant Phenotyping

2017 ◽  
Author(s):  
Jose C. Tovar ◽  
J. Steen Hoyer ◽  
Andy Lin ◽  
Allison Tielking ◽  
Monica Tessman ◽  
...  

ABSTRACT

Premise of the study: Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data.

Methods and Results: We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (shape, area, height, color) en masse using open-source image processing software such as PlantCV.

Conclusion: This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
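
The trait extraction described above (area, shape) can be sketched in a few lines of numpy. The green-dominance threshold below is a hypothetical heuristic for illustration, not the PlantCV pipeline the authors actually used:

```python
import numpy as np

def plant_area_and_bbox(rgb, green_margin=20):
    """Estimate plant pixel area and bounding box from an RGB image.

    A pixel counts as 'plant' when its green channel exceeds both the red
    and blue channels by `green_margin` (an illustrative heuristic only).
    """
    rgb = rgb.astype(np.int32)
    mask = (rgb[..., 1] - rgb[..., 0] > green_margin) & \
           (rgb[..., 1] - rgb[..., 2] > green_margin)
    area = int(mask.sum())
    if area == 0:
        return 0, None
    ys, xs = np.nonzero(mask)
    return area, (xs.min(), ys.min(), xs.max(), ys.max())

# Synthetic test image: a 10x10 green plant on a gray background.
img = np.full((100, 100, 3), 120, dtype=np.uint8)
img[40:50, 40:50] = (30, 200, 30)
area, bbox = plant_area_and_bbox(img)
```

Real deployments would add color correction and noise filtering, which is exactly what a full toolkit such as PlantCV provides.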

2016 ◽  
Vol 67 (11) ◽  
pp. 3587-3599 ◽  
Author(s):  
Avi C. Knecht ◽  
Malachy T. Campbell ◽  
Adam Caprez ◽  
David R. Swanson ◽  
Harkamal Walia

2016 ◽  
Author(s):  
Maxime Leblanc-Latour ◽  
Craig Bryan ◽  
Andrew Pelling

ABSTRACT

Open-source lab equipment is becoming more widespread with the popularization of fabrication tools such as 3D printers, laser cutters, CNC machines, open-source microcontrollers, and open-source software. Although many pieces of common laboratory equipment have been developed, software control of these items is sometimes lacking; in particular, control software that can be easily implemented and that enables user input and control across multiple platforms (PC, smartphone, web, etc.) is often missing. The aim of this proof-of-principle study was to develop and implement software for the control of a low-cost, 3D-printed microscope. Here, we present two approaches which enable microscope control by exploiting the functionality of the social media platform Twitter or player actions inside the video game Minecraft. The microscope was constructed from a modified web camera and implemented on a Raspberry Pi computer. Three aspects of microscope control were tested: single image capture, focus control, and time-lapse imaging. The Twitter embodiment enabled users to send "tweets" directly to the microscope. Image data acquired by the microscope were then returned to the user through a Twitter reply and stored permanently on the photo-sharing platform Flickr, along with any relevant metadata. Local control of the microscope was also implemented using the video game Minecraft, for situations where Internet connectivity is absent or unstable. A virtual laboratory was constructed inside the Minecraft world, and player actions inside the laboratory were linked to specific microscope functions. We present the methodology and results of these experiments and discuss possible limitations and future extensions of this work.


2019 ◽  
Author(s):  
Cedar Warman ◽  
John E Fowler

Abstract

High-throughput phenotyping systems are becoming increasingly powerful, dramatically changing our ability to document, measure, and detect phenomena. Unfortunately, taking advantage of these trends can be difficult for scientists with few resources, particularly when studying nonstandard biological systems. Here, we describe a powerful, cost-effective combination of a custom-built imaging platform and open-source image processing pipeline. Our maize ear scanner was built with off-the-shelf parts for <$80. When combined with a cellphone or digital camera, videos of rotating maize ears were captured and digitally flattened into projections covering the entire surface of the ear. Segregating GFP and anthocyanin seed markers were clearly distinguishable in ear projections, allowing manual annotation using ImageJ. Using this method, statistically powerful transmission data can be collected for hundreds of maize ears, accelerating the phenotyping process.
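
The core of the "digital flattening" idea is simple: take one narrow vertical strip from each frame of a full rotation and stack the strips side by side. The sketch below is a conceptual illustration of that projection, not the authors' exact pipeline:

```python
import numpy as np

def flatten_rotation(frames):
    """Unroll a rotating object into a flat surface projection.

    `frames` is a (n_frames, height, width) grayscale video of one full
    rotation; the center column of each frame is stacked horizontally,
    producing a height x n_frames image covering the whole surface.
    """
    center = frames.shape[2] // 2
    return np.stack([f[:, center] for f in frames], axis=1)

# Synthetic video: 36 frames, 50 px tall, 20 px wide; frame i is filled
# with value i so the rotation order is visible in the projection.
video = np.stack([np.full((50, 20), i, dtype=np.uint8) for i in range(36)])
proj = flatten_rotation(video)  # one column per rotation step
```

A real scanner would also need the strip width matched to the rotation speed so successive strips tile the surface without gaps or overlap.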


2017 ◽  
Vol 3 ◽  
pp. e139
Author(s):  
Maxime Leblanc-Latour ◽  
Craig Bryan ◽  
Andrew E. Pelling

Open-source lab equipment is becoming more widespread with the popularization of fabrication tools such as 3D printers, laser cutters, CNC machines, open-source microcontrollers and open-source software. Although many pieces of common laboratory equipment have been developed, software control of these items is sometimes lacking; in particular, control software that can be easily implemented and that enables user input and control across multiple platforms (PC, smartphone, web, etc.) is often missing. The aim of this proof-of-principle study was to develop and implement software for the control of a low-cost, 3D-printed microscope. Here, we present two approaches which enable microscope control by exploiting the functionality of the social media platform Twitter or player actions inside the video game Minecraft. The microscope was constructed from a modified web camera and implemented on a Raspberry Pi computer. Three aspects of microscope control were tested: single image capture, focus control, and time-lapse imaging. The Twitter embodiment enabled users to send ‘tweets’ directly to the microscope. Image data acquired by the microscope were then returned to the user through a Twitter reply and stored permanently on the photo-sharing platform Flickr, along with any relevant metadata. Local control of the microscope was also implemented using the video game Minecraft, for situations where Internet connectivity is absent or unstable. A virtual laboratory was constructed inside the Minecraft world, and player actions inside the laboratory were linked to specific microscope functions. We present the methodology and results of these experiments and discuss possible limitations and future extensions of this work.
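
Both front ends (a tweet or a Minecraft player action) ultimately reduce to mapping a short text command onto a microscope function. A minimal dispatcher sketch is shown below; the command names and the `FakeScope` stand-in are illustrative assumptions, not the grammar or API used in the paper:

```python
def make_dispatcher(scope):
    """Map text commands (e.g. parsed from a tweet) to microscope actions.

    `scope` is any object exposing capture(), set_focus(step) and
    timelapse(interval, count); the command vocabulary is hypothetical.
    """
    def dispatch(command):
        parts = command.strip().lower().split()
        if parts[0] == "capture":
            return scope.capture()
        if parts[0] == "focus":
            return scope.set_focus(int(parts[1]))
        if parts[0] == "timelapse":
            return scope.timelapse(int(parts[1]), int(parts[2]))
        raise ValueError(f"unknown command: {command}")
    return dispatch

class FakeScope:
    """Stand-in for the Raspberry Pi camera hardware, for testing."""
    def capture(self):
        return "image.jpg"
    def set_focus(self, step):
        return f"focus moved {step}"
    def timelapse(self, interval, count):
        return f"{count} frames every {interval}s"

dispatch = make_dispatcher(FakeScope())
```

Separating the parser from the hardware object is what makes it cheap to bolt on a second front end such as the Minecraft listener.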


Author(s):  
T. Hu ◽  
J. Fan ◽  
H. He ◽  
L. Qin ◽  
G. Li

To address the difficulty of integrating multi-source image data fusion on existing commercial Geographic Information System platforms, this research proposes loading multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial references of the CesiumJS platform, as well as various tile data sources such as Google Maps, Map World, and Bing Maps. Two tile-loading schemes were designed for the mashup of tiles: a single-data-source scheme and a multi-data-source scheme. The digital map tiles used in this paper cover two different but mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, both the single-data-source scheme and the multi-data-source scheme produced favorable visualization effects when the sources shared a spatial reference; however, the multi-data-source scheme was prone to tile image deformation when loading tile data with different spatial references. The resulting method provides a low-cost, highly flexible solution for small- and medium-scale GIS programs and has clear potential for practical application. The deformation that occurs when transitioning between different spatial references is an important topic for further research.
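
The deformation problem comes from the fact that the same point falls in different tiles under the two references. The sketch below contrasts the common Web Mercator ("slippy map") tiling with a simple equirectangular WGS84 tiling; the exact schemes of the providers named above may differ in detail:

```python
import math

def mercator_tile(lat_deg, lon_deg, zoom):
    """Tile (x, y) under the Web Mercator scheme (Google/Bing style):
    one tile covers the world at zoom 0, latitude is warped by asinh(tan)."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi)
            / 2.0 * n)
    return x, y

def wgs84_tile(lat_deg, lon_deg, zoom):
    """Tile (x, y) under an equirectangular WGS84 scheme (Map World style):
    two tiles cover the world at zoom 0, latitude is linear."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * (2 * n))
    y = int((90.0 - lat_deg) / 180.0 * n)
    return x, y

# The same point maps to different tile indices, and the per-tile latitude
# extent differs, which is why mixing references without reprojection
# deforms the mashup.
paris = (48.8566, 2.3522)
```

A mashup layer therefore has to either reproject tiles on the fly or restrict each layer to sources sharing one reference, as the paper's single-source scheme does.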


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e4088 ◽  
Author(s):  
Malia A. Gehan ◽  
Noah Fahlgren ◽  
Arash Abbasi ◽  
Jeffrey C. Berry ◽  
Steven T. Callen ◽  
...  

Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.
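
One of the new capabilities listed above, multi-plant analysis, hinges on separating a binary "plant" mask into individual plants. A from-scratch sketch of that idea via 4-connected component labeling is shown below; it illustrates the concept only and is not PlantCV's actual implementation:

```python
from collections import deque

import numpy as np

def count_plants(mask, min_area=5):
    """Count distinct plants in a binary mask by flood-filling each
    4-connected component and discarding components below `min_area`."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill this component and measure its area.
                area, queue = 0, deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count

# Tray with two plants and one speck of noise below min_area.
tray = np.zeros((20, 20), dtype=bool)
tray[2:6, 2:6] = True       # plant 1 (16 px)
tray[10:15, 10:14] = True   # plant 2 (20 px)
tray[18, 18] = True         # noise  (1 px)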


2019 ◽  
Vol 12 (1) ◽  
pp. 56-64
Author(s):  
Ilfan Sugianda ◽  
Thamrin Thamrin

KRSBI Wheeled is one of the divisions of the Indonesian Robot Contest: a football match in which a team of three fully autonomous robots plays against another team. Each robot is wheel-driven and uses a camera sensor mounted on its front; for locomotion, the authors use three omni wheels so the robot can move in any direction, making it easier to approach the ball. For image processing and input/output handling, the authors use a Raspberry Pi 3 single-board computer (SBC) programmed in Python with the OpenCV image processing library; to lighten the load on the Raspberry Pi, it is assisted by an Arduino Mega 2560 microcontroller. The two devices are connected serially via the USB port. The Raspberry Pi processes image data obtained from the webcam; when the ball is detected, the coordinates of its position are encoded as characters and sent to the Arduino Mega 2560, which drives the motors so the robot moves toward the ball. Test results show that the maximum distance at which the camera sensor can detect a ball is about 5 meters, with a maximum viewing angle of 120°.
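
The detect-then-encode step described above can be sketched as follows. The red-dominance threshold stands in for the HSV thresholding typically done with OpenCV, and the serial message format is purely illustrative, not the encoding used by the authors:

```python
import numpy as np

def find_ball(rgb, min_pixels=10):
    """Locate an orange ball as the centroid of pixels whose red channel
    dominates green and blue (a simplified stand-in for HSV thresholding)."""
    rgb = rgb.astype(np.int32)
    mask = (rgb[..., 0] - rgb[..., 1] > 50) & (rgb[..., 0] - rgb[..., 2] > 50)
    if mask.sum() < min_pixels:
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())

def encode_for_arduino(coords):
    """Pack the coordinates into a short ASCII message of the kind that
    could be written to the USB serial link (format is hypothetical)."""
    if coords is None:
        return b"N\n"  # no ball detected
    return f"B,{coords[0]},{coords[1]}\n".encode()

# Synthetic frame: gray background with an orange ball centered at (30, 20).
frame = np.full((60, 80, 3), 90, dtype=np.uint8)
frame[15:26, 25:36] = (220, 120, 40)
```

On the Arduino side, a matching parser would read characters up to the newline and translate the x offset into wheel speeds.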


2021 ◽  
Vol 58 (8) ◽  
pp. 484-506
Author(s):  
U. P. Nayak ◽  
M. Müller ◽  
D. Britz ◽  
M.A. Guitar ◽  
F. Mücklich

Abstract

Considering the dependence of materials’ properties on the microstructure, it is imperative to carry out thorough microstructural characterization and analysis to bolster materials development. This article aims to inform users about the implementation of FIJI, an open-source image processing software package, for image segmentation and quantitative microstructural analysis. The rapid advancement of computer technology in recent years has made it possible to swiftly segment and analyze hundreds of micrographs, reducing hours’ worth of analysis time to a matter of minutes. This has led to the availability of several commercial image processing software programs aimed primarily at relatively inexperienced users. Despite advantages like ‘one-click solutions’ offered by commercial software, the high licensing cost limits its widespread use in the metallographic community. Open-source platforms, on the other hand, are free and easily available, although rudimentary knowledge of the user interface is a prerequisite. In particular, FIJI has distinguished itself as a versatile tool, since it provides suitable extensions from image processing to segmentation to quantitative stereology and is continuously developed by a large user community. We introduce the FIJI program by familiarizing the user with its graphical user interface and providing a sequential methodology for image segmentation and quantitative microstructural analysis.
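
A typical quantitative measurement of the kind FIJI reports after thresholding is the area fraction of a segmented phase. The numpy sketch below recomputes that measurement from a binary mask as a minimal illustration of the underlying arithmetic, not as a replacement for FIJI’s measurement tools:

```python
import numpy as np

def phase_fraction(mask):
    """Area fraction of a segmented phase: foreground pixels over total
    pixels of the binary mask produced by thresholding a micrograph."""
    mask = np.asarray(mask, dtype=bool)
    return mask.sum() / mask.size

# Synthetic 100x100 binary micrograph with a 20x20 second-phase region,
# i.e. 400 of 10000 pixels, a 4 % area fraction.
micrograph = np.zeros((100, 100), dtype=bool)
micrograph[10:30, 10:30] = True
```

Area fraction is unbiased only if the segmentation itself is unbiased, which is why the thresholding methodology matters as much as the measurement.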

