A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping

Plant Methods ◽  
2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Michael Henke ◽  
Astrid Junker ◽  
Kerstin Neumann ◽  
Thomas Altmann ◽  
Evgeny Gladilin

Sensors ◽
2020 ◽  
Vol 20 (16) ◽  
pp. 4550
Author(s):  
Huajian Liu ◽  
Brooke Bruning ◽  
Trevor Garnett ◽  
Bettina Berger

The accurate and high-throughput quantification of nitrogen (N) content in wheat using non-destructive methods is an important step towards identifying wheat lines with high nitrogen use efficiency and informing agronomic management practices. Among various plant phenotyping methods, hyperspectral sensing has shown promise in providing accurate measurements in a fast and non-destructive manner. Past applications have utilised non-imaging instruments, such as spectrometers, while more recent approaches have expanded to hyperspectral cameras operating in different wavelength ranges and at various spectral resolutions. However, despite the success of previous hyperspectral applications, some important research questions regarding hyperspectral sensors with different wavelength centres and bandwidths remain unanswered, limiting wide application of this technology. This study evaluated the capability of hyperspectral imaging and non-imaging sensors to estimate N content in wheat leaves by comparing three hyperspectral cameras and a non-imaging spectrometer. This study answered the following questions: (1) How do hyperspectral sensors with different system setups perform when conducting proximal sensing of N in wheat leaves, and what aspects have to be considered for optimal results? (2) What types of photonic detectors are most sensitive to N in wheat leaves? (3) How do the spectral resolutions of different instruments affect N measurement in wheat leaves? (4) What are the key wavelengths with the highest correlation to N in wheat? Our study demonstrated that hyperspectral imaging systems with suitable setups can be used to conduct proximal sensing of N content in wheat with sufficient accuracy. The proposed approach could reduce the need for chemical analysis of leaf tissue and lead to high-throughput estimation of N in wheat. The methodologies presented here could also be validated on other plants with different characteristics. The results can provide a reference for users wishing to measure N content at either plant or leaf scale using hyperspectral sensors.
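As a concrete illustration of this kind of analysis, the sketch below fits a partial least squares (PLS) regression from hyperspectral reflectance to leaf N and ranks wavelengths by their correlation with N. It is a minimal, hypothetical example using synthetic data, an assumed band range and an assumed component count, not the instrument-specific workflow evaluated in the study.

```python
# Illustrative sketch (not the authors' code): estimating leaf nitrogen from
# hyperspectral reflectance with partial least squares (PLS) regression and
# ranking wavelengths by their correlation with N. Data shapes, band range and
# the PLS component count are assumptions for demonstration only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 300)          # nm, hypothetical band centres
reflectance = rng.random((120, wavelengths.size))  # 120 leaf samples x 300 bands
nitrogen = rng.normal(3.0, 0.5, 120)               # % N from lab reference (placeholder)

# Fit a PLS model and evaluate it with cross-validated predictions.
pls = PLSRegression(n_components=10)
n_pred = cross_val_predict(pls, reflectance, nitrogen, cv=5)
rmse = np.sqrt(np.mean((n_pred.ravel() - nitrogen) ** 2))
print(f"cross-validated RMSE: {rmse:.3f} %N")

# Rank wavelengths by absolute Pearson correlation with N content.
corr = np.array([np.corrcoef(reflectance[:, i], nitrogen)[0, 1]
                 for i in range(wavelengths.size)])
key_bands = wavelengths[np.argsort(-np.abs(corr))[:5]]
print("candidate key wavelengths (nm):", np.round(key_bands, 1))
```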


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically large number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing on image datasets. Results: At the boundary between computer vision and plant science, we apply deep learning methods based on convolutional neural networks to support the maize phenotyping workflow. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking RGB images of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion: Maize-IAS is easy to use and demands professional knowledge of neither computer vision nor deep learning. All functions support batch processing, enabling automated, labor-saving recording, measurement and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, as well as the feasibility of applying AI technology in agriculture and plant science.
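The batch-processing idea described above can be conveyed with a deliberately simplified sketch: each RGB image is segmented with a colour threshold (standing in for the software's CNN-based segmentation) and two coarse traits are derived per image. The folder name, threshold values and traits are assumptions for illustration and do not reproduce Maize-IAS itself.

```python
# Minimal sketch of the batch-processing idea behind tools like Maize-IAS:
# segment the plant in each RGB image with a simple colour threshold and
# derive two coarse traits (projected area and pixel height). This is a
# hypothetical illustration, not the software's CNN-based pipeline; the
# folder path and HSV thresholds are assumptions.
import glob
import cv2
import numpy as np

def plant_traits(path):
    bgr = cv2.imread(path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Rough green-hue mask as a stand-in for learned segmentation.
    mask = cv2.inRange(hsv, (35, 40, 40), (90, 255, 255))
    area_px = int(np.count_nonzero(mask))
    rows = np.flatnonzero(mask.any(axis=1))
    height_px = int(rows[-1] - rows[0] + 1) if rows.size else 0
    return area_px, height_px

for image_path in sorted(glob.glob("maize_images/*.png")):  # hypothetical folder
    area, height = plant_traits(image_path)
    print(f"{image_path}: projected area={area} px, height={height} px")
```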


2021 ◽  
Vol 13 (7) ◽  
pp. 1380
Author(s):  
Sébastien Dandrifosse ◽  
Alexis Carlier ◽  
Benjamin Dumont ◽  
Benoît Mercatoris

Multimodal image fusion has the potential to enrich the information gathered by multi-sensor plant phenotyping platforms. Fusion of images from multiple sources is, however, hampered by the technical bottleneck of image registration. The aim of this paper is to provide a solution to the registration and fusion of multimodal wheat images in field conditions and at close range. Eight registration methods were tested on nadir wheat images acquired by a pair of red, green and blue (RGB) cameras, a thermal camera and a multispectral camera array. The most accurate method, relying on a local transformation, aligned the images with an average error of 2 mm but was not reliable for thermal images. More generally, the most suitable registration method and the preprocessing steps necessary before fusion (plant mask erosion, pixel intensity averaging) depend on the application. Consequently, the main output of this study is the identification of four registration-fusion strategies: (i) the REAL-TIME strategy, based solely on the cameras' positions; (ii) the FAST strategy, suitable for all types of images tested; and (iii) and (iv) the ACCURATE and HIGHLY ACCURATE strategies, which handle local distortion but cannot deal with images of very different natures. These suggestions are, however, limited to the methods compared in this study. Further research should investigate how recent cutting-edge registration methods perform in the specific case of wheat canopies.
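For readers who want a starting point, the sketch below implements one plausible registration-fusion pipeline: intensity-based alignment of a thermal image onto an RGB reference with OpenCV's ECC algorithm, followed by fusion on an eroded plant mask. The file names, the affine motion model and the mask threshold are assumptions; this reproduces none of the eight methods compared in the paper, and intensity-based alignment can itself struggle with thermal imagery, consistent with the difficulties reported above.

```python
# Hedged sketch of a registration-fusion pipeline for close-range wheat images:
# align a thermal image onto an RGB reference with OpenCV's ECC algorithm, then
# fuse by keeping thermal values only on an eroded plant mask. File names,
# motion model and thresholds are assumptions for illustration.
import cv2
import numpy as np

rgb = cv2.imread("wheat_rgb.png")                                 # hypothetical nadir RGB image
thermal = cv2.imread("wheat_thermal.png", cv2.IMREAD_GRAYSCALE)   # hypothetical thermal image

reference = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
thermal = cv2.resize(thermal, (reference.shape[1], reference.shape[0]))

# Estimate an affine warp that maps the thermal image onto the RGB reference.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(reference, thermal, warp,
                               cv2.MOTION_AFFINE, criteria, None, 5)
thermal_aligned = cv2.warpAffine(thermal, warp,
                                 (reference.shape[1], reference.shape[0]))

# Fuse: keep thermal values only on an eroded plant mask derived from the RGB image.
hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
plant_mask = cv2.inRange(hsv, (35, 40, 40), (90, 255, 255))
plant_mask = cv2.erode(plant_mask, np.ones((5, 5), np.uint8))
canopy_temperature = np.where(plant_mask > 0, thermal_aligned, 0)
cv2.imwrite("canopy_thermal_fused.png", canopy_temperature)
```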


2017 ◽  
Vol 114 (13) ◽  
pp. 3393-3396 ◽  
Author(s):  
Narangerel Altangerel ◽  
Gombojav O. Ariunbold ◽  
Connor Gorman ◽  
Masfer H. Alkahtani ◽  
Eli J. Borrego ◽  
...  

Development of a phenotyping platform capable of noninvasive biochemical sensing could offer researchers, breeders, and producers a tool for precise response detection. In particular, the ability to measure plant stress responses in vivo is becoming increasingly important. In this work, a Raman spectroscopic technique is developed for high-throughput stress phenotyping of plants. We show the early (within 48 h) in vivo detection of plant stress responses. Coleus (Plectranthus scutellarioides) plants were subjected to four common abiotic stress conditions individually: high soil salinity, drought, chilling exposure, and light saturation. Plants were examined in vivo after stress induction, and changes in the concentration levels of the reactive oxygen-scavenging pigments were observed by Raman microscopic and remote spectroscopic systems. The molecular concentration changes were further validated by commonly accepted chemical extraction (destructive) methods. Raman spectroscopy also allows simultaneous interrogation of various pigments in plants. For example, we found a unique negative correlation in the concentration levels of anthocyanins and carotenoids, which indicates that the plant stress response is fine-tuned to protect against stress-induced damage. This precision spectroscopic technique holds promise for the future development of high-throughput screening for plant phenotyping and the quantification of biologically or commercially relevant molecules, such as antioxidants and pigments.
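A minimal sketch of how such band intensities and their correlation might be computed is given below. The band positions (roughly 1525 cm⁻¹ for carotenoids and 545 cm⁻¹ for anthocyanins), the polynomial baseline and the synthetic spectra are assumptions for illustration, not the authors' processing pipeline.

```python
# Illustrative sketch only: extract band intensities from Raman spectra and
# test whether carotenoid- and anthocyanin-associated intensities are
# negatively correlated across plants. Band positions (~1525 cm^-1 for
# carotenoids, ~545 cm^-1 for anthocyanins), the crude polynomial baseline and
# the synthetic data are assumptions, not the authors' processing pipeline.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
shift = np.linspace(300, 1800, 1500)          # Raman shift axis, cm^-1
spectra = rng.random((40, shift.size)) + 1.0  # 40 plants (placeholder spectra)

def band_intensity(spectrum, center, width=15):
    # Subtract a rough polynomial baseline, then average over the band window.
    baseline = np.polyval(np.polyfit(shift, spectrum, 3), shift)
    corrected = spectrum - baseline
    window = (shift > center - width) & (shift < center + width)
    return corrected[window].mean()

carotenoid = np.array([band_intensity(s, 1525) for s in spectra])
anthocyanin = np.array([band_intensity(s, 545) for s in spectra])
r, p = pearsonr(carotenoid, anthocyanin)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # a negative r would mirror the reported trend
```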


Author(s):  
M. Herrero-Huerta ◽  
V. Meline ◽  
A. S. Iyer-Pascuzzi ◽  
A. M. Souza ◽  
M. R. Tuinstra ◽  
...  

Abstract. Breakthrough imaging technologies are a potential solution to the plant phenotyping bottleneck in marker-assisted breeding and genetic mapping. X-ray CT (computed tomography) technology is able to acquire a digital twin of the root system architecture (RSA); however, advances in computational methods to digitally model the spatial disposition of root system networks are urgently required. We extracted the root skeleton of the digital twin based on 3D data from X-ray CT, optimized for high-throughput and robust results. Significant root architectural traits such as number, length, growth angle, elongation rate and branching map can be easily extracted from the skeleton. The curve-skeleton extraction is computed with a constrained Laplacian smoothing algorithm, and this skeletal structure drives the registration procedure in temporal series. The experiment was carried out at the Ag Alumni Seed Phenotyping Facility (AAPF) at Purdue University in West Lafayette (IN, USA). Three samples of tomato root at two different time points and three samples of corn root at three different time points were scanned. Visual inspection confirms that the skeleton accurately matches the shape of the RSA and supports the feasibility of the proposed methodology, which scales to comprehensive, high-throughput root phenotyping.
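To convey the core idea of Laplacian-based contraction, the sketch below repeatedly moves each point of a synthetic root point cloud toward the centroid of its neighbours while an anchor term keeps it near its original position. The neighbourhood size, weights and iteration count are assumptions; the paper's constrained Laplacian skeletonization is considerably more elaborate.

```python
# Rough sketch of the idea behind Laplacian contraction for curve-skeleton
# extraction: points of a root point cloud (e.g. segmented from X-ray CT) are
# iteratively moved toward the centroid of their neighbours, with an anchor
# term constraining them to stay near their original positions. Parameters are
# assumptions; this is not the paper's algorithm.
import numpy as np
from scipy.spatial import cKDTree

def contract_points(points, k=8, lam=0.7, anchor=0.3, iterations=30):
    """Iteratively contract a 3D point cloud toward a curve-like skeleton."""
    original = points.copy()
    current = points.copy()
    for _ in range(iterations):
        tree = cKDTree(current)
        _, idx = tree.query(current, k=k + 1)       # first neighbour is the point itself
        neighbour_mean = current[idx[:, 1:]].mean(axis=1)
        laplacian_step = neighbour_mean - current   # discrete Laplacian direction
        current = current + lam * laplacian_step + anchor * (original - current)
    return current

# Hypothetical usage with a synthetic noisy 'root segment'.
rng = np.random.default_rng(2)
t = rng.uniform(0, 1, 2000)
cloud = np.c_[0.1 * np.sin(6 * t), 0.1 * np.cos(6 * t), t] + rng.normal(0, 0.02, (2000, 3))
skeleton_points = contract_points(cloud)
print(skeleton_points.shape)  # contracted points approximating the centreline
```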


Author(s):  
Aditya Pratap ◽  
Rakhi Tomar ◽  
Jitendra Kumar ◽  
Vankat Raman Pandey ◽  
Suhel Mehandi ◽  
...  

Agronomy ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 175 ◽  
Author(s):  
Orly Enrique Apolo-Apolo ◽  
Manuel Pérez-Ruiz ◽  
Jorge Martínez-Guanter ◽  
Gregorio Egea

Remote and non-destructive estimation of leaf area index (LAI) has been a challenge in the last few decades, as the direct and indirect methods available are laborious and time-consuming. The recent emergence of high-throughput plant phenotyping platforms has increased the need to develop new phenotyping tools for better decision-making by breeders. In this paper, a novel model based on artificial intelligence algorithms and nadir-view red green blue (RGB) images taken from a terrestrial high-throughput phenotyping platform is presented. The model combines numerical data collected in a wheat breeding field with visual features extracted from the images to make rapid and accurate LAI estimations. Model-based LAI estimations were validated against LAI measurements determined non-destructively using an allometric relationship obtained in this study. The model performance was also compared with LAI estimates obtained by other classical indirect methods based on bottom-up hemispherical images and gap fraction theory. Model-based LAI estimations were highly correlated with ground-truth LAI. The model performance was slightly better than that of the hemispherical image-based method, which tended to underestimate LAI. These results show the great potential of the developed model for near real-time LAI estimation, which could be further improved by increasing the dataset used to train the model.
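As a hedged sketch of the general approach, the example below combines an image-derived feature (canopy cover as the green-pixel fraction of a nadir RGB image) with numerical covariates in a random-forest regression of LAI. The feature choice, the model and the synthetic training data are assumptions for illustration; the paper's model relies on its own AI algorithms and field measurements.

```python
# Hedged sketch of the general approach: combine a visual feature extracted
# from nadir RGB images (here, canopy cover as the green-pixel fraction) with
# numerical plot data in a regression model to predict LAI. The feature, the
# random-forest model and the synthetic training data are assumptions; the
# paper's model uses its own AI algorithms and field data.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def canopy_cover(image_path):
    """Green-pixel fraction of a nadir RGB plot image (hypothetical feature)."""
    hsv = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (90, 255, 255))
    return np.count_nonzero(mask) / mask.size

# Placeholder training set standing in for per-plot cover values (from
# canopy_cover), agronomic covariates and allometric ground-truth LAI.
rng = np.random.default_rng(3)
cover = rng.uniform(0.1, 0.9, 150)
covariates = rng.random((150, 2))            # e.g. growth stage, plant height
lai = 0.5 + 6.0 * cover + rng.normal(0, 0.3, 150)

features = np.column_stack([cover, covariates])
model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, features, lai, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 on the synthetic data: {r2:.2f}")
```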

