plant image: Recently Published Documents

TOTAL DOCUMENTS: 68 (five years: 34)
H-INDEX: 7 (five years: 3)

2021 ◽  
Vol 9 (2) ◽  
pp. 283-293
Author(s):  
Hema M S ◽  
Niteesha Sharma ◽  
Y Sowjanya ◽  
Ch. Santoshini ◽  
R Sri Durga ◽  
...  

Every year, India loses a significant amount of its annual crop yield to unidentified plant diseases. The traditional method of disease detection is manual examination by farmers or experts, which can be time-consuming and inaccurate, and is proving infeasible for many small and medium-sized farms around the world. To mitigate this issue, a computer-aided disease recognition model is proposed that classifies leaf images with deep convolutional networks. In this paper, VGG16 and ResNet34 CNNs were used to detect plant disease. The pipeline has three processing steps, namely feature extraction, image downsizing, and classification. In the CNN, the convolutional layers extract features from the plant image, the pooling layers downsize the image, and disease classification is performed in the dense layer. The proposed model can recognize 38 different types of plant disease across 14 different plants, with the ability to distinguish plant leaves from their surroundings. The performance of VGG16 and ResNet34 was compared, with accuracy, sensitivity, and specificity as performance metrics. The system also helps provide personalized recommendations to farmers based on soil features, temperature, and humidity.
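The three-step pipeline the abstract describes (convolution for feature extraction, pooling for downsizing, a dense layer for classification) can be sketched in miniature with NumPy. The kernel, patch size, and random weights below are illustrative stand-ins, not the paper's VGG16/ResNet34 models; only the 38-class output matches the abstract:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (the feature-extraction step)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """Max pooling (the image-downsizing step)."""
    h, w = img.shape[0] // size * size, img.shape[1] // size * size
    return img[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def dense_softmax(features, weights, bias):
    """Dense layer with softmax (the classification step)."""
    logits = features @ weights + bias
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

rng = np.random.default_rng(0)
leaf = rng.random((8, 8))                    # stand-in for a leaf image patch
edge_kernel = np.array([[1., 0., -1.]] * 3)  # simple vertical-edge detector
fmap = max_pool(conv2d(leaf, edge_kernel))   # extract features, then downsize
probs = dense_softmax(fmap.ravel(), rng.random((fmap.size, 38)), np.zeros(38))
```

In the real networks the convolution and pooling stages are stacked many times with learned kernels; the final `probs` vector here plays the role of the 38-way disease prediction.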


Biology ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1140
Author(s):  
Woohyuk Jang ◽  
Eui Chul Lee

Owing to climate change and indiscriminate human development, the populations of endangered species have been decreasing. To protect endangered species, many countries worldwide have adopted the CITES treaty to prevent the extinction of endangered plants and animals. Moreover, research has been conducted using diverse approaches, particularly deep learning-based animal and plant image recognition methods. In this paper, we propose an automated image classification method for 11 endangered parrot species included in CITES, including subspecies that are very similar in appearance. Images were collected from the Internet, and an indigenous database was built in cooperation with Seoul Grand Park Zoo. The dataset for deep learning training consisted of a 70% training set, a 15% validation set, and a 15% test set. In addition, a data augmentation technique was applied to reduce data collection limitations and prevent overfitting. The performance of various backbone CNN architectures (i.e., VGGNet, ResNet, and DenseNet) was compared using the SSD model. The experiment measured test set performance for each trained model, and the results show that DenseNet18 performed best, with an mAP of approximately 96.6% and an inference time of 0.38 s.
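The 70/15/15 split described above is straightforward to reproduce with the standard library; the file names and seed below are hypothetical placeholders, not the paper's data:

```python
import random

def split_dataset(items, train=0.70, val=0.15, seed=42):
    """Shuffle and split into 70% train / 15% validation / 15% test."""
    items = list(items)
    random.Random(seed).shuffle(items)       # deterministic shuffle
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Hypothetical image file names standing in for the parrot database
images = [f"parrot_{i:04d}.jpg" for i in range(1000)]
train_set, val_set, test_set = split_dataset(images)
```

Fixing the seed keeps the split reproducible across training runs, which matters when comparing backbones (VGGNet vs. ResNet vs. DenseNet) on identical data.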


Agriculture ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1098
Author(s):  
Michael Henke ◽  
Kerstin Neumann ◽  
Thomas Altmann ◽  
Evgeny Gladilin

Background. Efficient analysis of the large image data produced in greenhouse phenotyping experiments is often challenged by high variability of optical plant and background appearance, which requires advanced classification methods and reliable ground truth data for their training. In the absence of appropriate computational tools, ground truth data have to be generated manually, which is a time-consuming task. Methods. Here, we present an efficient GUI-based software solution that reduces the task of plant image segmentation to manual annotation of a small number of image regions automatically pre-segmented using k-means clustering of Eigen-colors (kmSeg). Results. Our experimental results show that, in contrast to other clustering techniques, k-means enables computationally efficient pre-segmentation of large plant images at their original resolution. Thereby, the binary segmentation of plant images into fore- and background regions is performed within a few minutes, with an average accuracy of 96–99% validated by direct comparison with ground truth data. Conclusions. Primarily developed for efficient ground truth segmentation and phenotyping of greenhouse-grown plants, the kmSeg tool can be applied to efficient labeling and quantitative analysis of arbitrary images exhibiting distinctive differences between the colors of fore- and background structures.
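The k-means pre-segmentation idea can be sketched in plain NumPy on synthetic green-vs-grey pixel colors. This is an illustration of the clustering step only, not the kmSeg implementation or its Eigen-color transform:

```python
import numpy as np

def kmeans_colors(pixels, k=2, iters=20):
    """Cluster pixel colors with plain k-means; returns labels and centers."""
    # Spread the initial centers across the data for a deterministic start
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)           # assign pixels to nearest center
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)  # update centers
    return labels, centers

# Synthetic image: green "plant" pixels followed by grey "background" pixels
rng = np.random.default_rng(1)
plant = rng.normal([0.2, 0.8, 0.2], 0.05, (500, 3))
background = rng.normal([0.5, 0.5, 0.5], 0.05, (1500, 3))
pixels = np.vstack([plant, background])

labels, centers = kmeans_colors(pixels, k=2)
plant_cluster = int(centers[:, 1].argmax())     # greenest center = plant
```

The annotator's job then shrinks from labeling millions of pixels to labeling each pre-segmented color region once as plant or background, which is where the few-minutes turnaround reported above comes from.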


2021 ◽  
Author(s):  
Jorge Alberto Gutierrez Ortega ◽  
Noah Fahlgren ◽  
Malia Gehan
Keyword(s):  

2021 ◽  
Author(s):  
Jorge Alberto Gutierrez Ortega ◽  
S. Elizabeth Castillo ◽  
Malia Gehan ◽  
Noah Fahlgren
Keyword(s):  

2021 ◽  
pp. 393-402
Author(s):  
Min Li

In this paper, to meet the need for stable access to visual information in the intelligent management of greenhouse tomatoes, a color correction method for tomato plant images based on high-dynamic-range imaging was studied, in order to overcome the limitation that complex natural lighting conditions place on the stable color presentation of working objects. To address the color distortion caused by temporal and spatial fluctuations of illumination in the greenhouse and by sudden changes of radiation intensity against a complex background, a calibration method for the camera radiation response model based on multiple-exposure-intensity images is proposed. The fusion effect of multi-band images was evaluated by field tests. The results show that after multi-band image fusion, the brightness difference between the recognized target and other near-color background is significantly enhanced, and brightness fluctuation of the background is suppressed. The color correction method was verified by field experiments, and the grey-level information, dispersion, and clarity of tomato plant images in different scenes and periods were improved.
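The paper calibrates a camera radiation response model from multiple-exposure images. As a simplified illustration of why multiple exposures recover radiance that any single shot clips, the sketch below assumes a known gamma-type response and fuses exposures with a hat-shaped weighting; the response curve and weights are assumptions for the sketch, not the authors' calibration method:

```python
import numpy as np

GAMMA = 2.2  # assumed camera response: pixel = clip(radiance * t, 0, 1) ** (1/GAMMA)

def to_radiance(pixel, exposure_t):
    """Invert the assumed response curve to recover relative radiance."""
    return pixel ** GAMMA / exposure_t

def fuse_exposures(images, times):
    """Weighted fusion: trust mid-range pixels, discount clipped ones."""
    acc = np.zeros_like(images[0])
    wsum = np.zeros_like(images[0])
    for img, t in zip(images, times):
        w = 1.0 - np.abs(img - 0.5) * 2.0      # hat weight, peaks at mid-grey,
        acc += w * to_radiance(img, t)         # zero at fully dark/saturated
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Simulate three exposures of the same scene
radiance = np.linspace(0.05, 2.0, 100)         # ground-truth scene radiance
times = [0.25, 1.0, 4.0]                       # short / normal / long exposure
shots = [np.clip(radiance * t, 0, 1) ** (1 / GAMMA) for t in times]
hdr = fuse_exposures(shots, times)             # recovers radiance despite clipping
```

The long exposure saturates on bright regions and the short one underexposes dark ones; the weighting lets each pixel be reconstructed from whichever exposure recorded it reliably, which is the mechanism behind the suppressed background brightness fluctuation reported above.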


2021 ◽  
Vol 1988 (1) ◽  
pp. 012034
Author(s):  
Suhaila Abd Halim ◽  
Syafiqah Md Lazim

2021 ◽  
Vol 8 ◽  
Author(s):  
Sufola Das Chagas Silva Araujo ◽  
V. S. Malemath ◽  
K. Meenakshi Sundaram

Instinctive detection of infections by carefully inspecting the signs on plant leaves is an easier and more economical way to diagnose different plant leaf diseases. This defines a way in which symptoms of diseased plants are detected using the concept of feature learning (Sulistyo et al., 2020). The physical method of detecting and analyzing diseases takes a lot of time and is prone to error. A method has therefore been developed to identify the symptoms simply by acquiring an image of the chili plant leaf. The methodology involves building an image database, extracting the region of interest, training and testing images, extracting symptoms/features of the plant image using moments, building the symptom feature-vector dataset, and finding the correlation and similarity between the different symptoms of the plant (Sulistyo et al., 2020). This approach can detect different diseases of the plant.
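A rough sketch of the moment-based symptom vector and correlation-similarity idea follows; the moment orders, normalization, and synthetic images are illustrative choices, not the paper's exact feature set:

```python
import numpy as np

def moment_features(img, max_order=2):
    """Normalized central moments of a greyscale leaf image as a symptom vector."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00  # intensity centroid
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if 1 <= p + q <= max_order + 1:
                mu = ((x - cx) ** p * (y - cy) ** q * img).sum()
                feats.append(mu / m00 ** (1 + (p + q) / 2))  # scale-normalized
    return np.array(feats)

def similarity(a, b):
    """Pearson correlation between two symptom vectors."""
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
healthy = rng.random((32, 32))
diseased = healthy.copy()
diseased[8:16, 8:16] += 2.0       # a bright lesion-like region off-center

v_healthy = moment_features(healthy)
v_diseased = moment_features(diseased)
score = similarity(v_healthy, moment_features(healthy + 1e-6))
```

Two near-identical leaves produce near-identical moment vectors (correlation close to 1), while a lesion shifts the centroid and second-order moments, so the symptom vector changes and the image can be matched against the disease classes in the feature dataset.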


Author(s):  
Eusun Han ◽  
Abraham George Smith ◽  
Roman Kemper ◽  
Rosemary White ◽  
John Kirkegaard ◽  
...  

Abstract. The scale of root quantification in research is often limited by the time required for sampling, measurement, and processing of samples. Recent developments in convolutional neural networks (CNNs) have made faster and more accurate plant image analysis possible, which may significantly reduce the time required for root measurement, but challenges remain in making these methods accessible to researchers without in-depth knowledge of machine learning. We analyzed root images acquired from three destructive root samplings using the RootPainter CNN software, which features an interface for corrective annotation for easier use. Root scans with and without non-root debris were used to test whether training a model, i.e., learning from labeled examples, can effectively exclude the debris, by comparing the end results with measurements from clean images. Root images acquired from soil profile walls and from the cross-sections of soil cores were also used for training, and the derived measurements were compared with manual measurements. After 200 minutes of training on each dataset, significant relationships between manual measurements and RootPainter-derived data were noted for monolith (R² = 0.99), profile-wall (R² = 0.76), and core-break (R² = 0.57) samplings. The rooting density derived from images with debris was not significantly different from that derived from clean images after processing with RootPainter. Rooting density was also successfully calculated from both profile-wall and soil-core images, and in each case the gradient of root density with depth was not significantly different from manual counts. Differences in root length density (RLD; cm cm⁻³) between crops with contrasting root systems were captured using automatic segmentation, both at soil profiles with high RLD (1 to 5 cm cm⁻³) and at low RLD (0.1 to 0.3 cm cm⁻³).
Our results demonstrate that the proposed approach using CNNs can lead to substantial reductions in root sample processing workloads, increasing the potential scale of future root investigations.
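The reported R² values quantify how well automated measurements track manual ones. A minimal sketch of that comparison, computed against the identity line with hypothetical paired RLD values (the paper's exact regression procedure and data are not shown here):

```python
import numpy as np

def r_squared(manual, automated):
    """Coefficient of determination of automated vs. manual measurements,
    taken against the 1:1 line (perfect agreement gives R² = 1)."""
    manual = np.asarray(manual, dtype=float)
    automated = np.asarray(automated, dtype=float)
    ss_res = np.sum((manual - automated) ** 2)          # disagreement
    ss_tot = np.sum((manual - manual.mean()) ** 2)      # spread of manual data
    return 1.0 - ss_res / ss_tot

# Hypothetical paired measurements (root length density, cm cm^-3)
manual = [0.1, 0.3, 0.8, 1.5, 2.2, 3.0, 4.1, 5.0]
automated = [0.12, 0.28, 0.85, 1.4, 2.3, 2.9, 4.0, 5.1]
r2 = r_squared(manual, automated)
```

An R² near 1 (as for the monolith dataset) means the automated segmentation can substitute for manual measurement with little loss; lower values (core-break) signal where corrective annotation or more training time is still needed.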

