An overview of machine vision technologies for agricultural robots and automation

Author(s):  
John Billingsley

Author(s):  
Reza Rahmadian ◽  
Mahendra Widyartono

Modern technology has led to the development of agricultural robots that help increase agricultural productivity. Numerous studies have been conducted to improve the capability of robots to assist in agricultural operations, leading to the development of autonomous robots. The aim of this development is to reduce agriculture’s dependency on operators and workers, and to reduce the inaccuracy caused by human error. There are two important components in the development of autonomous harvesting: machine vision, for detecting the crops and guiding the robot through the field, and an actuator, for grabbing and picking the crops or fruit.
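A minimal sketch of how the two components might fit together is given below, assuming an OpenCV-based excess-green segmentation for crop detection and a hypothetical gripper interface for the actuator; the threshold values and function names are illustrative assumptions, not the authors' method.

```python
# Sketch of the two components described above: vision-based crop detection and
# guidance, plus an actuation step. Thresholds and the gripper interface are
# illustrative assumptions only.
import cv2
import numpy as np

def detect_crop_centroid(frame_bgr):
    """Segment green vegetation with an excess-green index and return its centroid."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    exg = 2 * g - r - b                        # excess-green index highlights plants
    mask = (exg > 20).astype(np.uint8) * 255   # fixed threshold chosen for illustration
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def steering_offset(frame_bgr):
    """Horizontal offset of the detected crop from the image centre, in pixels."""
    centroid = detect_crop_centroid(frame_bgr)
    if centroid is None:
        return 0.0
    return centroid[0] - frame_bgr.shape[1] / 2.0

def pick_if_centred(frame_bgr, gripper, tolerance_px=15):
    """Trigger a (hypothetical) gripper once the target is roughly centred."""
    if abs(steering_offset(frame_bgr)) < tolerance_px:
        gripper.close()   # hypothetical actuator interface, not from the paper
```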


Author(s):  
Sebastian Haug ◽  
Jörn Ostermann

Small agricultural robots that are capable of sensing and manipulating the field environment are a promising approach towards more ecological, sustainable and human-friendly agriculture. This chapter proposes a machine vision approach for plant classification in the field and discusses its possible application in the context of robot-based precision agriculture. The challenges of machine vision in the field are discussed using the example of plant classification for weed control. Automatic crop/weed discrimination enables new weed control strategies in which single weed plants are treated individually. System development and evaluation are carried out on a dataset of images captured under field conditions in a commercial organic carrot farm with the autonomous field robot Bonirob. Results indicate plant classification performance with 93% average accuracy.
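The abstract does not detail the classification pipeline, but a generic crop/weed discrimination sketch along these lines could look as follows, assuming excess-green vegetation segmentation, simple per-plant shape features, and an off-the-shelf random forest; it does not reproduce the Bonirob system or its reported 93% accuracy.

```python
# Generic crop/weed discrimination sketch: segment vegetation, extract simple
# per-plant shape features, and classify them. Features, thresholds, and the
# classifier are demonstration choices, not the chapter's actual pipeline.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def vegetation_mask(frame_bgr):
    """Binary mask of vegetation via an excess-green threshold (assumed value)."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    return ((2 * g - r - b) > 20).astype(np.uint8)

def plant_features(mask):
    """Per-plant shape features: area, aspect ratio, and solidity."""
    feats = []
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if area < 50:                      # skip tiny blobs (noise)
            continue
        x, y, w, h = cv2.boundingRect(c)
        hull_area = cv2.contourArea(cv2.convexHull(c))
        feats.append([area, w / float(h), area / max(hull_area, 1.0)])
    return np.array(feats)

# Training on labelled examples (X_train: feature rows, y_train: 0 = crop, 1 = weed):
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# predictions = clf.predict(plant_features(vegetation_mask(image)))
```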


Author(s):  
Reza Rahmadian ◽  
Mahendra Widyartono

Interest in robotic agriculture systems has led to the development of agricultural robots that help improve farming operations and increase agricultural productivity. Much research has been conducted to increase the capability of robots to assist in agricultural operations, leading to the development of autonomous robots. This development provides a means of reducing agriculture’s dependency on operators and workers, and of reducing the inaccuracy caused by human error. There are two important components in the development of autonomous navigation: machine vision, for guiding the robot through the crops, and GPS technology, for guiding the robot through the agricultural fields.
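A rough sketch of combining the two guidance components is shown below; the GPS bearing computation is standard, while the vision-derived row offset, the blending gain, and the data structures are assumptions made only for illustration.

```python
# Sketch of blending a GPS bearing toward the next waypoint with a vision-based
# lateral row offset. The gain and the offset source are illustrative assumptions.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2) in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def steering_command(robot_pose, waypoint, row_offset_px, k_offset=0.05):
    """Blend the GPS waypoint bearing with a vision-derived row offset (pixels)."""
    desired = bearing_deg(robot_pose["lat"], robot_pose["lon"],
                          waypoint["lat"], waypoint["lon"])
    heading_error = (desired - robot_pose["heading"] + 540.0) % 360.0 - 180.0
    return heading_error - k_offset * row_offset_px   # steering correction in degrees
```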


Author(s):  
Wesley E. Snyder ◽  
Hairong Qi
Keyword(s):  

2018 ◽  
pp. 143-149 ◽  
Author(s):  
Ruijie CHENG

In order to further improve the energy efficiency of classroom lighting, a classroom lighting energy-saving control system based on machine vision technology is proposed. First, according to the characteristics of machine vision design technology, a quantum image storage model algorithm is proposed; the Back Propagation neural network algorithm is used to analyze the technology, and a multi-feedback model for energy-saving control of classroom lighting is constructed. Finally, the algorithm and the lighting model are simulated. The test results show that the proposed design can optimize the classroom lighting control system: different numbers of signals can comprehensively control the brightness of the classroom lights, reduce the waste of lighting resources, and achieve the goal of energy saving and emission reduction. The technology is worth popularizing further in practice.
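As a rough illustration of the Back Propagation element of such a control loop, the sketch below trains a tiny network to map sensed signals (here assumed to be occupancy and daylight level) to a dimming level; the paper's quantum image storage model and multi-feedback structure are not reproduced.

```python
# Tiny Back Propagation network mapping two assumed inputs (occupancy, daylight)
# to a dimming level in [0, 1]. Architecture, inputs, and targets are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)                      # hidden layer
    return h, 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid dimming level

def train_step(x, target, lr=0.1):
    global W1, b1, W2, b2
    h, y = forward(x)
    grad_y = (y - target) * y * (1 - y)           # output-layer error (sigmoid)
    grad_h = (grad_y @ W2.T) * (1 - h ** 2)       # back-propagated hidden error
    W2 -= lr * h.T @ grad_y;  b2 -= lr * grad_y.sum(axis=0)
    W1 -= lr * x.T @ grad_h;  b1 -= lr * grad_h.sum(axis=0)

# Example: learn to dim the lights when daylight is high and occupancy is low
x = np.array([[0.1, 0.9]])            # low occupancy, high daylight
for _ in range(200):
    train_step(x, np.array([[0.2]]))  # target: 20% lighting level
```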


1997 ◽  
Vol 117 (10) ◽  
pp. 1339-1344
Author(s):  
Katsuhiko Sakaue ◽  
Hiroyasu Koshimizu
Keyword(s):  

2005 ◽  
Vol 125 (11) ◽  
pp. 692-695
Author(s):  
Kazunori UMEDA ◽  
Yoshimitsu AOKI
Keyword(s):  

Fast track article for IS&T International Symposium on Electronic Imaging 2020: Stereoscopic Displays and Applications proceedings.


2020 ◽  
Vol 64 (5) ◽  
pp. 50411-1-50411-8
Author(s):  
Hoda Aghaei ◽  
Brian Funt

Abstract: For research in the field of illumination estimation and color constancy, there is a need for ground-truth measurement of the illumination color at many locations within multi-illuminant scenes. A practical approach to obtaining such ground-truth illumination data is presented here. The proposed method involves using a drone to carry a gray ball of known percent surface spectral reflectance throughout a scene while photographing it frequently during the flight using a calibrated camera. The captured images are then post-processed. In the post-processing step, machine vision techniques are used to detect the gray ball within each frame. The camera RGB of light reflected from the gray ball provides a measure of the illumination color at that location. In total, the dataset contains 30 scenes with 100 illumination measurements on average per scene. The dataset is available for download free of charge.
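A minimal sketch of the described post-processing step is given below, assuming Hough-circle detection of the gray ball and a mean-RGB chromaticity estimate inside it; the detection parameters and the omission of camera calibration are simplifications, not the authors' exact pipeline.

```python
# Locate the gray ball in a frame and take the mean camera RGB inside it as the
# illumination estimate. HoughCircles and its parameters are illustrative only.
import cv2
import numpy as np

def illumination_rgb(frame_bgr):
    """Return normalized (r, g, b) chromaticity of light reflected off the gray ball."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=100, param2=40, minRadius=10, maxRadius=120)
    if circles is None:
        return None
    x, y, radius = circles[0][0]
    mask = np.zeros(gray.shape, np.uint8)
    cv2.circle(mask, (int(x), int(y)), int(radius * 0.8), 255, -1)  # stay inside the ball
    b, g, r = cv2.mean(frame_bgr, mask=mask)[:3]
    total = r + g + b
    return (r / total, g / total, b / total) if total > 0 else None
```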

