Real-Time Vegetables Recognition System based on Deep Learning Network for Agricultural Robots

Author(s):  
Yang-Yang Zheng ◽  
Jian-Lei Kong ◽  
Xue-Bo Jin ◽  
Ting-Li Su ◽  
Ming-Jun Nie ◽  
...  
Algorithms ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 331


Author(s):  
Joseph Gesnouin ◽  
Steve Pechberti ◽  
Guillaume Bresson ◽  
Bogdan Stanciulescu ◽  
Fabien Moutarde

Understanding the behaviors and intentions of humans is still one of the main challenges for vehicle autonomy. More specifically, inferring the intentions and actions of vulnerable road users, namely pedestrians, in complex situations such as urban traffic scenes remains a difficult task and a blocking point towards more highly automated vehicles. Answering the question “Is the pedestrian going to cross?” is a good starting point in the quest for the fifth level of autonomous driving. In this paper, we address the problem of real-time discrete intention prediction of pedestrians in urban traffic environments by linking the dynamics of a pedestrian’s skeleton to an intention. Hence, we propose SPI-Net (Skeleton-based Pedestrian Intention network): a representation-focused multi-branch network that combines features from 2D pedestrian body poses to predict pedestrians’ discrete intentions. Experimental results show that SPI-Net achieves 94.4% accuracy in pedestrian crossing prediction on the JAAD data set while remaining efficient enough for real-time scenarios: SPI-Net performs roughly one inference every 0.25 ms on one GPU (RTX 2080 Ti), or every 0.67 ms on one CPU (Intel Core i7-8700K).
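The pipeline described above starts from sequences of 2D body-pose keypoints. A minimal sketch of one common preprocessing step for such skeleton features, normalizing each frame so the representation is invariant to where the pedestrian appears in the image and how large they are (the function and joint layout here are illustrative assumptions, not SPI-Net's actual representation):

```python
import numpy as np

def pose_features(sequence):
    """Flatten a (T, J, 2) sequence of 2D keypoints into one feature vector.

    Each frame is centered on its keypoint centroid and scaled by the
    bounding-box diagonal, so the result does not depend on the pedestrian's
    position or apparent size in the image. (Illustrative normalization
    only; SPI-Net's representation-focused branches differ.)
    """
    seq = np.asarray(sequence, dtype=float)               # (T, J, 2)
    centered = seq - seq.mean(axis=1, keepdims=True)      # remove per-frame centroid
    extent = centered.max(axis=1) - centered.min(axis=1)  # (T, 2) box sizes
    diag = np.linalg.norm(extent, axis=-1)                # (T,) box diagonals
    diag = np.where(diag == 0.0, 1.0, diag)               # guard degenerate frames
    return (centered / diag[:, None, None]).reshape(-1)

# Example: 14 frames of 17 COCO-style keypoints -> one 476-dim vector
feats = pose_features(np.random.rand(14, 17, 2))
```

Because the normalization removes translation and scale, the same gesture performed near or far from the camera maps to the same feature vector, which is what lets a classifier focus on the skeleton's dynamics.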


Author(s):  
Chinedu Godswill Olebu ◽  
Jide Julius Popoola ◽  
Michael Rotimi Adu ◽  
Yekeen Olajide Olasoji ◽  
Samson Adenle Oyetunji

In face recognition systems, recognition accuracy is strongly affected by the varying degrees of illumination on both the probe and testing faces. In particular, changes in the direction and the intensity of illumination are the two major contributors to illumination variation. Different approaches have been proposed to overcome these challenges. The study presented in this paper, however, proposes a novel approach that uses deep learning, in a MATLAB environment, to classify face images under varying illumination conditions. One thousand one hundred (1100) face images were obtained from the extended Yale B database and divided into ten (10) folders. Each folder was further divided into seven (7) subsets according to the azimuthal angle of the illumination used. The images were filtered using a combination of linear filters and an anisotropic diffusion filter, and the filtered images were then segmented into light and dark zones with respect to the azimuthal and elevation angles of illumination. Eighty percent (80%) of the images in each subset formed the training set used to train the deep learning network, while the remaining twenty percent (20%) formed the testing set used to assess its classification accuracy. Over three successive iterations, the performance evaluation showed classification accuracy varying from 81.82% to 100.00%.
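The abstract names an anisotropic diffusion filter among the preprocessing steps. A minimal NumPy sketch of the classic Perona-Malik scheme, which smooths homogeneous regions while limiting diffusion across edges (the parameter values and the periodic boundary handling are illustrative choices, not those of the paper):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.2):
    """Perona-Malik anisotropic diffusion with exponential conductance.

    Smooths within homogeneous regions while limiting diffusion across
    strong gradients (edges). Periodic boundaries via np.roll are used
    for brevity; kappa/gamma defaults are illustrative.
    """
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # differences to the four nearest neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        dw = np.roll(u, 1, axis=1) - u
        de = np.roll(u, -1, axis=1) - u
        # conductance is ~1 in flat areas and ~0 across sharp edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        cw, ce = np.exp(-(dw / kappa) ** 2), np.exp(-(de / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + cw * dw + ce * de)
    return u

rng = np.random.default_rng(0)
noisy = rng.normal(size=(32, 32))
smoothed = anisotropic_diffusion(noisy)   # noise variance drops
```

This edge-preserving behaviour is why such a filter pairs well with the light/dark-zone segmentation step: it suppresses sensor noise without blurring the illumination boundaries the segmentation depends on.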


2021 ◽  
Vol 11 (1) ◽  
pp. 339-348
Author(s):  
Piotr Bojarczak ◽  
Piotr Lesiak

Abstract The article uses images from Unmanned Aerial Vehicles (UAVs) for rail diagnostics. The main advantage of such a solution over traditional surveys performed with measuring vehicles is that train traffic does not need to be reduced. The authors limited the study to the diagnosis of hazardous split defects in rails. An algorithm has been proposed that detects them with an efficiency of about 81% for defects no smaller than 6.9% of the rail head width. It uses the FCN-8 deep-learning network, implemented in the TensorFlow environment, to extract the rail head by image segmentation. Using this type of network for segmentation increases the robustness of the algorithm to changes in the brightness of the recorded rail images, which is of fundamental importance under the variable conditions of image recording by UAVs. The defects in the rail head are then detected by an algorithm written in Python using the OpenCV library. To locate a defect, it uses the contour of the extracted rail head together with a rectangle circumscribed around it. The use of UAVs together with artificial intelligence to detect split defects is an important element of novelty in this work.
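The localization step described above, circumscribing a rectangle around the segmented rail head and judging defects against the 6.9%-of-head-width floor, can be sketched as follows (a NumPy stand-in for the OpenCV contour routines; the function names and the synthetic mask are hypothetical):

```python
import numpy as np

def head_bounding_box(mask):
    """Axis-aligned rectangle circumscribing the rail-head pixels of a
    binary segmentation mask; returns (x, y, w, h) like cv2.boundingRect."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # segmentation found no rail head
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

def defect_is_detectable(defect_width_px, head_width_px, min_ratio=0.069):
    """The paper reports ~81% detection for defects >= 6.9% of head width."""
    return defect_width_px / head_width_px >= min_ratio

mask = np.zeros((40, 100), dtype=bool)
mask[10:22, 15:95] = True          # synthetic rail-head region
bbox = head_bounding_box(mask)     # (15, 10, 80, 12)
```

Working in coordinates relative to this circumscribed rectangle is what lets a defect's extent be expressed as a fraction of the head width regardless of the UAV's altitude or camera resolution.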


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s successfully detects the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals who are learning to work with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
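Detection-accuracy claims like the one above are typically scored by matching each predicted box to an annotation via intersection over union (IoU). A small sketch under that assumption; the eight-class label list is hypothetical, since the paper's exact classes are not given here:

```python
# Hypothetical eight-part label set; the paper does not list its exact classes.
CAR_PARTS = ["battery", "alternator", "radiator", "oil_cap",
             "air_filter", "dipstick", "fuse_box", "coolant_tank"]

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes -- the usual
    criterion for deciding whether a detection matches an annotation."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A prediction is commonly counted correct when IoU >= 0.5 with the annotation
match = iou((10, 10, 50, 40), (12, 12, 48, 42)) >= 0.5
```

The same matching rule would apply whether the detections are rendered on a screen or overlaid on the user's view through augmented reality glasses.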

