Deep Learning for Gap Crossing Ability of Ground Vehicles

Author(s): Benjamin S. Parsons, Jing-Ru C. Cheng

2019, Vol 9 (18), pp. 3789
Author(s): Jiyoun Moon, Beom-Hee Lee

Natural-language-based scene understanding can enable heterogeneous robots to cooperate efficiently in large, unstructured environments. However, studies on symbolic planning rarely consider the problem of acquiring semantic knowledge about the surrounding environment, even though recent deep learning methods show outstanding performance for semantic scene understanding from natural language. In this paper, a cooperation framework that connects deep learning techniques with a symbolic planner for heterogeneous robots is proposed. The framework consists of a scene understanding engine, a planning agent, and a knowledge engine. Neural networks are employed for natural-language-based scene understanding so that environmental information can be shared among robots, and a Planning Domain Definition Language (PDDL) planner then generates a sequence of actions for each robot. Jena TDB is used to store the acquired knowledge. The proposed method is validated in simulation with one unmanned aerial vehicle and three ground vehicles.
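As an illustration of how such a pipeline can hand scene knowledge to a symbolic planner, the following Python sketch converts detected scene facts into a PDDL problem string. It is a minimal, hypothetical example: the domain name, object names, and predicates are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch (not the authors' code): turning detected scene facts into a
# PDDL problem that a symbolic planner can consume. Object and predicate names
# below are illustrative assumptions.

def scene_to_pddl_problem(detections, goal):
    """Build a PDDL problem string from scene detections.

    detections: list of (object_name, location) pairs produced by the
                scene understanding engine, e.g. [("victim1", "room3")].
    goal:       a goal predicate string, e.g. "(at ugv1 room3)".
    """
    objects = " ".join(sorted({name for name, _ in detections} |
                              {loc for _, loc in detections}))
    init_facts = "\n    ".join(f"(at {name} {loc})" for name, loc in detections)
    return (
        "(define (problem scene-problem)\n"
        "  (:domain heterogeneous-robots)\n"
        f"  (:objects {objects})\n"
        f"  (:init\n    {init_facts})\n"
        f"  (:goal {goal}))\n"
    )


if __name__ == "__main__":
    pddl = scene_to_pddl_problem([("victim1", "room3"), ("ugv1", "room1")],
                                 "(at ugv1 room3)")
    print(pddl)
```

The resulting problem string, paired with a matching domain file, could then be passed to any off-the-shelf PDDL planner to produce the action sequence for each robot.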


Author(s): Niall O'Mahony, Sean Campbell, Lenka Krpalkova, Daniel Riordan, Joseph Walsh, ...

Agronomy, 2021, Vol 11 (2), pp. 347
Author(s): Anand Koirala, Kerry B. Walsh, Zhenglin Wang

Machine vision from ground vehicles is used to estimate fruit load on trees, but a correction is required for fruit occluded by foliage or by other fruit. This correction is normally a manually estimated factor (the reference method). It was hypothesised that canopy images hold information related to the number of occluded fruit. Several image features, such as the proportion of fruit that were partly occluded, were used to train random forest and multi-layer perceptron (MLP) models that estimate a correction factor per tree. In another approach, deep learning convolutional neural networks (CNNs) were trained directly against the harvest count of fruit per tree. An R² of 0.98 (n = 98 trees) was achieved for the correlation between the fruit count predicted by a random forest model and the ground-truth fruit count, compared with an R² of 0.68 for the reference method. The error in predicting whole-orchard (880 trees) fruit load against the packhouse count was 1.6% for the MLP model and 13.6% for the reference method. However, the performance of these models on data from another season was at best equivalent to, and generally poorer than, the reference method, indicating that training on a single season of data is insufficient for developing a robust model.
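To make the correction-factor approach concrete, here is a minimal Python sketch using scikit-learn: a random forest regressor maps per-tree canopy features to an occlusion correction factor. The feature names and the synthetic data are assumptions for illustration; this is not the authors' code or dataset.

```python
# Minimal sketch (assumed features, synthetic data): learning a per-tree
# occlusion correction factor from canopy image features with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical per-tree features: visible fruit count, fraction of fruit
# partly occluded, and canopy area (m^2).
X = rng.uniform(low=[20, 0.0, 5.0], high=[300, 0.6, 25.0], size=(98, 3))

# Synthetic "true" correction factor = harvest count / machine-vision count,
# assumed here to grow with the partly-occluded fraction.
y = 1.0 + 1.5 * X[:, 1] + rng.normal(0.0, 0.05, size=98)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Correct a machine-vision count for a new tree.
visible_count = 150
features = np.array([[visible_count, 0.35, 14.0]])
corrected = visible_count * model.predict(features)[0]
print(f"corrected fruit count: {corrected:.0f}")
```

In practice the target factor would come from matched machine-vision and harvest counts for a set of calibration trees, rather than the synthetic values used here.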


PLoS ONE, 2021, Vol 16 (5), pp. e0251339
Author(s): Qian Xu, Gang Wang, Ying Li, Ling Shi, Yaxin Li

Unmanned ground vehicles (UGVs) are an important application of artificial intelligence, and deep learning-based object detection is widely used for UGV environmental perception. The Faster region-based convolutional neural network (Faster R-CNN) achieves good experimental results, but the exploration space of its region proposal network (RPN) is restricted by the network's formulation. In this paper, a boosted RPN (BRPN) with three improvements is developed to address this problem. First, a novel enhanced pooling network is designed so that the BRPN can adapt to objects of different shapes. Second, the BRPN loss function is reformulated to learn from negative samples, and the grey wolf optimizer (GWO) is used to tune the parameters of the improved loss function, further improving its performance. Third, a novel GA-SVM classifier is applied to strengthen the classification capacity. The BRPN is evaluated on the PASCAL VOC 2007, VOC 2012 and KITTI datasets, where the proposed detector obtains excellent experimental results.
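As an illustration of the optimization step, the sketch below implements a standard grey wolf optimizer in Python and uses it to tune two hypothetical loss weights. The objective function is a stand-in for a validation loss; the actual BRPN loss and its parameters are not reproduced here.

```python
# Minimal sketch of a grey wolf optimizer (GWO) tuning two hypothetical
# loss weights. The objective is a placeholder, not the paper's BRPN loss.
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iters=50, seed=0):
    """Minimize `objective` over a box; bounds is an array of [lo, hi] rows."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))

    for t in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        # Alpha, beta, delta: the three best wolves so far this iteration.
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 - 2.0 * t / n_iters  # decreases linearly from 2 to 0

        new_wolves = []
        for w in wolves:
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - w)
                candidates.append(leader - A * D)
            new_wolves.append(np.clip(np.mean(candidates, axis=0), lo, hi))
        wolves = np.array(new_wolves)

    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)]


# Stand-in objective: pretend validation loss as a function of two loss weights.
def val_loss(params):
    w_pos, w_neg = params
    return (w_pos - 1.0) ** 2 + (w_neg - 0.3) ** 2

best = gwo(val_loss, bounds=np.array([[0.0, 2.0], [0.0, 1.0]]))
print("best loss weights:", best)
```

In the paper's setting, the objective evaluated by the GWO would be the detector's performance under a given parameterisation of the improved BRPN loss rather than the toy quadratic used above.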


Author(s): Stellan Ohlsson
2019, Vol 53 (3), pp. 281-294
Author(s): Jean-Michel Foucart, Augustin Chavanne, Jérôme Bourriau

Many contributions from Artificial Intelligence (AI) are envisaged in medicine. In orthodontics, several automated solutions have been available for some years in X-ray imaging (automated cephalometric analysis, automated airway analysis) and, more recently, for digital models (automatic model analysis, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to evaluate the reliability of automated model analysis with respect to both digitisation and segmentation. Comparing the model analysis results obtained automatically with those obtained by several orthodontists demonstrates the reliability of the automatic analysis: the measurement error ultimately ranges between 0.08 and 1.04 mm, which is not significant and is comparable to the inter-observer measurement errors reported in the literature. These results open new perspectives on the contribution of AI to orthodontics which, based on deep learning and big data, should in the medium term enable a shift towards a more preventive and more predictive orthodontics.


2020
Author(s): L Pennig, L Lourenco Caldeira, C Hoyer, L Görtz, R Shahzad, ...