Multimodal Open-Domain Conversations with the Nao Robot

Author(s):  
Kristiina Jokinen ◽  
Graham Wilcock
2013 ◽ Vol 24 (5) ◽ pp. 1051-1060
Author(s):  
Fei CHEN ◽  
Yi-Qun LIU ◽  
Chao WEI ◽  
Yun-Liang ZHANG ◽  
Min ZHANG ◽  
...  

2021 ◽ Vol 33 (3) ◽ pp. 033606
Author(s):  
Elena V. Shulepova ◽  
Mikhail A. Sheremet ◽  
Hakan F. Oztop

Drones ◽ 2021 ◽ Vol 5 (3) ◽ pp. 66
Author(s):  
Rahee Walambe ◽  
Aboli Marathe ◽  
Ketan Kotecha

Object detection in uncrewed aerial vehicle (UAV) images has been a longstanding challenge in the field of computer vision. Object detection in drone images is particularly complex because objects appear at widely varying scales, such as humans, buildings, water bodies, and hills. In this paper, we present an implementation of ensemble transfer learning to enhance the performance of base models for multiscale object detection in drone imagery. Combined with a test-time augmentation pipeline, the algorithm merges the outputs of different models and applies voting strategies to detect objects of various scales in UAV images. The data augmentation also mitigates the scarcity of drone image datasets. We experimented with two open-domain datasets: the VisDrone dataset and the AU-AIR dataset. Our approach is more practical and efficient than training custom models on entire datasets, as it relies on transfer learning and a two-level voting-strategy ensemble. The experiments show a significant improvement in mAP on both the VisDrone and AU-AIR datasets when the ensemble transfer learning method is employed. Furthermore, the voting strategies increase the reliability of the ensemble, as the end-user can select and trace the effect of the mechanism on the bounding-box predictions.
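The consensus-voting idea described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the IoU threshold, the `min_votes` rule, and all function names here are assumptions chosen for illustration. A box proposed by one base model is kept only if enough models produced an overlapping box with the same label.

```python
# Toy consensus-voting ensemble for object detectors (illustrative only).
# Each model's output is a list of (box, label) pairs, box = (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def consensus_vote(model_outputs, iou_thr=0.5, min_votes=2):
    """Keep a detection only if at least `min_votes` models produced an
    overlapping box (IoU >= iou_thr) carrying the same label."""
    kept = []
    for i, dets in enumerate(model_outputs):
        for box, label in dets:
            # Count one vote per *other* model that agrees, plus the proposer.
            votes = 1 + sum(
                any(iou(box, b2) >= iou_thr and label == l2 for b2, l2 in other)
                for j, other in enumerate(model_outputs) if j != i
            )
            # Deduplicate: skip boxes already represented in the kept set.
            duplicate = any(
                iou(box, kb) >= iou_thr and label == kl for kb, kl in kept
            )
            if votes >= min_votes and not duplicate:
                kept.append((box, label))
    return kept
```

With two models, `min_votes=2` behaves like a strict "both must agree" rule; lowering it toward 1 approaches a simple union of all detections, which trades precision for recall.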


Author(s):  
Megan Strait ◽  
Florian Lier ◽  
Jasmin Bernotat ◽  
Sven Wachsmuth ◽  
Friederike Eyssel ◽  
...  

Robotica ◽ 2020 ◽ pp. 1-14
Author(s):  
Chen Hao ◽  
Liu Chengju ◽  
Chen Qijun

SUMMARY Self-localization in highly dynamic environments remains a challenging problem for humanoid robots with limited computational resources. In this paper, we propose a dual-channel unscented particle filter (DC-UPF)-based localization method to address it. A key novelty of this approach is a dual-channel switch mechanism in the measurement-update procedure of the particle filter, which copes with sparse visual features during motion, and it leverages data from a camera, a walking odometer, and an inertial measurement unit. Extensive experiments with a NAO robot demonstrate that DC-UPF outperforms UPF and Monte Carlo localization with regard to accuracy.
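The particle-filter measurement update that DC-UPF builds on can be sketched generically. The following is a minimal 1-D illustration under an assumed Gaussian measurement model, not the DC-UPF itself (the dual-channel switching and unscented proposal are omitted): particles are reweighted by the likelihood of the observation, then resampled.

```python
import math
import random

def pf_update(particles, weights, z, meas_std=1.0):
    """Generic particle-filter measurement update (1-D illustration):
    reweight each particle by the Gaussian likelihood of observation z,
    normalize, and resample. Returns (new_particles, uniform_weights)."""
    # Likelihood weighting.
    new_w = [w * math.exp(-0.5 * ((z - x) / meas_std) ** 2)
             for x, w in zip(particles, weights)]
    total = sum(new_w)
    if total == 0.0:  # degenerate case: fall back to uniform weights
        new_w = [1.0 / len(particles)] * len(particles)
    else:
        new_w = [w / total for w in new_w]

    # Stratified resampling: one draw per equal-width stratum of [0, 1).
    n = len(particles)
    cumulative, cums = 0.0, []
    for w in new_w:
        cumulative += w
        cums.append(cumulative)
    resampled, j = [], 0
    for i in range(n):
        p = (random.random() + i) / n
        while j < n - 1 and cums[j] < p:
            j += 1
        resampled.append(particles[j])
    return resampled, [1.0 / n] * n
```

Particles far from the observation receive near-zero weight and are eliminated during resampling; a real localization filter would interleave this with a motion-model prediction step driven by odometry and IMU data.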


2021
Author(s):  
Weizhao Li ◽  
Feng Ge ◽  
Yi Cai ◽  
Da Ren