Robot Perception
Recently Published Documents


TOTAL DOCUMENTS: 133 (last five years: 36)
H-INDEX: 13 (last five years: 3)

2022 · Vol 15
Author(s): Jinsheng Yuan, Wei Guo, Fusheng Zha, Pengfei Wang, Mantian Li, et al.

The hippocampus and its accessory structures are the main brain areas for spatial cognition. They integrate paths based on motion information to form a cognition of the environment, and thereby realize positioning and navigation. Learning from the hippocampal mechanism is a promising direction for robot-perception research, so it is important to build a computational method that both conforms to biological principles and is easy to implement on a robot. This paper proposes a bionic cognition model and method for mobile robots that realizes precise path integration and spatial cognition. Our research can provide a basis for environmental cognition and autonomous navigation in bionic robots.
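The path integration the abstract describes, accumulating motion information into a position estimate, can be sketched as simple dead reckoning. The function and sample motions below are illustrative assumptions, not the paper's bionic model:

```python
import math

def integrate_path(pose, motions):
    """Dead-reckoning path integration: accumulate (dtheta, dist)
    motion increments into an (x, y, heading) pose estimate."""
    x, y, theta = pose
    for dtheta, dist in motions:
        theta = (theta + dtheta) % (2 * math.pi)  # turn first
        x += dist * math.cos(theta)               # then advance
        y += dist * math.sin(theta)
    return x, y, theta

# Drive a unit square: four 1 m legs with 90-degree left turns,
# which should bring the robot back to the origin.
pose = integrate_path((0.0, 0.0, 0.0),
                      [(0.0, 1.0), (math.pi / 2, 1.0),
                       (math.pi / 2, 1.0), (math.pi / 2, 1.0)])
```

In a real robot the motion increments would come from odometry or inertial sensing, and drift would have to be corrected by loop-closing cues, which is where the hippocampus-inspired environmental cognition comes in.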


2021 · Vol 2136 (1) · pp. 012053
Author(s): Zeyu Chen

Abstract With the rapid growth of the elderly population, reducing and dealing with falls among the elderly has been a research focus for decades. Falls cannot be completely eliminated from daily life and activities, but detecting a fall in time can protect the elderly from injury as much as possible. This article uses the TurtleBot robot and the ROS robot operating system, combined with simultaneous localization and mapping (SLAM), Monte Carlo localization, A* path planning, the dynamic window approach, and indoor map navigation. The YOLO network is trained on standing and falling data sets, and the YOLOv4 object detection algorithm is combined with the robot perception algorithm to achieve fall detection on the TurtleBot robot, evaluated with average precision, precision, recall, and other metrics.
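The precision and recall metrics used to evaluate the detector are computed by matching predicted boxes to ground truth at an IoU threshold. This is a generic sketch with hypothetical boxes, not the article's evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth;
    a prediction is a true positive if its best unmatched ground
    truth overlaps with IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best = max(range(len(gts)), key=lambda i: iou(p, gts[i]),
                   default=None)
        if best is not None and best not in matched \
                and iou(p, gts[best]) >= thr:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp   # unmatched predictions
    fn = len(gts) - tp     # missed ground truths
    return tp / (tp + fp), tp / (tp + fn)

# One correct detection and one false alarm against a single fall.
precision, recall = precision_recall(
    preds=[(0, 0, 10, 10), (20, 20, 30, 30)],
    gts=[(1, 1, 10, 10)])
```

Average precision then summarizes this precision-recall trade-off across confidence thresholds.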


2021 · pp. 103975
Author(s): Niki Efthymiou, Panagiotis P. Filntisis, Petros Koutras, Antigoni Tsiami, Jack Hadfield, et al.

Sensors · 2021 · Vol 21 (21) · pp. 6956
Author(s): Chao Fan, Zhenyu Yin, Fulong Xu, Anying Chai, Feiqing Zhang

In recent years, self-supervised monocular depth estimation has gained popularity among researchers because it uses only a single camera, at a much lower cost than acquiring depth directly with laser sensors. Although monocular self-supervised methods can produce dense depth maps, their accuracy needs further improvement for applications such as autonomous driving and robot perception. In this paper, we improve self-supervised monocular depth estimation with two new ideas: (1) a soft attention module and (2) a hard attention strategy. We integrate the soft attention module into the model architecture to enhance feature extraction in both the spatial and channel dimensions, adding only a small number of parameters. Unlike traditional fusion approaches, we use the hard attention strategy to enhance the fusion of the generated multi-scale depth predictions. Experiments demonstrate that our method achieves the best self-supervised performance on both the standard KITTI benchmark and the Make3D dataset.
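As a rough illustration of the channel half of such soft attention (the abstract's module also covers the spatial dimension), here is a minimal squeeze-and-excitation-style sketch in NumPy; the weights, shapes, and reduction ratio are hypothetical, not the paper's architecture:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention:
    global-average-pool each channel into a descriptor, pass it
    through a tiny bottleneck MLP, and rescale the feature map
    with the resulting per-channel sigmoid gates."""
    squeeze = feat.mean(axis=(1, 2))              # (C,) channel descriptor
    hidden = np.maximum(0.0, w1 @ squeeze)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return feat * gate[:, None, None]             # reweight channels

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))  # (C, H, W) feature map
w1 = rng.standard_normal((2, 8))       # reduction ratio 4: 8 -> 2
w2 = rng.standard_normal((8, 2))       # expand back: 2 -> 8
out = channel_attention(feat, w1, w2)
```

Because the module only adds the two small bottleneck matrices, its parameter cost stays low, which matches the abstract's claim of adding only a small number of parameters.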


2021
Author(s): Patrick Mania, Franklin Kenghagho Kenfack, Michael Neumann, Michael Beetz

Author(s): Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, Dacheng Tao

Affordance detection refers to identifying the potential action possibilities of objects in an image, which is an important ability for robot perception and manipulation. To empower robots with this ability in unseen scenarios, we consider the challenging one-shot affordance detection problem: given a support image that depicts the action purpose, all objects in a scene with the common affordance should be detected. To this end, we devise a One-Shot Affordance Detection (OS-AD) network that first estimates the purpose and then transfers it to help detect the common affordance in all candidate images. Through collaborative learning, OS-AD can capture the common characteristics of objects sharing the same underlying affordance and learn a good adaptation capability for perceiving unseen affordances. In addition, we build a Purpose-driven Affordance Dataset (PAD) by collecting and labeling 4k images spanning 31 affordance categories and 72 object categories. Experimental results demonstrate the superiority of our model over previous representative ones in terms of both objective metrics and visual quality. The benchmark suite is available at ProjectPage.


2021
Author(s): Guang Chen, Yinlong Liu, Jinhu Dong, Lijun Zhang, Haotian Liu, et al.

2021 · Vol 8
Author(s): Sebastian Zörner, Emy Arts, Brenda Vasiljevic, Ankit Srivastava, Florian Schmalzl, et al.

As robots become more advanced and capable, developing trust is an important factor in human-robot interaction and cooperation. However, since multiple environmental and social factors can influence trust, more elaborate scenarios and methods are needed to measure human-robot trust. A widely used measurement of trust in social science is the investment game. In this study, we propose a scaled-up, immersive, science-fiction Human-Robot Interaction (HRI) scenario for intrinsically motivated human-robot collaboration, built upon the investment game and aimed at adapting it to measure human-robot trust. For this purpose, we utilize two Neuro-Inspired COmpanion (NICO) robots and a projected scenery. We investigate the applicability of our space-mission experiment design to measuring trust and the impact of non-verbal communication. We observe a correlation of 0.43 (p=0.02) between self-assessed trust and trust measured from the game, and a positive impact of non-verbal communication on trust (p=0.0008) and on robot perception for anthropomorphism (p=0.007) and animacy (p=0.00002). We conclude that our scenario is an appropriate method to measure trust in human-robot interaction and to study how non-verbal communication influences a human's trust in robots.
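The reported correlation of 0.43 between self-assessed trust and trust measured from the game is a standard Pearson correlation. A minimal sketch, with entirely hypothetical data (a 1-7 self-report scale and an invested fraction of the endowment, not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical participants: self-assessed trust (1-7 scale) vs. the
# fraction of the endowment invested in the robot during the game.
self_report = [2, 3, 3, 4, 5, 5, 6, 7]
invested = [0.1, 0.2, 0.4, 0.3, 0.6, 0.5, 0.7, 0.9]
r = pearson_r(self_report, invested)
```

In the investment-game paradigm, the amount a participant hands over to the trustee (here, the robot) serves as the behavioral trust measure that this coefficient relates to the questionnaire score.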

