kRadar++: Coarse-to-Fine FMCW Scanning Radar Localisation

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6002 ◽  
Author(s):  
Daniele De Martini ◽  
Matthew Gadd ◽  
Paul Newman

This paper presents a novel two-stage system which integrates topological localisation candidates from a radar-only place recognition system with precise pose estimation using spectral landmark-based techniques. We show that the recently published radar place recognition (RPR) and scan-matching sub-systems are complementary, in a style reminiscent of the mapping and localisation systems underpinning visual teach-and-repeat (VTR), which has been demonstrated robustly over the last decade. Offline experiments are conducted on the most extensive radar-focused urban autonomy dataset available to the community, with performance comparing favourably with, and even rivalling, alternative state-of-the-art radar localisation systems. Specifically, we demonstrate the long-term suitability of both the approach and the sensing technology itself for autonomous navigation. We suggest a range of sensible methods for tuning the system, all of which are suitable for online operation. For both tuning regimes, we achieve, over the course of a month of localisation trials against a single static map, high recall at high precision and much reduced variance in erroneous metric pose estimation. As such, this work is a necessary first step towards a radar teach-and-repeat (RTR) system and the enablement of autonomy across extreme changes in appearance and inclement conditions.
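A minimal sketch of the coarse-to-fine pattern the abstract describes, not the paper's implementation: place-recognition descriptors (here plain embedding vectors compared by cosine similarity) propose candidate map scans, and a landmark-alignment step (here a Kabsch fit on pre-matched 2-D landmarks, standing in for the paper's spectral landmark technique) produces the metric pose. All function names, the quality measure, and the rejection threshold are assumptions.

```python
import numpy as np

def topological_candidates(query_desc, map_descs, k=3):
    """Coarse stage: rank map embeddings by cosine similarity to the query."""
    q = query_desc / np.linalg.norm(query_desc)
    m = map_descs / np.linalg.norm(map_descs, axis=1, keepdims=True)
    return np.argsort(-(m @ q))[:k]

def refine_pose(live_landmarks, map_landmarks):
    """Fine stage: rigidly align corresponded 2-D landmark sets (Kabsch)
    and report the negative mean residual as a match quality."""
    mu_l, mu_m = live_landmarks.mean(0), map_landmarks.mean(0)
    H = (live_landmarks - mu_l).T @ (map_landmarks - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_l
    residual = np.linalg.norm(live_landmarks @ R.T + t - map_landmarks,
                              axis=1).mean()
    return (R, t), -residual

def localise(query_desc, live_landmarks, map_descs, map_landmark_sets,
             k=3, max_residual=0.5):
    """Coarse-to-fine loop: refine each place-recognition candidate and
    keep the best metric pose, rejecting weak matches to keep precision high."""
    best_pose, best_quality = None, -np.inf
    for idx in topological_candidates(query_desc, map_descs, k=k):
        pose, quality = refine_pose(live_landmarks, map_landmark_sets[idx])
        if quality > best_quality:
            best_pose, best_quality = pose, quality
    return best_pose if -best_quality <= max_residual else None
```

Rejecting poses above the residual threshold trades recall for precision, which matches the paper's reported emphasis on high recall at high precision against a single static map.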

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4064
Author(s):  
Can Li ◽  
Ping Chen ◽  
Xin Xu ◽  
Xinyu Wang ◽  
Aijun Yin

In this work, we propose a novel coarse-to-fine method for object pose estimation, coupled with admittance control, to facilitate robotic shaft-in-hole assembly. Since traditional approaches that locate the hole by force sensing are time-consuming, we employ 3D vision to estimate the axis pose of the hole. The robot can thus locate the target hole in both position and orientation and move the shaft into the hole along the axis orientation. In our method, the raw point cloud of the hole is first processed to acquire keypoints. A coarse axis is then extracted according to the geometric constraints between the surface normals and the axis. Finally, the coarse axis is refined to achieve higher precision. Practical experiments verified the effectiveness of the axis pose estimation, and the assembly strategy combining axis pose estimation with admittance control was successfully applied to robotic shaft-in-hole assembly.
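The geometric constraint the abstract mentions can be made concrete: for points on a cylindrical hole wall, every surface normal is perpendicular to the hole axis, so the axis is the direction least represented by the normals, i.e. the eigenvector of the normal scatter matrix with the smallest eigenvalue. A minimal NumPy sketch under that assumption; keypoint extraction and the paper's actual refinement procedure are not specified in the abstract, so the reweighting scheme below is illustrative only.

```python
import numpy as np

def coarse_axis_from_normals(normals):
    """On a cylindrical bore every surface normal is perpendicular to the
    axis, so the axis is the eigenvector of N^T N with the smallest
    eigenvalue (the direction least represented by the normals)."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(n.T @ n)   # ascending eigenvalues
    return eigvecs[:, 0]

def refine_axis(normals, axis, iters=5, sigma=np.sin(np.deg2rad(10))):
    """Refinement sketch: iteratively down-weight normals that are not
    perpendicular to the current axis estimate (e.g. chamfer edges or
    noise), then re-solve the weighted eigenproblem."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    for _ in range(iters):
        r = n @ axis                             # zero for a perfect bore
        w = np.exp(-(r / sigma) ** 2)            # soft outlier rejection
        _, eigvecs = np.linalg.eigh((n * w[:, None]).T @ n)
        new_axis = eigvecs[:, 0]
        axis = new_axis if new_axis @ axis >= 0 else -new_axis
    return axis
```

With the axis direction and a point on it, the shaft insertion direction for the admittance controller follows directly.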


2021 ◽  
Vol 15 (03) ◽  
pp. 337-357
Author(s):  
Alexander Julian Golkowski ◽  
Marcus Handte ◽  
Peter Roch ◽  
Pedro J. Marrón

For many application areas, such as autonomous navigation, the ability to accurately perceive the environment is essential. A wide variety of well-researched sensor systems are available for this purpose that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity, and much work has consequently been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup; the rationale behind, and the impact of, choosing that setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is little general guidance beyond isolated setups that worked for a specific robot. To close this gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. We present the results of an experimental analysis in which we use a fixed software setup to estimate the distance to an object while systematically changing the camera setup, varying the three main parameters of the physical configuration, namely the angle and distance between the cameras and their field of view, as well as a softer parameter, the image resolution. Based on the results, we derive several guidelines on how to choose these parameters for a given application.
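For intuition on why these parameters matter, the standard rectified-stereo model relates them directly to ranging error: depth is Z = f·B/d for focal length f (in pixels), baseline B, and disparity d, so a first-order disparity error Δd produces a depth error of roughly Z²·Δd/(f·B). The sketch below uses only this parallel-camera model; it does not capture the verged-camera angles the paper also studies, and the example numbers are assumptions.

```python
import numpy as np

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Classic rectified-stereo model: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disparity_err_px=0.25):
    """First-order depth uncertainty: dZ ~ Z^2 / (f * B) * dd.
    A wider baseline or higher resolution (larger f in pixels)
    both shrink the error, especially at long range."""
    return z_m ** 2 / (f_px * baseline_m) * disparity_err_px

# Example: a 640-px-wide image with a 60-degree horizontal field of view.
f_px = (640 / 2) / np.tan(np.deg2rad(60) / 2)    # about 554 px
for baseline in (0.06, 0.12, 0.24):              # baselines in metres
    err = depth_error(f_px, baseline, z_m=3.0)   # error at 3 m range
    print(f"B = {baseline:.2f} m -> depth error ~ {err:.3f} m")
```

Doubling the baseline halves the error at a given range, while widening the field of view at fixed resolution shrinks f in pixels and therefore worsens ranging accuracy, which is one concrete trade-off behind the guidelines the paper derives.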


Author(s):  
Yamini G. ◽  
Gopinath Ganapathy

Through the integration of advanced algorithms and smart sensing technology into healthcare services, elderly and sick people could gain substantial medical benefit from the automatic recognition of their activities. Human activity recognition (HAR) has been an active research area for the past decades, promoting the automatic recognition of physical activities. The main aim of HAR is to obtain and analyse the physical activities of a person, typically through several built-in sensors or from video data. The resulting information helps prevent significant risks and can avert unfortunate events, or raise alerts when they happen. However, there is no standard categorisation of human activities and no canonical description of the events to be recognised. The objective of this paper is to propose an IoT-based healthcare information system in which improved activity recognition is the primary focus. Because human activities are diverse, it is necessary to choose appropriate sensors, and to place them effectively, for recognising specific activities. A major challenge is choosing the appropriate sensor for a particular situation and gathering data under the relevant circumstances: owing to the tight coupling between sensors and the activities they can monitor, no single solution is feasible for every HAR problem. A distinguishing feature of this paper is that it includes future users' perspectives.
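The abstract proposes a system rather than an algorithm, but whichever sensors are chosen, the HAR pipeline it alludes to typically starts the same way: segment the raw stream into overlapping windows and compute per-window features before classification. A generic illustration, not from the paper; the sampling rate, window length, overlap, and feature set are all assumptions.

```python
import numpy as np

def sliding_windows(signal, rate_hz=50, win_s=2.0, overlap=0.5):
    """Segment a tri-axial sensor stream (shape [T, 3]) into overlapping
    fixed-length windows, the usual first step before classifying
    activities such as walking, sitting, or falling."""
    size = int(rate_hz * win_s)
    step = max(int(size * (1 - overlap)), 1)
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

def window_features(window):
    """Simple per-axis statistics commonly used as HAR features:
    mean, standard deviation, and mean absolute first difference."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

# Example: 60 s of synthetic accelerometer data -> feature matrix.
stream = np.random.randn(60 * 50, 3)
features = np.stack([window_features(w) for w in sliding_windows(stream)])
```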


Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1889 ◽  
Author(s):  
Shuang Liu ◽  
Hongli Xu ◽  
Yang Lin ◽  
Lei Gao

Autonomous underwater vehicles (AUVs) play very important roles in underwater missions, yet the reliability of their automated recovery has still not been well addressed. We propose a vision-based framework for the automated recovery of an AUV by another AUV in shallow water. The framework comprises a detection phase, for the robust detection of underwater landmarks mounted on the docking station, and a pose-estimation phase, for estimating the relative pose between the AUV and the landmarks. We propose a Laplacian-of-Gaussian-based coarse-to-fine blockwise (LCB) method for landmark detection that overcomes ambient light and non-uniform light spreading, the two main problems in shallow water, as well as a novel pose-estimation method for the practical case where landmarks are broken or covered by biofouling. In our experiments, the proposed LCB method outperforms the state-of-the-art method in remote landmark detection, and combining the vision-based framework with acoustic sensors in field experiments demonstrates its effectiveness for the automated recovery of AUVs.
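The coarse-to-fine blockwise idea can be sketched generically: run a Laplacian-of-Gaussian blob detector on a downsampled image to get a cheap, noise-tolerant coarse hit, then repeat the detection at full resolution only inside a block around that hit. This shows the general pattern, not the paper's LCB method; the scale factor, sigma, and block size are assumptions.

```python
import numpy as np
from scipy import ndimage

def log_response(img, sigma):
    """Laplacian-of-Gaussian response; bright blobs produce strong minima."""
    return ndimage.gaussian_laplace(img.astype(np.float32), sigma)

def coarse_to_fine_detect(img, coarse_scale=4, sigma=3.0, block=64):
    """Locate the strongest bright blob coarsely on a downsampled copy,
    then refine within the corresponding full-resolution block."""
    # Coarse pass: downsampling averages out noise and cuts compute.
    small = img[::coarse_scale, ::coarse_scale]
    resp = log_response(small, sigma / coarse_scale)
    cy, cx = np.unravel_index(np.argmin(resp), resp.shape)
    cy, cx = cy * coarse_scale, cx * coarse_scale

    # Fine pass: restrict the full-resolution search to a block
    # around the coarse hit.
    y0, x0 = max(cy - block // 2, 0), max(cx - block // 2, 0)
    patch = img[y0:y0 + block, x0:x0 + block]
    fresp = log_response(patch, sigma)
    py, px = np.unravel_index(np.argmin(fresp), fresp.shape)
    return y0 + py, x0 + px
```

Restricting the fine pass to a block keeps the refinement robust to the non-uniform illumination elsewhere in the frame, which is the motivation behind blockwise processing in shallow water.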

