Obtaining Liftoff Indoors: Autonomous Navigation in Confined Indoor Environments

2013 ◽  
Vol 20 (4) ◽  
pp. 40-48 ◽  
Author(s):  
Shaojie Shen ◽  
Nathan Michael ◽  
Vijay Kumar


2021 ◽
Vol 15 (03) ◽  
pp. 337-357
Author(s):  
Alexander Julian Golkowski ◽  
Marcus Handte ◽  
Peter Roch ◽  
Pedro J. Marrón

For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems is available for detecting obstacles or navigation targets. Stereo cameras have emerged as a particularly versatile sensing technology in this regard due to their low hardware cost and high fidelity, and much work has been done to integrate them into mobile robots. However, the existing literature focuses on the concepts and algorithms used to implement the desired robot functions on top of a given camera setup; the rationale for and impact of choosing that setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is little general guidance beyond isolated setups that worked for a specific robot. To close this gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. We present the results of an experimental analysis in which we use a fixed software setup to estimate the distance to an object while systematically changing the camera setup. We vary the three main parameters of the physical camera setup, namely the angle between the cameras, the distance between them and their field of view, as well as a softer parameter, the image resolution. Based on the results, we derive several guidelines on how to choose these parameters for a given application.
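
For context, the distance estimate such a system produces follows directly from rectified two-camera geometry. Below is a minimal sketch, assuming an ideal rectified, parallel (non-verged) pair; the function name and the numbers are illustrative, not from the paper. It makes visible why the parameters studied above interact: baseline and focal length (which encodes field of view at a given resolution) scale the disparity, and disparity resolution bounds the range resolution.

import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair.

    disparity_px : disparity in pixels (offset of the same point
                   between the left and right image)
    focal_px     : focal length in pixels (set by FOV and resolution)
    baseline_m   : distance between the two camera centres in metres
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Zero disparity corresponds to a point at infinity.
    return np.where(disparity_px > 0,
                    focal_px * baseline_m / disparity_px,
                    np.inf)

if __name__ == "__main__":
    # A wider baseline or narrower FOV yields more disparity for the
    # same object, improving range resolution but shrinking the overlap
    # region in which depth can be measured at all.
    print(depth_from_disparity(32.0, focal_px=700.0, baseline_m=0.12))  # ~2.6 m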


Agriculture ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 954
Author(s):  
Abhijeet Ravankar ◽  
Ankit A. Ravankar ◽  
Arpit Rawankar ◽  
Yohei Hoshino

In recent years, autonomous robots have been used extensively to automate several vineyard tasks, and autonomous navigation is an indispensable component of such field robots. Autonomous and safe navigation has been well studied in indoor environments and many algorithms have been proposed. However, unlike structured indoor environments, vineyards pose special challenges for robot navigation; in particular, safe navigation is crucial to avoid damaging the grapes. In this regard, we propose an algorithm that enables autonomous and safe robot navigation in vineyards. The proposed algorithm relies on data from a Lidar sensor and does not require GPS. In addition, it can avoid dynamic obstacles in the vineyard while smoothing the robot's trajectories; the curvature of the trajectories can be controlled, keeping a safe distance from both the crop and the dynamic obstacles. We have tested the algorithm both in simulation and with robots in an actual vineyard. The results show that the robot can safely navigate the lanes of the vineyard and smoothly avoid dynamic obstacles such as moving people without stopping abruptly or executing sharp turns. The algorithm runs in real time and can easily be integrated into robots deployed in vineyards.
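
The abstract does not detail the algorithm itself, so the following is a simplified, hypothetical illustration of the underlying GPS-free idea: from a single 2D Lidar scan, estimate the lateral position of the left and right vine rows and steer toward the lane centreline. All names and thresholds are assumptions for illustration, not the authors' method.

import numpy as np

def lane_center_offset(ranges, angles, max_range=5.0):
    """Lateral offset of the lane centreline from the robot (metres).

    ranges : Lidar range readings (m)
    angles : corresponding beam angles (rad), 0 = straight ahead,
             positive to the robot's left
    """
    x = ranges * np.cos(angles)                 # forward distance
    y = ranges * np.sin(angles)                 # lateral offset (left > 0)
    valid = (ranges < max_range) & (x > 0.2)    # keep points ahead of the robot
    left = y[valid & (y > 0)]                   # hits on the left row
    right = y[valid & (y < 0)]                  # hits on the right row
    if len(left) == 0 or len(right) == 0:
        return 0.0                              # a row edge is lost; hold course
    # Medians are robust to foliage sticking out of a row.
    return 0.5 * (np.median(left) + np.median(right))

def steering(offset, k=0.8, max_curvature=0.5):
    """Proportional steering toward the centreline (positive = turn left).

    Bounding the curvature is what keeps the turns smooth and maintains
    a safe distance from the crop, the property the paper emphasises.
    """
    return float(np.clip(k * offset, -max_curvature, max_curvature))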


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guangbing Zhou ◽  
Jing Luo ◽  
Shugong Xu ◽  
Shunqing Zhang ◽  
Shige Meng ◽  
...  

Purpose: Indoor localization is a key tool for robot navigation in indoor environments. Traditionally, robot navigation has depended on a single sensor for autonomous localization. To enhance the navigation performance of mobile robots, this paper proposes a multiple data fusion (MDF) method for indoor environments.

Design/methodology/approach: Data from multiple sensors, i.e. an inertial measurement unit, an odometer and a laser radar, are collected. An extended Kalman filter (EKF) is then used to fuse these data, so that the mobile robot can perform autonomous localization in complex indoor environments according to the proposed EKF-based MDF method.

Findings: The proposed method has been experimentally verified in different indoor environments, i.e. an office, a passageway and an exhibition hall. Experimental results show that the EKF-based MDF method achieves the best localization performance and robustness during navigation.

Originality/value: Indoor localization precision depends chiefly on the data collected from multiple sensors. The proposed method fuses these collected data in a principled way and can guide a mobile robot in autonomous navigation (AN) in indoor environments. The output of this paper can therefore be used for AN in complex and unknown indoor environments.
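
As a minimal sketch of the fusion step described above: an EKF alternates a prediction driven by odometry with a correction from an absolute observation such as a laser scan match. The state here is a planar pose (x, y, theta); the motion model, Jacobian and noise matrices are illustrative, not the paper's exact formulation.

import numpy as np

def ekf_predict(mu, P, v, w, dt, Q):
    """Propagate the pose with a unicycle model driven by odometry (v, w)."""
    x, y, th = mu
    mu_pred = np.array([x + v * dt * np.cos(th),
                        y + v * dt * np.sin(th),
                        th + w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])        # Jacobian of the motion model
    return mu_pred, F @ P @ F.T + Q

def ekf_update(mu, P, z, R):
    """Correct with a direct pose observation (H = I), e.g. scan matching."""
    H = np.eye(3)
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    mu_new = mu + K @ (z - H @ mu)
    P_new = (np.eye(3) - K @ H) @ P
    return mu_new, P_new

The gain K weights each sensor by its uncertainty, which is why fusing IMU, odometer and laser radar can outperform any single sensor across environments as different as an office and an exhibition hall.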


Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 220 ◽  
Author(s):  
Ruibin Guo ◽  
Keju Peng ◽  
Dongxiang Zhou ◽  
Yunhui Liu

Orientation estimation is a crucial part of robotics tasks such as motion control, autonomous navigation, and 3D mapping. In this paper, we propose a robust visual method for estimating a robot's drift-free orientation with RGB-D cameras. First, we detect and track hybrid features (i.e., planes, lines, and points) from color and depth images, which provide reliable constraints even in challenging environments with low texture or no consistent lines. Then, we construct a cost function based on these features and, by minimizing it, obtain an accurate rotation matrix for each captured frame with respect to its reference keyframe. Furthermore, we present a vanishing-direction estimation method to extract the Manhattan World (MW) axes; by aligning the current MW axes with the global MW axes, we refine the aforementioned rotation matrix of each keyframe and achieve drift-free orientation. Experiments on public RGB-D datasets demonstrate the robustness and accuracy of the proposed algorithm for orientation estimation. In addition, we have applied the proposed visual compass to pose estimation, and evaluation on public sequences shows improved accuracy.
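
A sketch of the alignment step that makes the orientation drift-free: given unit vectors for the MW axes detected in the current frame, the rotation that best aligns them with the global MW axes is the solution of an orthogonal Procrustes problem, solvable in closed form via SVD. This illustrates only the alignment; the paper's feature tracking and vanishing-direction estimation are not reproduced here, and the function name is ours.

import numpy as np

def align_to_manhattan(current_axes, global_axes=np.eye(3)):
    """Rotation R minimising ||R @ current_axes - global_axes||_F.

    current_axes : 3x3 matrix whose columns are the MW axes seen in
                   the current frame
    global_axes  : 3x3 matrix whose columns are the reference MW axes
    """
    U, _, Vt = np.linalg.svd(global_axes @ current_axes.T)
    # Force det(R) = +1 so the result is a proper rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

Because the MW axes are a fixed global reference, re-aligning each keyframe to them removes accumulated rotational drift rather than merely slowing its growth.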


2020 ◽  
Vol 12 (20) ◽  
pp. 3386
Author(s):  
Juan Sandino ◽  
Fernando Vanegas ◽  
Frederic Maire ◽  
Peter Caccetta ◽  
Conrad Sanderson ◽  
...  

Response efforts in emergency applications such as border protection, humanitarian relief and disaster monitoring have improved with the use of Unmanned Aerial Vehicles (UAVs), which provide a flexibly deployed eye in the sky. These efforts have been further improved by advances in autonomous behaviours such as obstacle avoidance, take-off, landing, hovering and waypoint flight modes. However, most UAVs lack autonomous decision making for navigating in complex environments, which creates a reliance of UAVs on ground control stations and, therefore, on their communication systems. The challenge is even greater in indoor flight operations, where Global Navigation Satellite System (GNSS) signals are absent or weak and aircraft behaviour is compromised. This paper proposes a UAV framework for autonomous navigation that addresses uncertainty and partial observability from imperfect sensor readings in cluttered indoor scenarios. The framework design allocates the computing processes onboard the flight controller and companion computer of the UAV, allowing it to explore dangerous indoor areas without the supervision and physical presence of a human operator. The system is illustrated in a Search and Rescue (SAR) scenario in which it detects and locates victims inside a simulated office building. The navigation problem is modelled as a Partially Observable Markov Decision Process (POMDP) and solved in real time with the Augmented Belief Trees (ABT) algorithm. Data are collected using Hardware in the Loop (HIL) simulations and real flight tests. Experimental results show the robustness of the proposed framework in detecting victims at various levels of location uncertainty. The proposed system preserves personal safety by letting the UAV explore dangerous environments without the intervention of a human operator.
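
To illustrate the partial-observability core of such a problem: the planner reasons over a belief, a probability distribution over where the victim might be, updated from noisy detector readings. The sketch below shows a discrete Bayes update of that belief; the full POMDP is solved online with ABT in the paper, and the detection probabilities here are hypothetical.

import numpy as np

def belief_update(belief, observed_cell, detection, p_hit=0.85, p_false=0.10):
    """Bayes update of P(victim in cell i) after sensing one cell.

    belief        : 1-D float array summing to 1 over candidate cells
    observed_cell : index of the cell the UAV just sensed
    detection     : True if the onboard detector fired, else False
    """
    # Likelihood of the reading for every hypothesis about the victim's cell.
    likelihood = np.full(len(belief), p_false if detection else 1.0 - p_false)
    likelihood[observed_cell] = p_hit if detection else 1.0 - p_hit
    posterior = likelihood * belief
    return posterior / posterior.sum()

A POMDP policy (e.g. one computed by ABT) maps such beliefs to actions, steering the UAV toward the cells whose inspection most reduces the remaining uncertainty about the victim's location.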

