Evolving a Vision-Driven Robot Controller for Real-World Indoor Navigation

Author(s):  
Paweł Gajda ◽  
Krzysztof Krawiec

2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Shinpei Ito ◽  
Akinori Takahashi ◽  
Ruochen Si ◽  
Masatoshi Arikawa

Abstract. Augmented reality (AR) is now available as a basic, high-level function on the latest reasonably priced smartphones. AR lets users experience consistent three-dimensional (3D) spaces in which real and virtual 3D objects co-exist, by sensing the real 3D environment through a camera and reconstructing it in the virtual world. The accuracy of sensing real 3D environments with a smartphone's AR function, i.e., its visual-inertial odometer, is far higher than that of its GPS receiver and can be below one centimeter. However, current AR applications generally target small real 3D spaces rather than large ones; in other words, most are not designed for use with a geographic coordinate system.

We propose a global extension of the visual-inertial odometer that adds image recognition of geo-referenced image markers installed in real 3D spaces. Such markers can be generated, for example, from analog guide boards that already exist in the real world. We tested this framework, built on the visual-inertial odometer embedded in a smartphone, on the first floor of the central library of Akita University. Geo-referenced image markers such as floor map boards and book category signboards were registered in a database of 3D geo-referenced real-world scene images. Our prototype system, developed on a smartphone (iPhone XS, Apple Inc.), first recognized a floor map board (Fig. 1) and determined the precise 3D distance and direction of the smartphone from the center of the board, in a local 3D coordinate space whose origin is that center. The system then converted this relative position and direction of the smartphone's camera in the local coordinate space into a precise global location and orientation. A subject walked the first floor of the library building with the smartphone's world-tracking function running. The experiment showed that the tracking error in the global coordinate system accumulated, but remained modest: only about 30 centimeters after the subject had walked about 30 meters (Fig. 2). We now plan to improve the indoor navigation accuracy of our prototype by calibrating the smartphone's location and orientation through sequential recognition of multiple geo-referenced scene image markers that already serve general library users. Given these encouraging results, we are preparing a more practical high-precision LBS that navigates a user to the exact location of a book of interest on a bookshelf, using AR and floor map interfaces.
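The key step here is the coordinate conversion the abstract describes: once a marker with a known global pose is recognized, the phone's pose relative to that marker can be mapped into global coordinates. Below is a minimal Python sketch of that conversion; the function names, the yaw-only rotation, and the marker pose fields are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): converting a phone pose measured
# relative to a geo-referenced marker into global coordinates. Assumes the
# marker's global position and heading come from the marker database.
import numpy as np

def yaw_matrix(theta):
    """Rotation about the vertical (z) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def local_to_global(marker_global_xyz, marker_heading,
                    phone_local_xyz, phone_local_yaw):
    """Map a pose expressed in the marker's local frame (origin at the
    marker's center) into the global frame."""
    R = yaw_matrix(marker_heading)
    global_xyz = np.asarray(marker_global_xyz) + R @ np.asarray(phone_local_xyz)
    global_yaw = marker_heading + phone_local_yaw
    return global_xyz, global_yaw

# Example: marker at (100.0, 250.0, 0.0) m in a local ENU frame, facing 30 deg;
# phone detected 2 m in front of the marker and 0.5 m to the side.
pos, yaw = local_to_global((100.0, 250.0, 0.0), np.radians(30.0),
                           (2.0, 0.5, 1.4), np.radians(-15.0))
print(pos, np.degrees(yaw))
```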



2021 ◽  
Vol 7 ◽  
pp. e704
Author(s):  
Wei Ma ◽  
Shuai Zhang ◽  
Jincai Huang

Unlike traditional visualization methods, augmented reality (AR) inserts virtual objects and information directly into digital representations of the real world, making these objects and data easier to understand and interact with. Integrating AR with GIS is a promising way to display spatial information in context. However, most existing AR-GIS applications only provide local spatial information at a fixed location, and suffer from limited legibility, information clutter, and incomplete spatial relationships. In addition, indoor space structure is complex and GPS is unavailable indoors, so indoor AR systems are further impeded by a limited capacity to detect and display location and semantic information. To address these problems, we track the camera position by fusing Bluetooth low energy (BLE) and pedestrian dead reckoning (PDR); the multi-sensor fusion algorithm employs a particle filter. Based on the direction and position of the phone, spatial information is automatically registered onto the live camera view. The proposed algorithm extracts a bounding box of the indoor map and matches it to the real-world scene. Finally, the indoor map and semantic information are rendered into the real world, based on the spatial relationship between the indoor map and the live camera view, computed in real time. Experimental results demonstrate that the average positioning error of our approach is 1.47 m, and that 80% of the errors are within approximately 1.8 m. This positioning accuracy effectively supports the AR and indoor map fusion technique in linking rich indoor spatial information to real-world scenes. The method is not only suitable for traditional indoor navigation tasks, but is also promising for crowdsourced data collection and indoor map reconstruction.
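The abstract names the fusion machinery: a particle filter that combines PDR step predictions with BLE range corrections. The following is a minimal sketch of one predict-update-resample cycle; the noise parameters, beacon model, and function names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the authors' implementation) of a particle
# filter fusing pedestrian dead reckoning (PDR) steps with Bluetooth low
# energy (BLE) distance estimates, as described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform([0, 0], [20, 20], size=(N, 2))  # x, y in meters
weights = np.full(N, 1.0 / N)

def pdr_predict(particles, step_len, heading, sigma=0.1):
    """Propagate each particle by one detected step, with noise."""
    delta = step_len * np.array([np.cos(heading), np.sin(heading)])
    return particles + delta + rng.normal(0.0, sigma, size=particles.shape)

def ble_update(particles, weights, beacon_xy, measured_dist, sigma=2.0):
    """Reweight particles by how well they explain a BLE range estimate."""
    d = np.linalg.norm(particles - beacon_xy, axis=1)
    weights = weights * np.exp(-0.5 * ((d - measured_dist) / sigma) ** 2)
    return weights / weights.sum()

def resample(particles, weights):
    """Resample to avoid particle degeneracy."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One fusion cycle: a PDR step, then a BLE correction from a known beacon.
particles = pdr_predict(particles, step_len=0.7, heading=np.radians(45))
weights = ble_update(particles, weights, np.array([10.0, 5.0]), measured_dist=6.0)
particles, weights = resample(particles, weights)
print("position estimate:", particles.mean(axis=0))
```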



Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1123
Author(s):  
David Jurado ◽  
Juan M. Jurado ◽  
Lidia Ortega ◽  
Francisco R. Feito

Mixed reality (MR) enables a novel way to visualize virtual objects in real scenes while respecting physical constraints. The technology arises alongside other significant advances in sensor fusion for human-centric 3D capture. Recent advances in scanning the user's environment, real-time visualization, and 3D vision on ubiquitous systems such as smartphones allow us to capture 3D data from the real world. In this paper, an application for assessing the status of indoor infrastructure is proposed. The installation and maintenance of hidden facilities such as water pipes, electrical lines, and air conditioning ducts, which are usually occluded behind walls, entails tedious and inefficient tasks. Most of these infrastructures are digitized, but they cannot be visualized onsite. In this research, we focus on the development of a new application (GEUINF) to be launched on smartphones capable of capturing 3D data of the real world by depth sensing. This information is used to determine the user's position and orientation. Although previous approaches used fixed markers for this purpose, our application estimates both parameters with centimeter accuracy without them. This is possible because our method matches reconstructed walls of the real world to 3D planes of the replicated world in a virtual environment. Our markerless approach scans planar surfaces of the user's environment and geometrically aligns them with their corresponding virtual 3D entities. In a preprocessing phase, the 2D CAD geometry available from an architectural project is used to generate 3D models of the indoor building structure. In real time, these virtual elements are tracked against the real ones modeled using the ARCore library. Once the alignment between the virtual and real worlds is done, the application enables visualization, navigation, and interaction with the virtual facility networks in real time. Thus, our method may be used by private companies and public institutions responsible for indoor facility management, and may also be integrated with other applications focused on indoor navigation.
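The core of the markerless localization is the geometric alignment between scanned wall surfaces and their CAD-derived virtual counterparts. One minimal way to realize such an alignment, given matched points on a scanned wall and on its model wall, is a rigid least-squares (Kabsch) fit; the sketch below is an illustrative assumption, not GEUINF's actual code.

```python
# Minimal sketch (an assumption, not GEUINF's code): aligning points sampled
# on walls reconstructed by the device with their matched counterparts in the
# CAD-derived virtual model, via a rigid (Kabsch) fit.
import numpy as np

def rigid_align(scanned, model):
    """Least-squares rotation R and translation t with model ~ R @ scanned + t,
    for matched Nx3 point sets (Kabsch algorithm)."""
    cs, cm = scanned.mean(axis=0), model.mean(axis=0)
    H = (scanned - cs).T @ (model - cm)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cs
    return R, t

# Example: corners of one scanned wall matched to the same wall in the model.
scanned = np.array([[0.1, 0.0, 0.0], [3.1, 0.2, 0.0],
                    [3.1, 0.2, 2.5], [0.1, 0.0, 2.5]])
model = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0],
                  [3.0, 0.0, 2.5], [0.0, 0.0, 2.5]])
R, t = rigid_align(scanned, model)
print(np.round(R, 3), np.round(t, 3))
```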



Author(s):  
Paul Lozovyy ◽  
George Thomas ◽  
Dan Simon

This research develops an engineering test for a newly developed evolutionary algorithm called biogeography-based optimization (BBO), along with a distributed implementation of BBO. The BBO algorithm is based on mathematical models of biogeography, which describe the migration of species between habitats; BBO adapts the theory of biogeography to solve general optimization problems. In this research, BBO is used to tune a proportional-derivative (PD) control system for real-world mobile robots. The authors show that BBO can successfully tune the robots' control algorithm, reducing their tracking-error cost function by 65% from nominal values. This chapter describes the hardware, the software, and the results obtained by various implementations of BBO.
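As a rough illustration of how BBO's migration mechanism can tune PD gains, the sketch below treats each habitat as a (kp, kd) pair, migrates variables from fit habitats toward poor ones, and mutates occasionally. The cost function is a hypothetical stand-in for the robots' tracking-error cost; none of this is the authors' code.

```python
# Minimal sketch of biogeography-based optimization (BBO) tuning PD gains.
import numpy as np

rng = np.random.default_rng(1)

def cost(gains):
    """Hypothetical stand-in: penalize distance from some 'ideal' PD gains."""
    return np.sum((gains - np.array([4.0, 0.8])) ** 2)

pop = rng.uniform([0.0, 0.0], [10.0, 2.0], size=(20, 2))  # habitats = (kp, kd)
for _ in range(100):
    pop = pop[np.argsort([cost(h) for h in pop])]  # best habitat first
    n = len(pop)
    mu = np.linspace(1.0, 0.0, n)        # emigration: high for good habitats
    lam = 1.0 - mu                       # immigration: high for poor habitats
    new_pop = pop.copy()
    for i in range(n):
        for d in range(pop.shape[1]):
            if rng.random() < lam[i]:    # immigrate this variable?
                j = rng.choice(n, p=mu / mu.sum())  # source picked by emigration
                new_pop[i, d] = pop[j, d]
            if rng.random() < 0.02:      # occasional mutation
                new_pop[i, d] += rng.normal(0.0, 0.2)
    new_pop[0] = pop[0]                  # elitism: keep the best habitat
    pop = np.clip(new_pop, [0.0, 0.0], [10.0, 2.0])

print("best gains:", pop[np.argmin([cost(h) for h in pop])])
```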



Author(s):  
Keith Stouffer

Abstract. Virtual objects in a web-based environment can be interfaced to and controlled by external real-world controllers. A Virtual Reality Modeling Language (VRML) welding cell was created that models a robotic arc welding cell (the Automated Welding Manufacturing System project) located at the National Institute of Standards and Technology (NIST). The VRML welding cell contains a model of a 7-degree-of-freedom robot, a welding table, a torch, and various fixtures and parts. The VRML robot is interfaced to, and can be controlled by, the real-world robot controller. This is accomplished by a socket connection between the collaborator's web browser and the real-world controller. The current joint angles of the robot, which are stored in a world model buffer in the controller, are collected by a Java applet running on the web page. The applet updates the VRML model of the robot via the External Authoring Interface (EAI) of the VRML plug-in. Virtual welds, a series of VRML cylinders, are also created dynamically every 100 ms on the part, based on the current robot position, and colored according to the calculated weld quality of that section of weld obtained from the real-world controller. This allows a collaborator to visually determine where a bad section of weld has occurred without being present in the physical welding lab.
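The update loop described here is a simple polling pattern: every 100 ms, read the controller's current joint angles over the socket and push them into the virtual scene. The original used a Java applet and the VRML EAI; the Python sketch below only illustrates the pattern, and the host, port, and wire format are assumptions.

```python
# Minimal sketch of the polling pattern described above; the request message
# and line-based reply format are hypothetical, not the NIST protocol.
import socket
import time

def update_virtual_robot(angles):
    """Stand-in for pushing joint angles into the virtual scene
    (the NIST system did this through the VRML EAI)."""
    print("joints:", angles)

def poll_joint_angles(host="controller.example", port=9000, period=0.1):
    """Poll the real-world controller for joint angles every `period` seconds."""
    with socket.create_connection((host, port)) as conn:
        reader = conn.makefile("r")
        while True:
            conn.sendall(b"GET_JOINTS\n")    # hypothetical request message
            line = reader.readline()          # e.g. "0.10 0.52 -0.31 ..."
            update_virtual_robot([float(x) for x in line.split()])
            time.sleep(period)                # 0.1 s = the 100 ms cycle
```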



Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5890
Author(s):  
Bo-Chen Huang ◽  
Jiun Hsu ◽  
Edward T.-H. Chu ◽  
Hui-Mei Wu

Due to the popularity of indoor positioning technology, indoor navigation applications have been deployed in large buildings, such as hospitals, airports, and train stations, to guide visitors to their destinations. A commonly used interface, shown on smartphones, is a 2D floor map with a route to the destination; navigation instructions, such as turn left, turn right, and go straight, pop up on the screen when users come to an intersection. However, owing to the restrictions of a 2D navigation map, users may feel mental pressure and become confused while connecting the real environment to the 2D navigation map before moving forward. For this reason, we developed ARBIN, an augmented-reality-based navigation system that posts navigation instructions directly on the live view of the real-world environment, so users need not map instructions onto their surroundings themselves. To evaluate the applicability of ARBIN, a series of experiments was conducted in the outpatient area of the National Taiwan University Hospital YunLin Branch, which covers nearly 1800 m2 and has 35 destinations and points of interest, such as a cardiovascular clinic, an X-ray examination room, and a pharmacy. Four different types of smartphones were used for evaluation. Our results show that ARBIN achieves 3 to 5 m positioning accuracy and provides users with correct instructions on their way to their destinations. ARBIN proved to be a practical solution for indoor navigation, especially for large buildings.
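A plausible (assumed) way to realize the pop-up behavior the abstract describes is to trigger an instruction whenever the estimated position comes within a threshold of the next route waypoint, with the threshold sized to tolerate the reported 3 to 5 m positioning error. The sketch below illustrates this idea; it is not ARBIN's actual logic.

```python
# Minimal sketch (assumed logic, not ARBIN's code) of triggering a turn
# instruction near a route waypoint; the 5 m trigger radius reflects the
# 3-5 m positioning accuracy reported in the abstract.
import math

def next_instruction(position, route, trigger_radius=5.0):
    """route: list of (x, y, instruction) waypoints in meters."""
    for x, y, instruction in route:
        if math.hypot(position[0] - x, position[1] - y) <= trigger_radius:
            return instruction
    return "go straight"

route = [(12.0, 3.0, "turn left"), (25.0, 3.0, "turn right")]
print(next_instruction((11.0, 2.0), route))  # -> "turn left"
```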



Author(s):  
Natália Souza Soares ◽  
João Marcelo Xavier Natário Teixeira ◽  
Veronica Teichrieb

In this work, we propose a framework to train a robot in a virtual environment using reinforcement learning (RL) techniques, thus facilitating the use of this type of approach in robotics. With our integrated solution for virtual training, it is possible to change the environment parameters programmatically, making it easy to apply domain randomization techniques on the fly. We conducted experiments with a TurtleBot 2i on an indoor navigation task with static obstacle avoidance, using the RL algorithm Proximal Policy Optimization (PPO). Our results show that even though training used no real data, the trained model generalized to different virtual environments and real-world scenes.
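The on-the-fly domain randomization mentioned above amounts to resetting the simulator with freshly randomized parameters at every episode. The sketch below illustrates that loop; the environment interface, parameter names, and policy methods are illustrative assumptions, not the framework's API.

```python
# Minimal sketch of per-episode domain randomization around a PPO-style
# training loop. `env` and `policy` are hypothetical objects with an
# illustrative interface, not the authors' framework.
import random

def randomize_environment(env):
    """Programmatically vary simulator parameters between episodes."""
    env.configure(
        lighting=random.uniform(0.3, 1.0),                     # ambient light
        floor_texture=random.choice(["wood", "tile", "carpet"]),
        obstacle_count=random.randint(2, 8),                   # static obstacles
        obstacle_scale=random.uniform(0.5, 1.5),
    )

def train(env, policy, episodes=1000):
    for _ in range(episodes):
        randomize_environment(env)          # new variation every episode
        obs = env.reset()
        done = False
        while not done:
            action = policy.act(obs)
            obs, reward, done, _ = env.step(action)
            policy.record(obs, action, reward)  # buffer for the PPO update
        policy.update()                     # one PPO optimization step
```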


