Robust 2D Mapping Integrating with 3D Information for the Autonomous Mobile Robot Under Dynamic Environment

Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1503 ◽  
Author(s):  
Bin Zhang ◽  
Masahide Kaneko ◽  
Hun-ok Lim

In order to move around automatically, mobile robots usually need to recognize their working environment first. Simultaneous localization and mapping (SLAM), by which a robot can generate a map while moving around, has recently become an important research field. Both two-dimensional (2D) and three-dimensional (3D) mapping methods have been developed considerably and can achieve high accuracy. However, 2D maps cannot reflect the spatial structure of the environment, and 3D mapping requires long processing times. Moreover, conventional SLAM methods based on grid maps take a long time to delete moving objects from the map and can hardly remove potential moving objects. In this paper, a 2D mapping method that integrates 3D information based on immobile area occupied grid maps is proposed. Objects in 3D space are recognized, and their spatial information (e.g., shapes) and properties (moving objects, or potential moving objects such as people standing still) are projected onto the 2D plane to update the 2D map. With the immobile area occupied grid map method, recognized still objects are reflected in the map quickly by updating the immobile-area occupancy probability with a high coefficient. Meanwhile, recognized moving objects and potential moving objects are not used to update the map. Unknown objects are reflected in the 2D map with a lower immobile-area occupancy probability so that they can be deleted quickly once they are recognized as moving objects or start to move. The effectiveness of our method is demonstrated by mapping experiments with a mobile robot in a dynamic indoor environment.
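As an editorial illustration of the class-dependent update rule this abstract describes (not the authors' code), the sketch below updates a per-cell immobile-area occupancy probability with a high coefficient for recognized still objects and a low one for unknown objects, while skipping (potential) movers; the labels, coefficient values, and function names are assumptions.

```python
import numpy as np

# Hypothetical coefficients: high for recognized still objects, low for unknown.
ALPHA = {"still": 0.9, "unknown": 0.3}  # moving/potential-moving: no update

def update_immobile_prob(grid, hits, label):
    """Blend cell probabilities toward occupancy for observed cells.

    grid  : 2D array of immobile-area occupancy probabilities in [0, 1]
    hits  : boolean mask of cells covered by the projected 3D object
    label : "still", "unknown", "moving", or "potential_moving"
    """
    if label in ("moving", "potential_moving"):
        return grid  # recognized (potential) movers never reinforce the map
    a = ALPHA[label]
    grid = grid.copy()
    # Exponential update: still objects converge fast, unknown ones slowly,
    # so unknown cells are cheap to erase once the object starts to move.
    grid[hits] = (1 - a) * grid[hits] + a
    return grid
```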

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Xiong Zhao ◽  
Tao Zuo ◽  
Xinyu Hu

Most current visual Simultaneous Localization and Mapping (SLAM) algorithms are designed under the assumption of a static environment, and their robustness and accuracy degrade in dynamic environments. The reason is that moving objects in the scene cause feature mismatches in the pose estimation process, which in turn harms positioning and mapping accuracy. At the same time, three-dimensional semantic maps play a key role in mobile robot navigation, path planning, and other tasks. In this paper, we present OFM-SLAM (Optical Flow combining Mask-RCNN SLAM), a novel visual SLAM system for semantic mapping in dynamic indoor environments. First, we use the Mask-RCNN network to detect potential moving objects and generate masks of dynamic objects. Second, an optical flow method is adopted to detect dynamic feature points. We then combine the optical flow method and Mask-RCNN to cull all dynamic points, so that the SLAM system can track without them. Finally, the semantic labels obtained from Mask-RCNN are mapped onto the point cloud to generate a three-dimensional semantic map that contains only the static parts of the scenes and their semantic information. We evaluate our system on the public TUM datasets. The experimental results demonstrate that our system is more effective in dynamic scenarios: OFM-SLAM estimates the camera pose more accurately and achieves more precise localization in highly dynamic environments.
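A minimal sketch of the dynamic-point culling step, assuming OpenCV's Lucas-Kanade tracker and a RANSAC fundamental-matrix test as the optical-flow consistency check (the abstract does not spell out the exact criterion); all names are illustrative.

```python
import cv2
import numpy as np

def cull_dynamic_points(prev_img, cur_img, prev_pts, dyn_mask, epi_thresh=1.0):
    """Keep only points that are outside Mask-RCNN dynamic masks and
    consistent with the dominant epipolar geometry (camera ego-motion).

    prev_img, cur_img : 8-bit grayscale frames
    prev_pts          : (N, 1, 2) float32 corners from the previous frame
    dyn_mask          : uint8 image, nonzero where a movable object is masked
    """
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, None)
    ok = status.ravel() == 1
    p0, p1 = prev_pts[ok], cur_pts[ok]

    # Fit a fundamental matrix on tracked points; RANSAC inliers follow the
    # camera's ego-motion, outliers are likely independently moving points.
    F, inlier = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, epi_thresh)
    inlier = (np.ones(len(p0), bool) if F is None
              else inlier.ravel().astype(bool))

    # Reject points whose current location falls inside a dynamic-object mask.
    xy = p1.reshape(-1, 2).astype(int)
    outside = dyn_mask[xy[:, 1].clip(0, dyn_mask.shape[0] - 1),
                       xy[:, 0].clip(0, dyn_mask.shape[1] - 1)] == 0

    keep = inlier & outside
    return p0[keep], p1[keep]
```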


2013 ◽  
Vol 39 (4) ◽  
pp. 364-372 ◽  
Author(s):  
Y. Edirisinghe ◽  
J. M. Troupis ◽  
M. Patel ◽  
J. Smith ◽  
M. Crossett

We used a dynamic three-dimensional (3D) mapping method to model the wrist during dynamic, unrestricted dart-thrower's motion in three men and four women. With the aid of precision landmark identification, a 3D coordinate system was applied to the distal radius and the movement of the carpus was described. Subsequently, with dynamic 3D reconstructions and the freedom to position the camera viewpoint anywhere in space, we observed the motion pathways of all carpal bones in dart-thrower's motion and calculated its axis of rotation. This axis was found to lie in 27° of anteversion from the coronal plane and 44° of varus angulation relative to the transverse plane. This technique is a safe and feasible carpal imaging method for gaining key information for decision making in future hand surgical and rehabilitative practice.
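For readers curious about the underlying geometry: a rotation axis can be recovered from a rotation matrix as the eigenvector with eigenvalue 1. The sketch below is a generic illustration, not the authors' pipeline, and the anatomical frame convention is an assumption.

```python
import numpy as np

def rotation_axis(R):
    """Unit rotation axis of a 3x3 rotation matrix: the (real) eigenvector
    whose eigenvalue is 1."""
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return axis / np.linalg.norm(axis)

def angle_from_plane_deg(axis, plane_normal):
    """Angle between the axis and a reference plane given its unit normal
    (e.g., the coronal plane of an assumed distal-radius frame)."""
    s = abs(np.dot(axis, plane_normal))
    return np.degrees(np.arcsin(np.clip(s, 0.0, 1.0)))
```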


2015 ◽  
Vol 27 (4) ◽  
pp. 356-364 ◽  
Author(s):  
Masatoshi Nomatsu ◽  
Youhei Suganuma ◽  
Yosuke Yui ◽  
Yutaka Uchimura

<div class=""abs_img""> <img src=""[disp_template_path]/JRM/abst-image/00270004/05.jpg"" width=""200"" /> Developed autonomous mobile robot</div> In describing real-world self-localization and target-search methods, this paper discusses a mobile robot developed to verify a method proposed in Tsukuba Challenge 2014. The Tsukaba Challenge course includes promenades and parks containing ordinary pedestrians and bicyclists that require the robot to move toward a goal while avoiding the moving objects around it. Common self-localization methods often include 2D laser range finders (LRFs), but such LRFs do not always capture enough data for localization if, for example, the scanned plane has few landmarks. To solve this problem, we used a three-dimensional (3D) LRF for self-localization. The 3D LRF captures more data than the 2D type, resulting in more robust localization. Robots that provide practical services in real life must, among other functions, recognize a target and serve it autonomously. To enable robots to do so, this paper describes a method for searching for a target by using a cluster point cloud from the 3D LRF together with image processing of colored images captured by cameras. In Tsukuba Challenge 2014, the robot we developed providing the proposed methods completed the course and found the targets, verifying the effectiveness of our proposals. </span>


Robotica ◽  
2015 ◽  
Vol 34 (11) ◽  
pp. 2592-2609 ◽  
Author(s):  
Anderson Souza ◽  
Luiz M. G. Gonçalves

SUMMARY This paper proposes an alternative environment mapping method for accurate robotic navigation based on 3D information. Typical techniques for 3D mapping with occupancy grids require intensive computational workloads to both build and store the map. This work introduces an Occupancy-Elevation Grid (OEG) mapping technique, a discrete mapping approach in which each cell represents the occupancy probability, the height of the terrain, and its variance. This representation allows a mobile robot to know, with an accurate degree of certainty, whether a place in the environment is occupied by an obstacle and how tall that obstacle is. Thus, based on its hardware characteristics, the robot can decide whether it is possible to traverse that specific place. In general, the map representation introduced can be used in conjunction with any kind of distance sensor. In this work, we use laser range data and stereo system data with a probabilistic treatment. The resulting maps support tasks such as decision making for autonomous navigation, exploration, localization, and path planning, taking into account both the existence and the height of obstacles. Experiments carried out with real data demonstrate that the proposed approach yields useful maps for autonomous navigation.
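A minimal sketch of what an OEG cell might store and how a new height measurement could be fused, assuming a simple 1D Kalman update (the paper's probabilistic treatment may differ); all names and thresholds are illustrative.

```python
from dataclasses import dataclass

@dataclass
class OEGCell:
    """One Occupancy-Elevation Grid cell: occupancy probability plus a
    Gaussian height estimate (mean and variance)."""
    p_occ: float = 0.5
    height: float = 0.0
    var: float = 1e3   # large prior variance: height initially unknown

    def update_height(self, z, sensor_var):
        # Standard 1D Kalman/Bayes fusion of a new height measurement z.
        k = self.var / (self.var + sensor_var)
        self.height += k * (z - self.height)
        self.var *= (1 - k)

def traversable(cell, robot_clearance):
    """A cell is passable if it is likely free, or occupied but low enough
    for the robot's hardware to drive over."""
    return cell.p_occ < 0.5 or cell.height < robot_clearance
```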


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Agus Budi Dharmawan ◽  
Shinta Mariana ◽  
Gregor Scholz ◽  
Philipp Hörmann ◽  
Torben Schulze ◽  
...  

Abstract Performing long-term cell observations is a non-trivial task for conventional optical microscopy, since it is usually not compatible with the temperature and humidity requirements of an incubator. Lensless holographic microscopy, being entirely based on semiconductor chips without lenses or any moving parts, has proven to be a very interesting alternative to conventional microscopy. Here, we report on the integration of a computational parfocal feature, which operates by analyzing the wave propagation distribution, to perform a fast autofocusing process. This unique non-mechanical focusing approach keeps the imaged object in focus during continuous long-term, real-time recordings. A light-emitting diode (LED) combined with a pinhole was used to realize a point light source, leading to a resolution down to 2.76 μm. Our approach delivers not only in-focus sharp images of dynamic cells, but also three-dimensional (3D) information on their (x, y, z)-positions. System reliability tests were conducted inside a sealed incubator to monitor cultures of three different biological living cells (i.e., MIN6, neuroblastoma (SH-SY5Y), and Prorocentrum minimum). Altogether, this autofocusing framework enables new opportunities for highly integrated microscopic imaging and dynamic tracking of moving objects in harsh environments with large sample areas.
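A generic sketch of non-mechanical autofocusing by numerical wave propagation (angular spectrum method) with a gradient-energy sharpness score. The paper's "wave propagation distribution analysis" criterion is not detailed in the abstract, so the focus metric here is an assumed stand-in.

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a complex field u0 (square pixels of pitch dx) over a
    distance z using the angular spectrum method."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))

def autofocus(hologram, wavelength, dx, zs):
    """Return the propagation distance in zs that maximizes image sharpness."""
    def sharpness(z):
        amp = np.abs(angular_spectrum(hologram, wavelength, dx, z))
        gy, gx = np.gradient(amp)
        return np.mean(gx ** 2 + gy ** 2)  # Tenengrad-style gradient energy
    return max(zs, key=sharpness)
```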


Robotica ◽  
2009 ◽  
Vol 27 (4) ◽  
pp. 499-509 ◽  
Author(s):  
K. Bendjilali ◽  
F. Belkhouche

SUMMARY This paper deals with the problem of collision course checking in a dynamic environment for mobile robotics applications. Our method is based on the relative kinematic equations between moving objects, written in polar form. A transformation of coordinates is derived under which collision between two moving objects reduces to collision between a stationary object and a virtual moving object. In addition to the direct collision course, we define the indirect collision course, which is more critical and difficult to detect. Under this formulation, the collision course problem is simplified, and complex scenarios are reduced to simple ones. In three-dimensional (3D) settings, the working space is decomposed into two planes, horizontal and vertical, and collision course detection in 3D is studied in these planes using 2D techniques. This formulation brings important simplifications to the collision course detection problem even in the most critical and difficult scenarios. Extensive simulations illustrate the method in 2D and 3D working spaces.
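A minimal 2D sketch of a direct collision-course test in the spirit described above (closing range with a frozen line-of-sight bearing). It works in Cartesian coordinates rather than the paper's polar form, and the tolerance is an assumption.

```python
import numpy as np

def on_collision_course(p_r, v_r, p_o, v_o, tol=1e-2):
    """Constant-bearing test: a direct collision course holds when the
    relative velocity is (anti-)parallel to the line of sight and the
    range is closing. All arguments are 2D numpy arrays in a fixed frame."""
    r = p_o - p_r                  # line of sight, robot -> object
    v = v_o - v_r                  # relative velocity of the object
    closing = np.dot(r, v) < 0     # negative range rate: approaching
    # Bearing rate ~ cross(r, v) / |r|^2; near zero means a frozen bearing.
    bearing_rate = (r[0] * v[1] - r[1] * v[0]) / np.dot(r, r)
    return closing and abs(bearing_rate) < tol
```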


1996 ◽  
Vol 8 (6) ◽  
pp. 555-560
Author(s):  
Masafumi Uchida ◽  
Hideto Ide ◽  
Syuichi Yokoyama

This study focuses on the path planning of a mobile robot within the motion planning of an autonomous robot. We regard path planning as a transformation from a pattern of environment information to a pattern of a plan, and we studied ways to process the occupied-region pattern of a working environment in parallel. This paper proposes a technique for forecasting environment changes and planning motion ahead of time.
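As a generic illustration of planning over an occupied-region pattern (not the paper's parallel method), a breadth-first search on a boolean occupancy grid:

```python
from collections import deque

def bfs_path(occ, start, goal):
    """Shortest 4-connected path on an occupancy grid (True = occupied).
    start and goal are (row, col) tuples; returns a list of cells or None."""
    q, parent = deque([start]), {start: None}
    while q:
        cur = q.popleft()
        if cur == goal:                 # walk the parent chain back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(occ) and 0 <= nc < len(occ[0])
                    and not occ[nr][nc] and (nr, nc) not in parent):
                parent[(nr, nc)] = cur
                q.append((nr, nc))
    return None  # goal unreachable
```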


Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2-4 nm is becoming a relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this situation possible. However, all these 3D reconstruction processes are quite computer intensive, and the medium-term future holds many proposals entailing an even greater need for computing power. Up to now, all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer the potential of order-of-magnitude increases in computing power and should therefore be considered for the most computing-intensive tasks.

We have studied both shared-memory computer architectures, such as the BBN Butterfly, and local-memory architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction from non-crystalline specimens ("single particles") using the so-called Random Conical Tilt Series Method. We start from a pair of images presenting the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationships between the tilted and untilted fields (this step is now accomplished by interactively marking a few pairs of corresponding features in the two fields). From here on, the 3D reconstruction process may run automatically.
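The "matrix describing the geometrical relationships" between the two fields could, for illustration, be estimated from the interactively marked feature pairs by a least-squares affine fit; the sketch below is an assumed stand-in, not the authors' procedure.

```python
import numpy as np

def fit_affine(untilted_pts, tilted_pts):
    """Least-squares 2D affine map sending untilted coordinates to the
    tilted field, from >= 3 manually marked corresponding features.

    untilted_pts, tilted_pts : (n, 2) arrays of matched (x, y) positions
    """
    n = len(untilted_pts)
    A = np.hstack([untilted_pts, np.ones((n, 1))])   # rows are [x, y, 1]
    # Solve A @ X ~= tilted_pts in the least-squares sense; X is 3x2.
    X, *_ = np.linalg.lstsq(A, tilted_pts, rcond=None)
    return X.T   # 2x3 matrix M, applied as M @ [x, y, 1]
```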


Author(s):  
Halit Dogan ◽  
Md Mahbub Alam ◽  
Navid Asadizanjani ◽  
Sina Shahbazmohamadi ◽  
Domenic Forte ◽  
...  

Abstract X-ray tomography is a promising technique that can provide micron-level, internal-structure, three-dimensional (3D) information about an integrated circuit (IC) component without serial sectioning or decapsulation. This is especially useful for counterfeit IC detection, as demonstrated by recent work. Although the components remain physically intact during tomography, the effect of the radiation on electrical functionality has not yet been fully investigated. In this paper we analyze the impact of X-ray tomography on the reliability of ICs with different fabrication technologies. We perform 3D imaging with an advanced X-ray machine on Intel flash memories, Macronix flash memories, and Xilinx Spartan 3 and Spartan 6 FPGAs. Electrical functionality is then tested in a systematic procedure after each round of tomography to estimate the impact of X-rays on flash erase time, read margin, and program operation, and on the frequencies of ring oscillators in the FPGAs. A major finding is that erase times for flash memories of older technology degrade significantly when exposed to tomography, eventually resulting in failure. However, the flash memories and Xilinx FPGAs of newer technologies appear less sensitive, showing only minor degradation. Further, we did not identify permanent failures for any chip within the time needed to perform tomography for counterfeit detection (approximately 2 hours).

