laser range
Recently Published Documents


TOTAL DOCUMENTS: 1247 (five years: 86)

H-INDEX: 41 (five years: 2)

Spine ◽  
2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Takafumi Koyama ◽  
Koji Fujita ◽  
Hirotaka Iijima ◽  
Mio Norose ◽  
Takuya Ibara ◽  
...  

2021 ◽  
Author(s):  
Muhammad Dafa Geraldine Putra Malik ◽  
Dadet Pramadihanto ◽  
Bima Sena Bayu Dewantara

2021 ◽  
Vol 14 (1) ◽  
pp. 20-23
Author(s):  
Ákos Mándi ◽  
Jeney Máté ◽  
Dominik Rózsa ◽  
Stefan Oniga

Abstract In this paper we present partial results of ongoing research aimed at developing a prototype controller and processing unit for a self-driving car. The framework we used consisted of a camera for visual input (Logitech 720p), a laser range finder for depth and object sensing (Parallax PulsedLight LIDAR-Lite v2), and an FPGA-based accelerator board (PYNQ-Z2) as the main processing unit.


2021 ◽  
Vol 11 (16) ◽  
pp. 7522
Author(s):  
Yoshitaka Kasai ◽  
Yutaka Hiroi ◽  
Kenzaburo Miyawaki ◽  
Akinori Ito

The development of robots that play with humans is a challenging topic in robotics. We are developing a robot that plays tag with human players. Such a robot must observe the players and obstacles around it, chase a target player, and touch the player without collision. To achieve this task, we propose two methods. The first is a player-tracking method in which the robot moves toward a virtual circle surrounding the target player; we used a laser range finder (LRF) as the sensor for player tracking. The second is a motion-control method for after the robot approaches the player: the robot retreats by moving in the direction opposite the player. We conducted both a simulation experiment and an experiment with a real robot. Both experiments showed that, with the proposed tracking method, the robot properly chased the player and then moved away from the player without collision. The contribution of this paper is a robot control method for approaching a human and then moving away safely.
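The abstract does not give the paper's exact control law, but the virtual-circle idea can be sketched geometrically. In this minimal sketch (function names, coordinates, and the circle radius are assumptions, not from the paper), the approach goal lies on a circle around the player, on the line from the player toward the robot, and the retreat goal lies in the opposite direction from the player:

```python
import math

def approach_goal(robot_xy, player_xy, radius):
    """Goal point on a virtual circle of `radius` around the player,
    on the segment from the player toward the robot."""
    rx, ry = robot_xy
    px, py = player_xy
    dx, dy = rx - px, ry - py
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return robot_xy  # degenerate: robot already at the player's position
    return (px + radius * dx / dist, py + radius * dy / dist)

def retreat_goal(robot_xy, player_xy, step):
    """After touching, move `step` meters away along the direction
    pointing from the player to the robot (i.e., opposite the player)."""
    rx, ry = robot_xy
    px, py = player_xy
    dx, dy = rx - px, ry - py
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return robot_xy
    return (rx + step * dx / dist, ry + step * dy / dist)
```

For example, a robot at (3, 0) chasing a player at the origin with a 1 m circle would be sent toward (1, 0), and after the touch would retreat further along the +x axis.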


2021 ◽  
Vol 24 (4) ◽  
pp. 28-31
Author(s):  
Akarsh Prabhakara ◽  
Vaibhav Singh ◽  
Swarun Kumar ◽  
Anthony G. Rowe

Tire wear is a leading cause of accidents. It is measured manually, by sensors embedded in tires, or by off-tire sensors. Manual measurement is extremely tedious; sensors embedded in tire treads are difficult to design and expensive to embed; and off-tire sensors such as laser range finders are prone to error from debris that settles in the grooves. To overcome these shortcomings, we propose an mmWave-radar-based tire wear sensor that is easy to install and continuously provides accurate, robust tire wear measurements even in the presence of debris.


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1772
Author(s):  
Gengyu Ge ◽  
Yi Zhang ◽  
Qin Jiang ◽  
Wei Wang

Localization for estimating the position and orientation of a robot in an asymmetrical environment has been solved by various 2D laser rangefinder simultaneous localization and mapping (SLAM) approaches. Laser-based SLAM generates an occupancy grid map, and the most popular Monte Carlo Localization (MCL) method then spreads particles over the map and estimates the robot's position probabilistically. However, this can be difficult in symmetrical environments, where landmarks or features may be insufficient to determine the robot's orientation; moreover, the position itself may not be unique unless the robot is at the geometric center. This paper presents a novel approach to robot localization in a symmetrical environment using a visual-features-assisted method: laser range measurements estimate the robot's position, while visual features determine its orientation. First, we convert the raw laser range scan into Cartesian coordinates and compute the geometric center. Second, we compute the distances from the geometric center to all scan end points and find the longest ones. We then compare those distances, fit lines, extract corner points, and compare the distances between adjacent corner points to decide whether the environment is symmetrical. Finally, if the environment is symmetrical, visual features based on the ORB keypoint detector and descriptor are added to the system to determine the robot's orientation. Experimental results show that our approach successfully determines the robot's position in a symmetrical environment where ordinary MCL and its extended localization methods consistently fail.
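The first steps of the pipeline described above (scan conversion and geometric-center distances) can be sketched as follows. This is a minimal illustration under assumed conventions (a uniform-angular-increment scan, names like `angle_min` and `angle_inc` borrowed from common LRF message formats), not the paper's implementation:

```python
import math

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert raw LRF range readings into Cartesian end points,
    assuming beam i is at angle angle_min + i * angle_inc."""
    return [(r * math.cos(angle_min + i * angle_inc),
             r * math.sin(angle_min + i * angle_inc))
            for i, r in enumerate(ranges)]

def geometric_center(points):
    """Geometric center (centroid) of the scan end points."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)

def center_distances(points, center):
    """Distances from the geometric center to every end point;
    the longest of these feed the symmetry check."""
    cx, cy = center
    return [math.hypot(x - cx, y - cy) for x, y in points]
```

A four-beam scan of equal ranges taken at right angles, for instance, yields a center at the origin and identical center distances, which is the kind of regularity the subsequent corner-distance comparison flags as symmetry.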

