USAGE OF MULTIPLE LIDAR SENSORS ON A MOBILE SYSTEM FOR THE DETECTION OF PERSONS WITH IMPLICIT SHAPE MODELS

Author(s):  
B. Borgmann ◽  
M. Hebel ◽  
M. Arens ◽  
U. Stilla

The focus of this paper is the processing of data from multiple LiDAR (light detection and ranging) sensors for the purpose of detecting persons in that data. Many LiDAR sensors (e.g., laser scanners) use a rotating scan head, which makes it difficult to properly time-synchronize multiple such sensors. Improper synchronization between LiDAR sensors causes temporal distortion effects if their data are directly merged. Merging the data is nevertheless desirable, since it can increase the data density and the perceived area. For person and object detection tasks, we present an alternative that circumvents the problem by merging the multi-sensor data in the voting space of a method based on Implicit Shape Models (ISM). Our approach already accounts for uncertainties in the voting space and is therefore robust against additional uncertainties induced by temporal distortions. Unlike many existing approaches for object detection in 3D data, our approach does not rely on a segmentation step during data preprocessing. We show that merging multi-sensor information in voting space has advantages over direct data merging, especially in situations with strong distortion effects.
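The voting-space merging described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the 2D votes, the grid cell size, and the detection threshold are all hypothetical, and the real method operates on richer ISM codebook votes.

```python
from collections import defaultdict

def merge_votes(votes_per_sensor, cell=0.5):
    """Accumulate ISM-style votes from several sensors into one voting space.

    votes_per_sensor: one list per sensor of (x, y, weight) votes.
    Merging happens in voting space, not on the raw point clouds, so small
    temporal offsets between sensors only blur the vote distribution slightly.
    """
    grid = defaultdict(float)
    for votes in votes_per_sensor:
        for x, y, w in votes:
            key = (round(x / cell), round(y / cell))
            grid[key] += w
    return grid

def detect(grid, threshold):
    """Return grid cells whose accumulated vote weight exceeds the threshold."""
    return [(kx, ky, w) for (kx, ky), w in grid.items() if w >= threshold]

# Two sensors voting for roughly the same person position; the second is
# slightly offset, emulating an imperfect time synchronization.
s1 = [(2.0, 3.0, 0.6), (2.1, 3.0, 0.5)]
s2 = [(2.05, 2.95, 0.7)]
grid = merge_votes([s1, s2])
hits = detect(grid, threshold=1.0)
```

Because votes from both sensors fall into the same cell, the detection survives the offset that would visibly distort a directly merged point cloud.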

Sensors are devices that can monitor temperature, humidity, pressure, noise levels, context awareness, and lighting conditions, and can detect the speed, position, and size of an object. Sensor data accumulate in huge quantities and are therefore managed using NoSQL databases. The data are gathered on an IoT cloud platform, where they are further processed with machine learning techniques for predictive analysis, and ultimately the required solution for the business framework is developed. This paper explains the proposed system for IoT data collection with the AWS (Amazon Web Services) cloud platform, laying out system components such as the Kinesis stream, the M2M platform, the notification service, and a secured IoT service. The complete BMS system architecture is detailed in this paper.
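As an illustration of the ingestion path described above, the following sketch mimics a Kinesis-style stream in plain Python: records are routed to shards by a hash of a partition key. The `MiniStream` class and all field names are hypothetical stand-ins, not AWS APIs.

```python
import hashlib
import json

class MiniStream:
    """Toy stand-in for a sharded ingestion stream (Kinesis-like routing)."""

    def __init__(self, shards=2):
        self.shards = [[] for _ in range(shards)]

    def put_record(self, data, partition_key):
        # Route by a stable hash of the partition key, so all records
        # from one sensor land on the same shard (preserving order).
        shard = int(hashlib.md5(partition_key.encode()).hexdigest(), 16) % len(self.shards)
        self.shards[shard].append(data)
        return shard

stream = MiniStream(shards=2)
reading = {"sensor_id": "temp-01", "temperature_c": 22.4, "humidity_pct": 41}
shard_id = stream.put_record(json.dumps(reading), partition_key=reading["sensor_id"])
```

In a real deployment the shard consumers would feed the NoSQL store and the downstream machine-learning stage.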


Robotica ◽  
1986 ◽  
Vol 4 (2) ◽  
pp. 93-100 ◽  
Author(s):  
S. S. Iyengar ◽  
C. C. Jorgensen ◽  
S. V. N. Rao ◽  
C. R. Weisbin

SUMMARY: Finding optimal paths for robot navigation in a known terrain has been studied for some time but, in many important situations, a robot would be required to navigate in completely new or partially explored terrain. We propose a method of robot navigation which requires no pre-learned model, makes maximal use of available information, records and synthesizes information from multiple journeys, and contains concepts of learning that allow for a continuous transition from local to global path optimality. The model of the terrain consists of a spatial graph and a Voronoi diagram. Using acquired sensor data, polygonal boundaries containing perceived obstacles shrink to approximate the actual obstacle surfaces, free space for transit is correspondingly enlarged, and additional nodes and edges are recorded based on path intersections and stop points. Navigation planning is gradually accelerated with experience, since improved global map information minimizes the need for further sensor data acquisition. Our method currently assumes that obstacle locations are unchanging, that navigation can be successfully conducted using two-dimensional projections, and that sensor information is precise.
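The learning effect described above can be illustrated with a shortest-path search over a spatial graph that is refined between journeys: as new free space is recorded, new edges shorten later plans. This is a generic sketch (Dijkstra over an adjacency dict), not the authors' algorithm; the node names and edge costs are invented.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra on a weighted adjacency dict {node: {neighbor: cost}}."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[goal]

# Spatial graph after one journey; a later journey reveals a shortcut.
graph = {"A": {"B": 4.0}, "B": {"C": 3.0}, "C": {}}
path1, cost1 = shortest_path(graph, "A", "C")   # detour via B
graph["A"]["C"] = 5.0                            # edge recorded on journey two
path2, cost2 = shortest_path(graph, "A", "C")   # direct route now cheaper
```

The second query needs no new sensing: the enriched map alone improves the plan, mirroring the paper's local-to-global transition.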


2021 ◽  
Author(s):  
Yujie Yan ◽  
Jerome F. Hajjar

Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information about the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data post-processing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information:

1) Object information, such as object identity, shapes, and spatial relationships: a novel heuristic-based method is proposed to automate the detection and recognition of the main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge of the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search and to reject erroneous detection results.

2) Structural damage information, such as damage locations and quantities: to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. For damage associated with large deformations, a new algorithm building on the surface normal-based method proposed in Guldur et al. (2014) is developed to enhance the robustness of damage assessment for structural elements with curved surfaces.

3) Three-dimensional volumetric models: the object information extracted from the laser scanning data is exploited to create a complete geometric representation of each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can finally be assembled into a finite element model of the entire bridge.

To validate the effectiveness of the developed methods and algorithms, several field data collections were conducted to gather both visual sensor data and physical measurements from experimental specimens and in-service bridges. The data were collected using terrestrial laser scanners combined with images, as well as laser scanners and cameras mounted on unmanned aerial vehicles.
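The surface normal-based idea behind the large-deformation assessment can be illustrated as follows: compute local surface normals and flag patches whose normals deviate strongly from an undamaged reference. The triangle patches, the 10° threshold, and the helper functions below are illustrative assumptions, not the algorithm from the study.

```python
import math

def normal(p, q, r):
    """Unit normal of the triangle (p, q, r) via the cross product."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]

def angle_deg(n1, n2):
    """Angle between two unit normals, in degrees."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

# A flat patch versus a locally deformed one: a large normal deviation
# flags possible damage (the threshold is an illustrative assumption).
flat = normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
bent = normal((0, 0, 0), (1, 0, 0.5), (0, 1, 0))
deviation = angle_deg(flat, bent)
damaged = deviation > 10.0
```

For curved surfaces, the reference normal would itself vary across the element, which is the robustness issue the study's new algorithm addresses.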


Author(s):  
O. Sekkas ◽  
S. Hadjiefthymiades ◽  
E. Zervas

During the past few years, several location systems have been proposed that use multiple technologies simultaneously to locate a user. One such system is described in this article. It relies on multiple sensor readings from Wi-Fi access points, IR beacons, RFID tags, and so forth to estimate the location of a user. This technique, better known as sensor information fusion, aims to improve accuracy and precision by integrating heterogeneous sensor observations. The proposed location system uses a fusion engine based on dynamic Bayesian networks (DBNs), thus substantially improving accuracy and precision.
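A much-simplified flavor of such probabilistic fusion is a discrete Bayesian update over candidate locations, multiplying independent sensor likelihoods into a prior. This sketch omits the temporal dynamics that make the system a DBN; the rooms and likelihood values are invented.

```python
def fuse(prior, likelihoods):
    """Fuse independent sensor likelihoods over discrete locations
    (a naive Bayes update, a static simplification of DBN fusion)."""
    posterior = dict(prior)
    for lik in likelihoods:
        for loc in posterior:
            posterior[loc] *= lik[loc]
    z = sum(posterior.values())          # normalize to a distribution
    return {loc: p / z for loc, p in posterior.items()}

prior = {"room_a": 0.5, "room_b": 0.5}
wifi  = {"room_a": 0.7, "room_b": 0.3}   # Wi-Fi RSSI likelihood (illustrative)
rfid  = {"room_a": 0.9, "room_b": 0.1}   # RFID tag sighting likelihood
posterior = fuse(prior, [wifi, rfid])
```

Each sensor alone is ambiguous, but the fused posterior concentrates sharply on one room, which is the precision gain heterogeneous fusion aims for.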


Author(s):  
Peyakunta Bhargavi ◽  
Singaraju Jyothi

The times we live in demand the convergence of cloud computing, fog computing, machine learning, and IoT to explore new technological solutions. Fog computing is an emerging architecture intended to alleviate the network burdens at the cloud and the core network by moving resource-intensive functionalities such as computation, communication, storage, and analytics closer to the end users. Machine learning is a subfield of computer science and a type of artificial intelligence (AI) that provides machines with the ability to learn without explicit programming. IoT has the ability to make decisions and take actions autonomously based on algorithmic sensing to acquire sensor data. These embedded capabilities will range across the entire spectrum of algorithmic approaches associated with machine learning. Here the authors explore how machine learning methods have been used for object detection and text detection in images, and how they can be incorporated to better fulfill requirements in fog computing.


2016 ◽  
Vol 693 ◽  
pp. 1397-1404 ◽  
Author(s):  
Qi Long Wang ◽  
Jian Yong Li ◽  
Hai Kuo Shen ◽  
Teng Teng Song ◽  
Yan Xuan Ma

A binocular vision sensor system is used in this paper for close-range air-to-air target positioning. Due to limitations of the model itself, the measurement accuracy along the optical axis is far lower than the accuracy in the vertical direction. To improve the measurement accuracy along the optical axis, the paper proposes using a laser range sensor in cooperation with the binocular vision sensor. An adaptive weighted multi-sensor information fusion algorithm is then adopted to improve the utilization of multi-sensor information and to obtain accurate results. Finally, the parameters of the system are calibrated and experiments are simulated; the experimental results show that the positioning system is feasible and effective.
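The adaptive weighted fusion idea can be sketched with the classical inverse-variance weighting rule, where each sensor's weight is proportional to the reciprocal of its measurement variance. The numeric depth values and variances below are illustrative, not from the paper's experiments.

```python
def adaptive_weighted_fusion(estimates, variances):
    """Variance-weighted fusion: w_i = (1/var_i) / sum_j (1/var_j).

    The fused variance, 1 / sum_j (1/var_j), is never larger than the
    best single sensor's variance, so fusion cannot hurt precision.
    """
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    fused = sum(x * wi for x, wi in zip(estimates, inv)) / s
    return fused, 1.0 / s

# Binocular vision (poor along the optical axis) plus laser ranging
# (precise): illustrative depth estimates in meters.
z_vision, var_vision = 10.3, 0.25
z_laser,  var_laser  = 10.0, 0.01
z, var = adaptive_weighted_fusion([z_vision, z_laser], [var_vision, var_laser])
```

The fused estimate is pulled strongly toward the laser reading, and the fused variance drops below that of either sensor alone.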

