Real-time 3D reconstruction method using massive multi-sensor data analysis and fusion

2019 ◽  
Vol 75 (6) ◽  
pp. 3229-3248 ◽  
Author(s):  
Seoungjae Cho ◽  
Kyungeun Cho

Author(s):  
Xiongfeng Peng ◽  
Liaoyuan Zeng ◽  
Wenyi Wang ◽  
Zhili Liu ◽  
Yifeng Yang ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Ziang Lei

3D reconstruction of animated images and facial animation are important research topics in computer graphics and related fields. Traditional 3D reconstruction techniques for animated images rely mainly on expensive 3D scanning equipment and extensive, time-consuming manual postprocessing, and they require the scanned subject to hold a fixed pose for a considerable period. In recent years, the growth of large-scale computing power, especially distributed computing, has made real-time and efficient solutions feasible. In this paper, we propose a 3D reconstruction method for multi-view animated images based on Poisson's equation. Calibration theory is used to calibrate the multi-view animated images and obtain the internal and external parameters of the camera calibration module. Feature points are then extracted from the animated image of each viewpoint with a corner detection operator, and matched and corrected with the least-median-of-squares method, completing the 3D reconstruction of the multi-view animated images. Experimental results show that the proposed method obtains 3D reconstructions of multi-view animation images quickly and accurately, with a degree of real-time performance and reliability.
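
The least-median-of-squares correction mentioned above stays robust even when many corner correspondences are wrong. The following is a minimal, hypothetical sketch (not the authors' implementation): it estimates a simple 2D translation between matched feature points by keeping the candidate model with the smallest *median* squared residual; the function name and the translation-only model are illustrative assumptions.

```python
import numpy as np

def lmeds_translation(src, dst, trials=200, seed=0):
    """Estimate a 2D translation between matched feature points with the
    least-median-of-squares criterion: the model minimizing the *median*
    squared residual tolerates up to ~50% outlier matches."""
    rng = np.random.default_rng(seed)
    best_t, best_med = None, np.inf
    for _ in range(trials):
        i = rng.integers(len(src))          # one match fully determines a translation
        t = dst[i] - src[i]
        r = np.sum((src + t - dst) ** 2, axis=1)
        med = np.median(r)
        if med < best_med:
            best_med, best_t = med, t
    # keep matches whose residual is small relative to the robust scale
    scale = 1.4826 * np.sqrt(best_med) + 1e-9
    inliers = np.sum((src + best_t - dst) ** 2, axis=1) < (2.5 * scale) ** 2
    return best_t, inliers
```

In a full pipeline the same selection rule is applied to richer models (e.g. a fundamental matrix between viewpoints); only the candidate-model sampling changes.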


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 64389-64405 ◽  
Author(s):  
Athar Khodabakhsh ◽  
Ismail Ari ◽  
Mustafa Bakir ◽  
Ali Ozer Ercan

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Panlong Gu ◽  
Fengyu Zhou ◽  
Dianguo Yu ◽  
Fang Wan ◽  
Wei Wang ◽  
...  

RGBD camera-based VSLAM (Visual Simultaneous Localization and Mapping) algorithms are commonly used for real-time robot mapping. However, because of the camera's measuring principle and its limited accuracy and range, such algorithms perform poorly in large, dynamic scenes with complex lighting: mapping accuracy degrades, the robot's position is easily lost, and computing costs are high. To address these issues, this paper proposes a new method for 3D indoor reconstruction that combines laser radar with an RGBD camera and builds on the Cartographer laser SLAM algorithm. The proposed method takes two steps. The first performs 3D reconstruction with the Cartographer algorithm and the RGBD camera: Cartographer computes the pose of the RGBD camera and generates a submap, then a real-time 3D point cloud from the RGBD camera is inserted into the submap, completing the real-time indoor reconstruction. The second step improves the quality of Cartographer's loop closure with visual loop closure in order to correct the generated map. Compared with traditional methods in large-scale indoor scenes, the proposed algorithm shows higher precision, faster speed, and stronger robustness, especially under complex lighting and in the presence of dynamic objects.
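
The key fusion step, inserting the RGBD point cloud into the submap at the pose estimated by the laser SLAM front end, can be sketched as follows. This is an illustrative assumption of what such an insertion might look like (a sparse voxel hit-count map keyed by integer indices), not the Cartographer API or the authors' code.

```python
import numpy as np

def insert_cloud(submap, points, pose, resolution=0.25):
    """Transform camera-frame points (N x 3) into the map frame with the
    4x4 homogeneous pose from the SLAM front end, then accumulate hit
    counts in a sparse voxel grid (dict: voxel index -> count)."""
    hom = np.hstack([points, np.ones((len(points), 1))])
    world = (pose @ hom.T).T[:, :3]
    idx = np.floor(world / resolution).astype(int)
    for key in map(tuple, idx):
        submap[key] = submap.get(key, 0) + 1
    return submap
```

Calling this once per camera frame with the latest pose accumulates the live 3D map; loop-closure corrections would later re-anchor whole submaps rather than individual voxels.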


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5909
Author(s):  
Qingyu Jia ◽  
Liang Chang ◽  
Baohua Qiang ◽  
Shihao Zhang ◽  
Wu Xie ◽  
...  

Real-time 3D reconstruction is a popular research direction in computer vision and has become a core technology in virtual reality, industrial automation, and mobile robot path planning. The field currently faces three main problems. First, cost: existing systems require several different sensors and are therefore inconvenient. Second, speed: reconstruction is slow, so an accurate 3D model cannot be built in real time. Third, accuracy: the reconstruction error is too large for accuracy-critical scenes. For this reason, we propose a real-time 3D reconstruction method based on monocular vision. First, a single RGB-D camera collects visual information in real time, and the YOLACT++ network identifies and segments it to extract the important parts. Second, we combine the three stages of depth recovery, depth optimization, and depth fusion into a deep-learning-based 3D position estimation method that jointly encodes the visual information; it reduces the error introduced by depth measurement, and accurate 3D point values for the segmented image can be obtained directly. Finally, we propose a limited outlier adjustment based on the distance to the cluster center to optimize the 3D point values obtained above, improving real-time reconstruction accuracy and yielding a 3D model of the object in real time. Experimental results show that the method needs only a single RGB-D camera, making it low-cost and convenient to use, while significantly improving the speed and accuracy of 3D reconstruction.
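
The final step above limits outliers by their distance to the cluster center. A minimal sketch of one plausible variant follows (the exact rule in the paper may differ): points beyond a robust radius are pulled back onto that radius instead of being discarded, so the point density of the segmented object is preserved.

```python
import numpy as np

def limit_outliers(points, max_dev=3.0):
    """Limited outlier adjustment: points farther than `max_dev` robust
    deviations (median distance) from the cluster centre are projected
    back onto that radius rather than deleted."""
    c = points.mean(axis=0)
    d = np.linalg.norm(points - c, axis=1)
    r = max_dev * (np.median(d) + 1e-12)
    out = points.copy()
    far = d > r
    out[far] = c + (points[far] - c) * (r / d[far])[:, None]
    return out
```

Using the median distance as the scale keeps the radius stable even when a few depth measurements are wildly wrong, which is exactly the failure mode of per-pixel depth recovery.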


2008 ◽  
Vol 20 (04) ◽  
pp. 205-218 ◽  
Author(s):  
Jyh-Fa Lee ◽  
Ming-Shium Hsieh ◽  
Chih-Wei Kuo ◽  
Ming-Dar Tsai ◽  
Ming Ma

This paper describes a three-dimensional reconstruction method that provides real-time visual responses for volume-based (tomographic-slice) surgery simulations. The proposed system uses dynamic data structures to record the tissue triangles obtained from 3D reconstruction. Each tissue triangle can be modified, and each structure can be deleted or allocated independently. Moreover, triangle reconstruction is optimized by deleting or adding vertices only for manipulated voxels, which are classified as erosion (the voxel changes from tissue to null) or generation (the voxel changes from null to tissue). By manipulating these structures, 3D reconstruction can therefore be performed locally, on the manipulated voxels only, achieving high efficiency without reconstructing tissue surfaces over the whole volume as general methods do. Three surgery simulation examples demonstrate that the proposed method provides time-critical visual responses even alongside other time-consuming computations such as volume manipulation and haptic interaction.
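
The erosion/generation classification that keeps reconstruction local can be sketched as below. This is a schematic assumption of the bookkeeping (hypothetical names, voxel grids as dicts from index to tissue flag), not the paper's actual data structures.

```python
def classify_voxels(before, after):
    """Classify each manipulated voxel as 'erosion' (tissue -> null) or
    'generation' (null -> tissue). Only the voxels returned here need
    their surface triangles deleted or re-added, so the reconstruction
    update stays local to the manipulated region."""
    changes = {}
    for key in set(before) | set(after):
        b, a = before.get(key, 0), after.get(key, 0)
        if b and not a:
            changes[key] = "erosion"
        elif a and not b:
            changes[key] = "generation"
    return changes
```

A simulation step would diff only the voxels touched by the surgical tool, then re-triangulate just those cells, which is what makes the visual response time-critical rather than volume-sized.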


Author(s):  
Ulrich H.P. Fischer ◽  
Sabrina Hoppstock ◽  
Peter Kußmann ◽  
Isabell Steuding

In industrialized countries, the oldest part of the population has been growing rapidly for many years; in the next few years in particular, the age cohort over 65 will increase significantly. This growth goes hand in hand with illness and other physical and cognitive limitations. To enable these people to remain in their own homes for as long as possible despite such restrictions, technologies are being used to create ambient assisted living applications. However, most of these systems are neither medically verified nor low-latency enough, for example, to help avoid falls. A promising approach to overcoming these problems is the new 5G network technology. Combined with a suitable sensor data analysis framework, the fast care project showed that a real-time situation picture of the patient, in the form of an avatar, can be generated. The sensor infrastructure records heart rate and breathing rate, analyzes gait, and measures temperature as well as the VOC content and humidity of the room air; an emergency button has also been integrated. A laboratory demonstrator showed that the infrastructure realizes real-time visualization of the sensor data over a heterogeneous network.


2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Sivadi Sivadi ◽  
Moorthy Moorthy ◽  
Vijender Solanki

Introduction: This article is the product of the research “Due to the increase in popularity of the Internet of Things (IoT), a huge amount of sensor data is being generated from various smart city applications”, developed at Pondicherry University in 2019. Problem: Acquiring and analyzing the huge amount of sensor-generated data effectively is a significant processing problem. Objective: To propose a novel framework for IoT sensor data analysis using a machine-learning-based improved Gaussian Mixture Model (GMM) on acquired real-time data. Methodology: Clustering-based GMM models are used to find density patterns on a daily or weekly basis according to user requirements; the ThingSpeak cloud platform is used for analysis and visualization. Results: The proposed mechanism was evaluated on real-time traffic data using accuracy, precision, recall, and F-score as measures. Conclusions: The results indicate that the proposed mechanism is efficient compared with state-of-the-art schemes. Originality: Applying GMM and the ThingSpeak cloud platform to analyze real-time IoT data is the first approach to finding traffic density patterns on busy roads. Restrictions: An application for mobile users to find optimal traffic routes based on density patterns still needs to be developed, and the security aspects of finding density patterns were not addressed.
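
The density-pattern step can be illustrated with a minimal 1-D Gaussian mixture fitted by EM to, say, hourly vehicle counts. This numpy-only sketch stands in for the paper's improved GMM; the quantile initialization and the two-component setup are illustrative assumptions.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=100):
    """Fit a k-component 1-D Gaussian mixture with plain EM.
    Returns (weights, means, stds) - e.g. low- vs high-traffic regimes."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means over the data
    sigma = np.full(k, x.std() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        # (the 1/sqrt(2*pi) constant cancels in the normalization)
        p = w * np.exp(-((x[:, None] - mu) ** 2) / (2 * sigma**2)) / sigma
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, stds from weighted samples
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return w, mu, sigma
```

Fitting one such model per weekday (or per hour-of-week bucket) yields the daily and weekly density patterns the framework visualizes through ThingSpeak.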


2017 ◽  
Vol 3 (2) ◽  
pp. 743-747
Author(s):  
Albert Hein ◽  
Florian Grützmacher ◽  
Christian Haubelt ◽  
Thomas Kirste

Abstract: The main target of fast care is the development of a real-time-capable sensor data analysis framework for intelligent assistive systems in the fields of Ambient Assisted Living, eHealth, telerehabilitation, and telecare. The aim is to provide a medically valid, integrated situation model based on a distributed, ad hoc connected, energy-efficient sensor infrastructure suitable for daily use. The integrated situation model, which combines physiological, cognitive, and kinematic information about the patient, is grounded in the intelligent fusion of heterogeneous sensor data at different levels. The model can serve as a tool for quickly identifying risks and hazards, and it enables medical assistance systems to intervene autonomously in real time and actively give telemedical feedback.


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2801
Author(s):  
Hasan Asy’ari Arief ◽  
Tomasz Wiktorski ◽  
Peter James Thomas

Real-time monitoring of multiphase fluid flows with distributed fibre optic sensing has the potential to play a major role in industrial flow measurement applications. One such application is the optimization of hydrocarbon production to maximize short-term income and prolong the operational lifetime of production wells and the reservoir. While the measurement technology itself is well understood and developed, a key remaining challenge is the establishment of robust data analysis tools capable of converting enormous data quantities into actionable process indicators in real time. This paper provides a comprehensive technical review of data analysis techniques for distributed fibre optic technologies, with a particular focus on characterizing fluid flow in pipes. The review encompasses classical methods, such as speed-of-sound estimation and the Joule-Thomson coefficient, as well as their data-driven machine learning counterparts, such as Convolutional Neural Network (CNN), Support Vector Machine (SVM), and Ensemble Kalman Filter (EnKF) algorithms. The study aims to help end-users establish reliable, robust, and accurate solutions that can be deployed in a timely and effective way, and to pave the way for future developments in the field.
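
As a concrete example of the classical family, speed-of-sound estimation often reduces to a cross-correlation time-delay measurement between two sensing channels along the fibre. The following is an illustrative sketch under simplifying assumptions (a clean delayed copy of the signal, an integer-sample lag), not a method taken from the review itself.

```python
import numpy as np

def speed_from_delay(ch_a, ch_b, spacing_m, fs_hz):
    """Estimate acoustic propagation speed from the cross-correlation lag
    between two fibre channels separated by `spacing_m` metres and
    sampled at `fs_hz` Hz. A positive lag means ch_b trails ch_a."""
    a = ch_a - ch_a.mean()
    b = ch_b - ch_b.mean()
    xc = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xc)) - (len(a) - 1)       # lag in samples
    if lag == 0:
        raise ValueError("no measurable delay between channels")
    return spacing_m * fs_hz / lag
```

Real distributed acoustic sensing data needs band-pass filtering and sub-sample lag interpolation before this step, but the delay-to-speed conversion is the same.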

