Three-Dimensional Outdoor Analysis of Single Synthetic Building Structures by an Unmanned Flying Agent Using Monocular Vision

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7270
Author(s):  
Andrzej Bielecki ◽  
Piotr Śmigielski

An algorithm for the analysis and understanding of a 3D urban-type environment by an autonomous flying agent equipped only with monocular vision is presented. The algorithm is hierarchical and is based on a structural representation of the analyzed scene. First, the robot observes the scene from a high altitude to build a 2D representation of each single object and a graph representation of the 2D scene. The 3D representation of each object then arises from the robot’s actions, during which it projects the object’s solid onto different planes. The robot assigns the obtained representations to the corresponding vertex of the created graph. The algorithm was tested with an embodied robot operating on a real scene. The tests showed that a robot equipped with the algorithm was able not only to localize the predefined object, but also to perform safe, collision-free maneuvers close to the structures in the scene.
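As a rough illustration of the structural representation described above, the sketch below (ours, not the authors’ code) models the scene graph in Python: each building footprint from the high-altitude pass becomes a vertex, spatial adjacency becomes an edge, and the projections gathered during close passes accumulate on the corresponding vertex.

```python
# Minimal sketch of a hierarchical scene representation: 2D footprints
# as graph vertices, adjacency as edges, per-object projections attached.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    footprint: list                                   # 2D polygon from the high-altitude view
    projections: dict = field(default_factory=dict)   # plane id -> projected 2D outline

class SceneGraph:
    def __init__(self):
        self.vertices = {}     # object id -> SceneObject
        self.edges = set()     # pairs of ids for spatially adjacent objects

    def add_object(self, oid, footprint):
        self.vertices[oid] = SceneObject(footprint)

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

    def attach_projection(self, oid, plane_id, outline):
        # The 3D representation accumulates as the agent projects the
        # object's solid onto successive viewing planes.
        self.vertices[oid].projections[plane_id] = outline
```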

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5909
Author(s):  
Qingyu Jia ◽  
Liang Chang ◽  
Baohua Qiang ◽  
Shihao Zhang ◽  
Wu Xie ◽  
...  

Real-time 3D reconstruction is one of the current popular research directions in computer vision, and it has become a core technology in virtual reality, industrial automation systems, and mobile robot path planning. Currently, there are three main problems in the real-time 3D reconstruction field. Firstly, it is expensive: it requires a variety of sensors, which also makes it less convenient. Secondly, reconstruction is slow, so the 3D model cannot be established accurately in real time. Thirdly, the reconstruction error is large and cannot meet the accuracy requirements of many scenes. For these reasons, we propose a real-time 3D reconstruction method based on monocular vision in this paper. Firstly, a single RGB-D camera is used to collect visual information in real time, and the YOLACT++ network is used to identify and segment this visual information so as to extract the important parts. Secondly, we combine the three stages of depth recovery, depth optimization, and depth fusion into a deep-learning-based three-dimensional position estimation method that jointly encodes the visual information. This reduces the depth error introduced by the depth measurement process, and accurate 3D point values for the segmented image can be obtained directly. Finally, we propose a limited outlier adjustment based on the distance to the cluster center to optimize the three-dimensional point values obtained above. This improves the real-time reconstruction accuracy and yields the three-dimensional model of the object in real time. Experimental results show that the method needs only a single RGB-D camera, making it low-cost and convenient to use while significantly improving the speed and accuracy of 3D reconstruction.
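The final optimization step lends itself to a short sketch. Below is a minimal interpretation (ours; the abstract does not give the paper’s exact rule or threshold) of a limited outlier adjustment based on distance from the cluster center.

```python
import numpy as np

def limit_outliers(points, k=2.5):
    """Sketch of a cluster-center distance filter: points farther than
    k times the median distance from the centroid are discarded before
    the 3D model is assembled. The threshold rule is an assumption.

    points: (N, 3) array of 3D point values for one segmented object.
    """
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    bound = k * np.median(dist)
    return points[dist <= bound]
```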


2021 ◽  
Vol 290 ◽  
pp. 02029
Author(s):  
ShiYu Zhang ◽  
YingHao Dong ◽  
Tingting Xie ◽  
Xinying Si ◽  
JinZe Li

In order to reproduce the grand appearance of Hanqing Stadium and restore its internal layout and structure as realistically as possible, a simulation system based on virtual reality technology was designed and developed for the internal structural layout of the stadium. Following the current development trend of virtual reality technology, a three-dimensional dynamic space model was established to restore the real conditions of Hanqing Stadium. Emerging technologies such as panoramic images, panoramic views, and stereo vision were integrated into the virtual reality system to enhance the sense of reality. 3ds Max was used to build the scene design model of Hanqing Stadium, the surrounding and internal scene models were reconstructed, and the real scene was restored to the experiencer at 1:1 scale. The use of technologies such as panoramic images and views enhances the realism and three-dimensionality of the picture.


2019 ◽  
Vol 12 (4) ◽  
pp. 1-33 ◽  
Author(s):  
Telmo Adão ◽  
Luís Pádua ◽  
David Narciso ◽  
Joaquim João Sousa ◽  
Luís Agrellos ◽  
...  

MixAR, a full-stack system that provides visualization of virtual reconstructions seamlessly integrated into the real scene (e.g., upon ruins) and freely explorable by visitors in situ, is presented in this article. In addition to supporting several tracking approaches in order to cope with a wide variety of environmental conditions, the MixAR system implements an extended-environment feature that gives visitors insight into surrounding points-of-interest for visitation during mixed reality experiences (rough positional tracking). A procedural modelling tool streamlines the production of augmentation models. Tests carried out with participants to assess comfort, satisfaction, and presence/immersion in an in-field MR experience, along with the respective results, are also presented. Ease of adapting to the experience, the desire to see the system in museums, and heightened curiosity and motivation were positive points in the evaluation. With regard to sickness and comfort, the low number of complaints seems satisfactory. Model illumination/re-lighting must be addressed in the future to improve user engagement with the experiences provided by the MixAR system.
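The extended-environment feature suggests a simple mechanism. The sketch below shows rough positional tracking against a point-of-interest list using GPS-style coordinates; this is our assumption for illustration, not a description of MixAR’s actual implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 positions."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_pois(visitor, pois, radius_m=500.0):
    """Return points-of-interest within radius_m of the visitor's rough
    position, nearest first. pois: list of (name, lat, lon) tuples."""
    hits = [(haversine_m(visitor[0], visitor[1], lat, lon), name)
            for name, lat, lon in pois]
    return sorted((d, n) for d, n in hits if d <= radius_m)
```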


i-Perception ◽  
2017 ◽  
Vol 8 (1) ◽  
pp. 204166951668608 ◽  
Author(s):  
Ling Xia ◽  
Sylvia C. Pont ◽  
Ingrid Heynderickx

Humans are able to estimate light field properties in a scene in the sense that they have expectations of how objects should appear inside it. Previously, we probed such expectations in a real scene by asking whether a “probe object” fitted the scene with regard to its lighting. But how well are observers able to interactively adjust the light properties on a “probe object” to match its surrounding real scene? Image ambiguities can result in perceptual interactions between light properties. Such interactions formed a major problem for the “readability” of the illumination direction and diffuseness on a matte, smooth, spherical probe. We found that light direction and diffuseness judgments using a rough sphere as probe were slightly more accurate than those using a smooth sphere, due to the three-dimensional (3D) texture. Here we extended the previous work by testing independent and simultaneous (i.e., the light field properties separated one by one or blended together) adjustments of light intensity, direction, and diffuseness using a rough probe. Independently inferred light intensities were close to the veridical values, while the simultaneously inferred light intensity interacted somewhat with the light direction and diffuseness. The independently inferred light directions showed no statistical difference from the simultaneously inferred directions. The light diffuseness inferences correlated with the veridical values but contracted around medium values. In summary, observers were able to adjust the basic light properties through both independent and simultaneous adjustments. The light intensity, direction, and diffuseness are well “readable” from our rough probe. Our method allows “tuning the light” (adjusting its spatial distribution) in interfaces for lighting design or perception research.
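A compact way to make the three probe parameters concrete is a shading model for the matte sphere. The sketch below assumes a Lambertian probe and treats diffuseness as the ambient fraction of the total light; this parameterization is our assumption, not necessarily the one used in the experiments.

```python
import numpy as np

def shade_probe(normals, intensity, light_dir, diffuseness):
    """Minimal Lambertian shading of the probe sphere (our sketch).

    normals:     (N, 3) unit surface normals of the probe.
    light_dir:   unit vector pointing toward the light source.
    diffuseness: 0 = fully collimated light, 1 = fully ambient light.
    """
    directional = np.clip(normals @ light_dir, 0.0, None)  # n . l, clamped
    ambient = 1.0                                          # uniform irradiance term
    return intensity * ((1.0 - diffuseness) * directional
                        + diffuseness * ambient)
```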


Author(s):  
L. Bertini ◽  
B. Monelli ◽  
P. Neri ◽  
C. Santus ◽  
A. Guglielmo

This paper presents an automated procedure for experimentally finding the eigenmodes of a bladed wheel with a highly three-dimensional geometry. The stationary wheel is supported in free-free conditions, neglecting stress-stiffening effects. The single-input/multiple-output approach was followed. The vibration velocity was measured by means of a laser-Doppler vibrometer, and an anthropomorphic robot was used for accurate orientation and positioning of the measuring laser beam, allowing multiple measurements within a limited testing time. The vibration at corresponding points on each blade was measured, and the data were processed to find the initial (lower-frequency) modes. These mode shapes were then compared with finite element simulations, obtaining accurate frequency matching and the exact number of nodal diameters. Since the modes are cyclically harmonic, the complex formulation could be attractive, as it is not affected by the angular phase of the mode representation. Nevertheless, stationary rather than rotating modes were detected experimentally, so the real representation was necessary. The discrete Fourier transform of the blade displacements made it easy to find both the angular phase and the correct number of nodal diameters. A successful experimental-to-analytical MAC comparison was finally obtained with the real representation after introducing the proper angular phase for each mode.
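The DFT step admits a very small sketch. Assuming one displacement sample per blade at corresponding points, the dominant spatial harmonic gives the number of nodal diameters and its argument gives the angular phase; this is a minimal reading of the procedure, not the authors’ code.

```python
import numpy as np

def nodal_diameters(blade_disp):
    """Given one displacement sample per blade at corresponding points
    (real representation of a stationary mode), return the dominant
    number of nodal diameters and the angular phase of the mode shape."""
    spectrum = np.fft.rfft(np.asarray(blade_disp, dtype=float))
    k = int(np.argmax(np.abs(spectrum)))   # dominant spatial harmonic = nodal diameters
    phase = float(np.angle(spectrum[k]))   # angular phase of the mode
    return k, phase
```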


2021 ◽  
Vol 248 ◽  
pp. 02051
Author(s):  
Jiang Wen ◽  
Yang Jin Hu ◽  
Zhai Wei ◽  
Xu Guangbing ◽  
Xiang Xinxin ◽  
...  

In this paper, drawing on the application of 3D design technology in the construction of the 220 kV Miluoxi substation, the main aspects of applying 3D design technology to construction are summarized, including inter-discipline verification, collision inspection, real-scene modeling, 3D equipment installation details, 4D construction simulation, VR technology application, and mobile application solutions. The paper also summarizes the economic, managerial, and social benefits of the three-dimensional application, which can serve as a reference for subsequent projects.
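Of the listed applications, collision inspection is the most directly codable. A toy sketch using axis-aligned bounding boxes follows; this is a deliberate simplification, as production 3D design tools test the actual equipment geometry.

```python
def aabb_overlap(a, b):
    """Basic collision check between two equipment models represented
    by axis-aligned bounding boxes.

    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    """
    (a_min, a_max), (b_min, b_max) = a, b
    # Boxes collide only if their extents overlap on all three axes.
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))
```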


2018 ◽  
Author(s):  
Uri Korisky ◽  
Rony Hirschhorn ◽  
Liad Mudrik

Notice: a peer-reviewed version of this preprint has been published in Behavior Research Methods and is freely available at http://link.springer.com/article/10.3758/s13428-018-1162-0

Continuous Flash Suppression (CFS) is a popular method for suppressing visual stimuli from awareness for relatively long periods. Thus far, it has only been used for suppressing two-dimensional images presented on-screen. We present a novel variant of CFS, termed ‘real-life CFS’, with which the actual immediate surroundings of an observer – including three-dimensional, real-life objects – can be rendered unconscious. Real-life CFS uses augmented reality goggles to present subjects with CFS masks to their dominant eye, leaving their non-dominant eye exposed to the real world. In three experiments we demonstrate that real objects can indeed be suppressed from awareness using real-life CFS, and that suppression durations are comparable to those obtained using classic, on-screen CFS. We further provide example experimental code, which can be modified for future studies using ‘real-life CFS’. This opens the gate to new questions in the study of consciousness and its functions.
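For readers unfamiliar with CFS masks, the sketch below generates one ‘Mondrian’ frame of the kind flashed to the dominant eye. The frame size, rectangle count, and the ~10 Hz refresh noted in the docstring are typical values from the CFS literature, not parameters taken from this paper.

```python
import numpy as np

def mondrian_mask(h=480, w=640, n_rects=120, rng=None):
    """One CFS 'Mondrian' mask frame: many random coloured rectangles.
    In CFS, a fresh frame like this is flashed to the dominant eye
    several times per second (~10 Hz is common in the literature)."""
    rng = rng or np.random.default_rng()
    frame = np.zeros((h, w, 3), dtype=np.uint8)
    for _ in range(n_rects):
        x, y = rng.integers(0, w), rng.integers(0, h)
        rw, rh = rng.integers(20, w // 4), rng.integers(20, h // 4)
        frame[y:y + rh, x:x + rw] = rng.integers(0, 256, size=3)  # random RGB fill
    return frame
```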


2008 ◽  
Vol 8 (3) ◽  
pp. 8455-8490 ◽  
Author(s):  
K. W. Hoppel ◽  
N. L. Baker ◽  
L. Coy ◽  
S. D. Eckermann ◽  
J. P. McCormack ◽  
...  

Abstract. The forecast model and three-dimensional variational data assimilation components of the Navy Operational Global Atmospheric Prediction System (NOGAPS) have each been extended into the upper stratosphere and mesosphere to form an Advanced Level Physics High Altitude (ALPHA) version of NOGAPS extending to ~100 km. This NOGAPS-ALPHA NWP prototype is used to assimilate stratospheric and mesospheric temperature data from the Microwave Limb Sounder (MLS) and the Sounding of the Atmosphere using Broadband Radiometry (SABER) instruments. A 60-day analysis period in January and February, 2006, was chosen that includes a well-documented stratospheric sudden warming (SSW). SABER temperatures indicate that the SSW caused the polar winter stratopause at ~40 km to disappear, then reform at ~80 km altitude and slowly descend during February. The NOGAPS-ALPHA analysis reproduces this observed stratospheric and mesospheric temperature structure, as well as realistic evolution of zonal winds, residual velocities, and Eliassen-Palm fluxes that aid interpretation of the vertically deep circulation and eddy flux anomalies that developed in response to this wave-breaking event. The observation minus forecast (O-F) standard deviations for MLS and SABER are ~2 K in the mid-stratosphere and increase monotonically to about 6 K in the upper mesosphere. Increasing O-F standard deviations in the mesosphere are expected due to increasing instrument error and increasing geophysical variance at small spatial scales in the forecast model. In the mid/high latitude winter regions, 10-day forecast skill is improved throughout the upper stratosphere and mesosphere when the model is initialized using the high-altitude analysis based on assimilation of both SABER and MLS data.
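The O-F statistic quoted above is straightforward to compute. A minimal sketch with illustrative altitude bins follows; the bin edges and variable layout are our assumptions, not the study’s configuration.

```python
import numpy as np

def of_std_by_altitude(obs_T, fcst_T, alt_km, edges=np.arange(20, 101, 10)):
    """Standard deviation of observation-minus-forecast temperature (K)
    in altitude bins. obs_T, fcst_T in kelvin; alt_km in kilometres;
    bin edges here are illustrative."""
    of = np.asarray(obs_T) - np.asarray(fcst_T)
    alt = np.asarray(alt_km)
    out = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (alt >= lo) & (alt < hi)
        if sel.sum() > 1:                      # need >= 2 samples for a std
            out[(lo, hi)] = float(of[sel].std(ddof=1))
    return out
```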

