GENERALIZATION OF THE DESARGUES THEOREM FOR SPARSE 3D RECONSTRUCTION

2009 ◽  
Vol 06 (01) ◽  
pp. 49-69
Author(s):  
VINCENT FREMONT ◽  
RYAD CHELLALI ◽  
JEAN-GUY FONTAINE

Visual perception for walking machines needs to handle more degrees of freedom than for wheeled robots: for humanoid, four-, or six-legged robots, camera motion is 6D rather than planar or 3D. Classical 3D reconstruction methods cannot be applied directly, because they require explicit knowledge of the sensor motion. In this paper, we propose an algorithm for 3D reconstruction of an unstructured environment using a single uncalibrated camera and no explicit motion estimation. Computer vision techniques are employed to obtain an incremental geometrical reconstruction of the environment, so that vision can serve as a sensor for robot control tasks such as navigation, obstacle avoidance, manipulation, and tracking, as well as for 3D model acquisition. The main contribution is that the offline 3D reconstruction problem is treated as a search for point trajectories through the video stream. The algorithm exploits the temporal structure of the image sequence to obtain an analytical expression for the geometric locus of the point trajectories across it. The approach is a generalization of the Desargues theorem applied to multiple views taken from nearby viewpoints. Experiments on both synthetic and real image sequences show the simplicity and efficiency of the proposed method. The method provides an alternative technical solution that is easy to use, flexible in the context of robotic applications, and able to significantly improve 3D estimation accuracy.
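
As background for the generalization discussed above, the classical Desargues configuration can be checked numerically: two triangles in perspective from a point have corresponding sides that meet in three collinear points. The following is a minimal NumPy sketch with illustrative coordinates, not the paper's algorithm:

    import numpy as np

    def line(p, q):
        # Line through two homogeneous 2D points (cross product).
        return np.cross(p, q)

    def meet(l, m):
        # Intersection point of two homogeneous lines (cross product).
        return np.cross(l, m)

    # Two triangles in perspective from the centre O: each primed vertex
    # lies on the ray from O through the corresponding unprimed vertex.
    O = np.array([0.0, 0.0, 1.0])
    A, B, C = np.array([1.0, 1.0, 1.0]), np.array([3.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])
    Ap, Bp, Cp = 2 * A - O, 3 * B - 2 * O, 1.5 * C - 0.5 * O

    # Intersections of corresponding sides.
    X = meet(line(A, B), line(Ap, Bp))
    Y = meet(line(B, C), line(Bp, Cp))
    Z = meet(line(C, A), line(Cp, Ap))

    # Desargues: X, Y, Z are collinear, so this determinant vanishes.
    print(np.linalg.det(np.stack([X, Y, Z])))  # ~0 up to rounding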

Author(s):  
Adriana Verschoor ◽  
Ronald Milligan ◽  
Suman Srivastava ◽  
Joachim Frank

We have studied the eukaryotic ribosome from two vertebrate species (rabbit reticulocyte and chick embryo ribosomes) in several different electron microscopic preparations (Fig. 1a-d), and we have applied image processing methods to two of the types of images. Reticulocyte ribosomes were examined as single-particle specimens both in negative stain (0.5% uranyl acetate, in a double-carbon preparation) and in frozen-hydrated preparation. In addition, chick embryo ribosomes in tetrameric and crystalline assemblies in frozen-hydrated preparation have been examined. 2D averaging, multivariate statistical analysis, and classification methods have been applied to the negatively stained single-particle micrographs and the frozen-hydrated tetramer micrographs to obtain statistically well-defined projection images of the ribosome (Fig. 2a,c). 3D reconstruction methods, namely the random conical reconstruction scheme and weighted back-projection, were applied to the negative-stain data, and several closely related reconstructions were obtained. The principal 3D reconstruction (Fig. 2b), which has a resolution of 3.7 nm according to the differential phase residual criterion, can be compared to the images of individual ribosomes in a 2D tetramer average (Fig. 2c) at a similar resolution, and good agreement is seen in the general morphology and in many of the characteristic features. Both data sets show the ribosome in roughly the same "view" or orientation with respect to the adsorptive surface in the electron microscopic preparation, as judged by the agreement in both the projected form and the distribution of characteristic density features. The negative-stain reconstruction reveals details of the ribosome morphology; the 2D frozen-hydrated average provides projection information on the native mass-density distribution within the structure. The 40S subunit appears to have an elongate core of higher density, while the 60S subunit shows a more complex pattern of dense features, comprising a rather globular core that locally extends close to the particle surface.
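
For orientation, weighted back-projection in its simplest form smears each projection back across the reconstruction volume along its recording direction, after weighting the projections in Fourier space to compensate for uneven angular sampling. The sketch below is a minimal 2D analogue of that principle only; the scheme above operates in 3D with a random-conical geometry:

    import numpy as np
    from scipy.ndimage import rotate

    def ramp_weight(sinogram):
        # r-weighting: multiply each 1D projection by |frequency| in Fourier space.
        freqs = np.abs(np.fft.fftfreq(sinogram.shape[1]))
        return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * freqs, axis=1))

    def back_project(sinogram, angles_deg):
        # Smear each weighted projection back across the image plane,
        # rotated to the angle at which it was recorded.
        n = sinogram.shape[1]
        recon = np.zeros((n, n))
        for proj, theta in zip(ramp_weight(sinogram), angles_deg):
            recon += rotate(np.tile(proj, (n, 1)), theta, reshape=False, order=1)
        return recon / len(angles_deg)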


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 391
Author(s):  
Luca Bigazzi ◽  
Stefano Gherardini ◽  
Giacomo Innocenti ◽  
Michele Basso

In this paper, solutions for precise maneuvering of small (e.g., 350-class) autonomous Unmanned Aerial Vehicles (UAVs) are designed and implemented through smart modifications of inexpensive mass-market technologies. The considered class of vehicles has a limited payload, and therefore only a limited number of sensors and computing devices can be installed on board. To make the prototype capable of moving autonomously along a fixed trajectory, a "cyber-pilot", able to replace the human operator on demand, has been implemented on an embedded control board; it overrides the commands through a custom hardware signal mixer. The drone localizes itself in the environment without ground assistance by using a camera, optionally mounted on a 3-Degrees-Of-Freedom (DOF) gimbal suspension. A computer vision system processes the video stream, detecting landmarks with known absolute position and orientation. This information is fused with accelerations from a 6-DOF Inertial Measurement Unit (IMU) to form a "virtual sensor" that provides refined estimates of the pose, absolute position, speed, and angular velocities of the drone. Given the importance of this sensor, several fusion strategies have been investigated. The resulting data are finally fed to a control algorithm featuring a number of uncoupled digital PID controllers, which drive the displacement from the desired trajectory to zero.
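
The uncoupled digital PID controllers mentioned above each act on one axis of the displacement error. A minimal discrete-time sketch follows; the gains, sampling time, and position values are illustrative placeholders, not those used on the prototype:

    class PID:
        # Minimal discrete PID controller for one axis.
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # One uncoupled controller per axis, driving the trajectory displacement to zero.
    pid_x = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.01)   # hypothetical gains
    desired_x, measured_x = 1.0, 0.85               # example positions (m)
    command_x = pid_x.update(error=desired_x - measured_x)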


Author(s):  
Gilles Simon

It is generally accepted that Jan van Eyck was unaware of perspective. However, an a-contrario analysis of the vanishing points in five of his paintings, produced between 1432 and 1439, reveals a recurring fishbone-like pattern that could only emerge from the use of a polyscopic perspective machine with two degrees of freedom. A 3D reconstruction of the Arnolfini Portrait consistent with this pattern suggests that van Eyck's device answered a questioning, at once aesthetic and scientific, of how to represent space as closely as possible to human vision. This discovery makes van Eyck the father of today's immersive and nomadic creative media such as augmented reality and synthetic holography.


2017 ◽  
Vol 79 ◽  
pp. 49-58 ◽  
Author(s):  
P. Rodríguez-Gonzálvez ◽  
M. Rodríguez-Martín ◽  
Luís F. Ramos ◽  
D. González-Aguilera

2021 ◽  
Vol 13 (21) ◽  
pp. 4434
Author(s):  
Chunhui Zhao ◽  
Chi Zhang ◽  
Yiming Yan ◽  
Nan Su

A novel framework for 3D reconstruction of buildings from a single off-nadir satellite image is proposed in this paper. Compared with traditional remote sensing methods that reconstruct from multiple images, recovering 3D information from a single image reduces the input-data demands of the reconstruction task. It addresses regions with scarce remote sensing resources, where the multiple images required by traditional reconstruction methods cannot be acquired. However, it is difficult to reconstruct a 3D model with a complete shape and accurate scale from a single image: geometric constraints alone are insufficient, since view angle, building size, and spatial resolution vary among remote sensing images. To solve this problem, the proposed reconstruction framework consists of two convolutional neural networks: the Scale-Occupancy-Network (Scale-ONet) and the model scale optimization network (Optim-Net). Reconstructing from the single off-nadir satellite image, Scale-ONet generates watertight mesh models with the exact shape and rough scale of the buildings, while Optim-Net reduces the scale error of these mesh models. Finally, the complete reconstructed scene is recovered by Model-Image matching. Owing to the well-designed networks, the framework is robust to input images with different view angles, building sizes, and spatial resolutions. Experimental results show that an ideal reconstruction accuracy can be obtained for both the model shape and the scale of buildings.
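
Scale-ONet's watertight mesh output belongs to the family of learned occupancy representations, in which a network maps a 3D query point plus an image feature to an occupancy probability and a mesh is extracted from the resulting field. The PyTorch sketch below shows only this generic occupancy-decoder pattern; it is not the paper's Scale-ONet architecture, and all layer sizes are assumptions:

    import torch
    import torch.nn as nn

    class OccupancyDecoder(nn.Module):
        # Generic occupancy-network-style decoder (illustrative, not Scale-ONet).
        def __init__(self, feat_dim=256, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, points, feat):
            # points: (B, N, 3) query coordinates; feat: (B, feat_dim) image code.
            feat = feat.unsqueeze(1).expand(-1, points.shape[1], -1)
            return torch.sigmoid(self.net(torch.cat([points, feat], dim=-1)))

    # A watertight mesh is then extracted from the predicted occupancy field,
    # e.g. by evaluating it on a voxel grid and running marching cubes.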


2007 ◽  
Vol 94 (8) ◽  
pp. 623-630 ◽  
Author(s):  
Hanns-Christian Gunga ◽  
Tim Suthau ◽  
Anke Bellmann ◽  
Andreas Friedrich ◽  
Thomas Schwanebeck ◽  
...  

Author(s):  
D. Chaikalis ◽  
G. Passalis ◽  
N. Sgouros ◽  
D. Maroulis ◽  
T. Theoharis

2018 ◽  
Vol 06 (02) ◽  
pp. E205-E210 ◽  
Author(s):  
Anastasios Koulaouzidis ◽  
Dimitris Iakovidis ◽  
Diana Yung ◽  
Evangelos Mazomenos ◽  
Federico Bianchi ◽  
...  

Abstract Background and study aims Capsule endoscopy (CE) is invaluable for minimally invasive endoscopy of the gastrointestinal tract; however, several technological limitations remain, including the lack of reliable lesion localization. We present an approach to 3D reconstruction and localization using visual information from 2D CE images. Patients and methods Colored thumbtacks were secured in rows to the internal wall of a LifeLike bowel model. A PillCam SB3 was calibrated and navigated linearly through the lumen by a high-precision robotic arm. The motion estimation algorithm used data from the 2D CE images in the video sequence (light falling on the object, the fraction of reflected light, and the surface geometry) to achieve 3D reconstruction of the bowel model at various frames. The ORB-SLAM technique was used for 3D reconstruction and CE localization within the reconstructed model; this algorithm compares pairs of matched points between images for reconstruction and localization. Results As the capsule moved through the model bowel, 42 to 66 video frames were obtained per pass. The mean absolute error in the estimated distance travelled by the CE was 4.1 ± 3.9 cm. Our algorithm was able to reconstruct the cylindrical shape of the model bowel with details of the attached thumbtacks. ORB-SLAM successfully reconstructed the bowel wall from simultaneous frames of the CE video. The "track" in the reconstruction corresponded well with the linear forward-backward movement of the capsule through the model lumen. Conclusion The reconstruction methods detailed above achieved good-quality reconstruction of the bowel model and localization of the capsule trajectory using information from the CE video and images alone.
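
ORB-SLAM's point-pair comparison starts from ORB feature matching between frames. The following is a minimal OpenCV sketch of that first step only (filenames are placeholders; the full SLAM pipeline adds pose estimation, triangulation, and loop closure):

    import cv2

    # Two consecutive capsule frames (placeholder filenames).
    img1 = cv2.imread("frame_041.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_042.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming distance suits ORB's binary descriptors; cross-check for symmetry.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Matched point pairs feed pose estimation and triangulation in SLAM.
    pts1 = [kp1[m.queryIdx].pt for m in matches]
    pts2 = [kp2[m.trainIdx].pt for m in matches]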

