Multi-view 3D circular target reconstruction with uncertainty analysis

Author(s):  
B. Soheilian ◽  
M. Brédif

The paper presents an algorithm for the reconstruction of a 3D circle from its apparitions in n images. Camera poses are assumed to be known only up to some uncertainty; they are treated as observations and are refined during the reconstruction process. First, the circle's apparition is estimated in each individual image from a set of 2D points using constrained optimization. The uncertainty of the 2D points is propagated through the 2D ellipse estimation, yielding a covariance matrix of the ellipse parameters. In the 3D reconstruction step, the ellipse and camera pose parameters are treated as observations with known covariances. A minimal parametrization of the 3D circle makes it possible to model the projection of the circle into the image without any additional constraint. The reconstruction is performed by minimizing the norm of the observation residual vector in a non-linear Gauss-Helmert model. The output consists of the parameters of the corresponding 3D circle and their covariances. Results are presented on simulated data.
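The authors' constrained ellipse estimator and the Gauss-Helmert adjustment are not reproduced here. As a minimal NumPy sketch, the snippet below illustrates only the uncertainty-propagation step: point noise is pushed through an (assumed) algebraic conic fit by a first-order, numerically estimated Jacobian, giving a covariance matrix for the fitted parameters. The function names and the i.i.d. noise model are illustrative assumptions, not the method of the paper.

```python
import numpy as np

def fit_conic(points):
    """Algebraic conic fit: the smallest right singular vector of the design
    matrix gives (a, b, c, d, e, f) of a x^2 + b xy + c y^2 + d x + e y + f = 0
    (unit-norm solution)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A)
    theta = vt[-1]
    return theta / np.linalg.norm(theta)

def propagate_covariance(points, sigma_pt, fit=fit_conic, eps=1e-6):
    """First-order propagation of i.i.d. point noise into the fitted parameters:
    Sigma_theta ~ J Sigma_points J^T, with the Jacobian J estimated by central
    differences on the point coordinates."""
    theta0 = fit(points)
    n = points.size
    J = np.zeros((theta0.size, n))
    flat = points.ravel()
    for i in range(n):
        plus, minus = flat.copy(), flat.copy()
        plus[i] += eps
        minus[i] -= eps
        tp = fit(plus.reshape(points.shape))
        tm = fit(minus.reshape(points.shape))
        # resolve the sign ambiguity of the SVD solution before differencing
        if tp @ theta0 < 0:
            tp = -tp
        if tm @ theta0 < 0:
            tm = -tm
        J[:, i] = (tp - tm) / (2 * eps)
    sigma_points = (sigma_pt ** 2) * np.eye(n)
    return theta0, J @ sigma_points @ J.T

# usage: noisy samples on a unit circle with 0.01-unit point noise
t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * np.random.randn(30, 2)
theta, cov_theta = propagate_covariance(pts, sigma_pt=0.01)
```

In the paper, an ellipse-parameter covariance of this kind, together with the camera pose covariances, supplies the observation weights of the Gauss-Helmert adjustment.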

Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2-4 nm is becoming a relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this situation possible. However, all of these 3D reconstruction processes are quite computer intensive, and the medium-term future holds proposals that will demand even greater computing power. Up to now, all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer the potential of order-of-magnitude increases in computing power and should therefore be considered for the most computing-intensive tasks.

We have studied both shared-memory architectures, such as the BBN Butterfly, and local-memory architectures, mainly hypercubes implemented on transputers, for which we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction of non-crystalline specimens (“single particles”) using the so-called Random Conical Tilt Series Method. We start from a pair of images of the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationship between the tilted and untilted fields (this step is currently accomplished by interactively marking a few pairs of corresponding features in the two fields). From here on, the 3D reconstruction process may be run automatically.
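The matrix describing the geometrical relationship between the tilted and untilted fields is established from a few interactively marked pairs of corresponding features. The NumPy sketch below shows a generic least-squares affine fit from such correspondences; it illustrates that single registration step only and is not the authors' Butterfly or transputer implementation.

```python
import numpy as np

def estimate_affine(untilted_pts, tilted_pts):
    """Least-squares affine transform mapping untilted-field coordinates to
    tilted-field coordinates from hand-marked correspondences.
    Returns a 2x3 matrix M such that (u, v) ~ M @ (x, y, 1)."""
    n = len(untilted_pts)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(untilted_pts, tilted_pts)):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = u, v
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m.reshape(2, 3)

# usage: three or more corresponding features marked in the two fields
untilted = [(10.0, 12.0), (55.0, 20.0), (30.0, 48.0), (70.0, 60.0)]
tilted = [(12.5, 11.0), (58.1, 17.5), (33.4, 44.9), (74.0, 55.2)]
M = estimate_affine(untilted, tilted)
```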


2010 ◽  
Vol 1 (1) ◽  
pp. 81 ◽  
Author(s):  
Rudy Ercek ◽  
Didier Viviers ◽  
Nadine Warzée

The city of Itanos is situated in the north-east of Crete. Between 1994 and 2005, the French School of Archaeology at Athens (EfA) and the Center for Mediterranean Studies in Rethymnon carried out excavation campaigns during which a necropolis and an Archaic building were explored by a team of the CReA. Close collaboration between archaeologists, engineers and computer graphic designers allowed the 3D reconstruction of these remains; the archaeologists were able to verify their hypotheses directly during the reconstruction process. In the summers of 2007 and 2008, a 3D digitization of Itanos was carried out in order to place the 3D reconstructions in the actual landscape.


2002 ◽  
Author(s):  
BART G VAN BLOEMEN WAANDERS ◽  
ROSCOE A BARTLETT ◽  
KEVIN R LONG ◽  
PAUL T BOGGS ◽  
ANDREW G SALINGER

Author(s):  
Suryaefiza Karjanto ◽  
Norazan Mohamed Ramli ◽  
Nor Azura Md Ghani

<p class="lead">The relationship between genes in gene set analysis in microarray data is analyzed using Hotelling’s <em>T</em><sup>2</sup> but the test cannot be applied when the number of samples is larger than the number of variables which is uncommon in the microarray. Thus, in this study, we proposed shrinkage approaches to estimating the covariance matrix in Hotelling’s <em>T<sup>2</sup></em> particularly to cater high dimensionality problem in microarray data. Three shrinkage covariance methods were proposed in this study and are referred as Shrink A, Shrink B and Shrink C. The analysis of the three proposed shrinkage methods was compared with the Regularized Covariance Matrix Approach and Kong’s Principal Component Analysis. The performances of the proposed methods were assessed using several cases of simulated data sets. In many cases, the Shrink A method performed the best, followed by the Shrink C and RCMAT methods. In contrast, both the Shrink B and KPCA methods showed relatively poor results. The study contributes to an establishment of modified multivariate approach to differential gene expression analysis and expected to be applied in other areas with similar data characteristics.</p>


2020 ◽  
Vol 492 (4) ◽  
pp. 5023-5029 ◽  
Author(s):  
Niall Jeffrey ◽  
François Lanusse ◽  
Ofer Lahav ◽  
Jean-Luc Starck

We present the first reconstruction of dark matter maps from weak lensing observational data using deep learning. We train a convolutional neural network with a U-Net-based architecture on over 3.6 × 10⁵ simulated data realizations with non-Gaussian shape noise and with cosmological parameters varying over a broad prior distribution. We interpret our newly created Dark Energy Survey Science Verification (DES SV) map as an approximation of the posterior mean of the convergence given the observed shear, i.e. the mean of P(κ|γ). Our DeepMass method is substantially more accurate than existing mass-mapping methods. On a validation set of 8000 simulated DES SV data realizations, DeepMass improved the mean square error (MSE) by 11 per cent compared to Wiener filtering with a fixed power spectrum. With N-body simulated MICE mock data, we show that Wiener filtering, even with the optimal known power spectrum, still gives a worse MSE than our generalized method with no input cosmological parameters; we show that the improvement is driven by the non-linear structures in the convergence. With higher galaxy density in future weak lensing data unveiling more non-linear scales, it is likely that deep learning will be a leading approach for mass mapping with Euclid and LSST.
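The actual DeepMass architecture and training pipeline are not given here. The PyTorch sketch below (the framework choice is an assumption) is a deliberately shallow U-Net-style encoder-decoder mapping a two-channel shear map (γ₁, γ₂) to a convergence map κ; it illustrates only the skip-connection structure and why training with an MSE loss on simulated pairs approximates the posterior mean of κ given γ, since the conditional expectation minimizes the expected squared error.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style network: one downsampling level with a skip
    connection. Far shallower than the network used for DeepMass."""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())
        self.enc1 = block(2, 16)          # full-resolution features
        self.enc2 = block(16, 32)         # half-resolution features
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec1 = block(32 + 16, 16)    # decoder with skip connection
        self.out = nn.Conv2d(16, 1, 1)    # one-channel convergence map

    def forward(self, gamma):             # gamma: (batch, 2, H, W)
        e1 = self.enc1(gamma)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

# MSE training on (shear, convergence) pairs from simulations drives the
# network output toward E[kappa | gamma], i.e. a posterior-mean estimate
model = TinyUNet()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```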


2003 ◽  
Vol 56 (1) ◽  
pp. 79-88 ◽  
Author(s):  
Michael Moore ◽  
Jinling Wang

The main problems faced by a dynamic model within a Kalman filter occur when the system experiences unexpected dynamic conditions, a change in data acquisition rate, or non-linear dynamics. To minimize the errors produced by dynamic modelling in such conditions, an extended dynamic model is developed in this paper, and its usefulness is demonstrated by comparing the response of a Kalman filter to simulated data under the standard dynamic model and under the extended dynamic model. The results show that the proposed extended dynamic model is superior to a standard dynamic model, mainly because of its ability to adapt to a wider range of dynamic conditions, which in turn keeps the Kalman filter close to optimal and yields reliable positioning results.
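The extended dynamic model itself is not specified in the abstract; as a point of reference, the NumPy sketch below implements one predict/update cycle of a standard linear Kalman filter with a fixed dynamic model F and process noise Q, which is the part the paper's extended model generalizes so that it can adapt to changing dynamics and data rates.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a standard (linear) Kalman filter.
    x, P : prior state estimate and its covariance
    z    : new measurement
    F, Q : dynamic (process) model and process-noise covariance
    H, R : measurement model and measurement-noise covariance"""
    # prediction with the dynamic model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # measurement update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# usage: 1D constant-velocity model with position-only measurements
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([1.2]), F, Q, H, R)
```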

