Scale Accuracy Evaluation of Image-Based 3D Reconstruction Strategies Using Laser Photogrammetry

2019 ◽  
Vol 11 (18) ◽  
pp. 2093 ◽  
Author(s):  
Klemen Istenič ◽  
Nuno Gracias ◽  
Aurélien Arnaubec ◽  
Javier Escartín ◽  
Rafael Garcia

Rapid developments in the field of underwater photogrammetry have given scientists the ability to produce accurate 3D models, which are now increasingly used in the representation and study of local areas of interest. This paper addresses the lack of systematic analysis of 3D reconstruction and navigation fusion strategies, as well as the associated error evaluation of models produced at larger scales in GPS-denied environments using a monocular camera (often in deep-sea scenarios). Based on our prior work on automatic scale estimation of SfM-based 3D models using laser scalers, an automatic scale accuracy framework is presented. The confidence level for each of the scale error estimates is independently assessed through the propagation of the uncertainties associated with image features and laser spot detections using a Monte Carlo simulation. The number of iterations used in the simulation was validated through the analysis of the behavior of the final estimate. To facilitate the detection and uncertainty estimation of even greatly attenuated laser beams, an automatic laser spot detection method was developed, whose main novelty is estimating the uncertainties based on the recovered characteristic shapes of laser spots with radially decreasing intensities. The effects of four different reconstruction strategies, resulting from the combinations of incremental/global SfM and the a priori/a posteriori use of navigation data, were analyzed using two distinct survey scenarios captured during the SUBSAINTES 2017 cruise (doi: 10.17600/17001000). The study demonstrates that surveys with multiple overlaps of nonsequential images result in a nearly identical solution regardless of the strategy (SfM or navigation fusion), while surveys with weakly connected, sequentially acquired images are prone to broad-scale deformation (doming effect) when navigation is not included in the optimization. Thus, scenarios with complex survey patterns benefit substantially from multiobjective BA navigation fusion. The errors in the models produced by the most appropriate strategy were estimated at around 1% in the central parts and always below 5% at the extremities. The effects of combining data from multiple surveys were also evaluated. The introduction of additional vectors in the optimization of multisurvey problems successfully accounted for offset changes present in the underwater USBL-based navigation data, and thus minimized the effect of contradicting navigation priors. Our results also illustrate the importance of collecting a multitude of evaluation data at different locations and moments during the survey.
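As an illustration of the uncertainty-propagation step described above, the following is a minimal Monte Carlo sketch: the distance between two hypothetical laser-spot 3D points in the unscaled model is compared with a known laser-beam separation, and Gaussian perturbations stand in for the image-feature and laser-spot detection uncertainties. The names, noise magnitudes, and simplified geometry are assumptions for illustration, not the paper's implementation.

```python
# A minimal, illustrative sketch of Monte Carlo propagation of detection
# uncertainties into a scale estimate. The geometry is deliberately simplified:
# two laser spots with a known physical separation are compared with the
# distance between their corresponding 3D model points. Noise levels and
# iteration counts are assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)

def scale_estimate(p_a, p_b, laser_spacing_m):
    """Scale factor mapping model units to metres from one laser-spot pair."""
    model_dist = np.linalg.norm(p_a - p_b)
    return laser_spacing_m / model_dist

def monte_carlo_scale(p_a, p_b, sigma_model, laser_spacing_m,
                      sigma_laser, n_iter=10_000):
    """Propagate 3D-point and laser-spacing uncertainties into the scale."""
    samples = np.empty(n_iter)
    for i in range(n_iter):
        pa = p_a + rng.normal(0.0, sigma_model, 3)   # model-point uncertainty
        pb = p_b + rng.normal(0.0, sigma_model, 3)
        spacing = laser_spacing_m + rng.normal(0.0, sigma_laser)
        samples[i] = scale_estimate(pa, pb, spacing)
    return samples.mean(), samples.std(ddof=1)

# Hypothetical model points (model units) and a 0.20 m laser-beam separation.
mean, std = monte_carlo_scale(np.array([0.0, 0.0, 0.0]),
                              np.array([0.8, 0.1, 0.0]),
                              sigma_model=0.005,
                              laser_spacing_m=0.20,
                              sigma_laser=0.001)
print(f"scale = {mean:.4f} +/- {std:.4f} m per model unit")
```

Consistent with the abstract's validation of the iteration count, the adequacy of `n_iter` can be checked by verifying that the returned mean and spread stabilize as the number of iterations grows.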

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Matthew D. Guay ◽  
Zeyad A. S. Emam ◽  
Adam B. Anderson ◽  
Maria A. Aronova ◽  
Irina D. Pokrovskaya ◽  
...  

Biologists who use electron microscopy (EM) images to build nanoscale 3D models of whole cells and their organelles have historically been limited to small numbers of cells and cellular features due to constraints in imaging and analysis. This has been a major factor limiting insight into the complex variability of cellular environments. Modern EM can produce gigavoxel image volumes containing large numbers of cells, but accurate manual segmentation of image features is slow and limits the creation of cell models. Segmentation algorithms based on convolutional neural networks can process large volumes quickly, but achieving EM task accuracy goals often challenges current techniques. Here, we define dense cellular segmentation as a multiclass semantic segmentation task for modeling cells and large numbers of their organelles, and give an example in human blood platelets. We present an algorithm using novel hybrid 2D–3D segmentation networks to produce dense cellular segmentations with accuracy levels that outperform baseline methods and approach those of human annotators. To our knowledge, this work represents the first published approach to automating the creation of cell models with this level of structural detail.
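The hybrid 2D–3D idea can be pictured with a minimal sketch (PyTorch is assumed; the module, layer sizes, and class count are illustrative, not the authors' published architecture): 2D convolutions extract in-plane features independently per slice, then 3D convolutions fuse context across slices before per-voxel classification.

```python
# A minimal sketch of a hybrid 2D-3D segmentation network: per-slice 2D
# convolutions, then 3D convolutions across slices. Layer sizes and the
# class count are illustrative assumptions.
import torch
import torch.nn as nn

class Hybrid2D3DSeg(nn.Module):
    def __init__(self, in_ch=1, mid_ch=16, n_classes=7):
        super().__init__()
        # 2D stage: applied independently to every z-slice of the volume.
        self.enc2d = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 3D stage: fuses features across neighbouring slices.
        self.dec3d = nn.Sequential(
            nn.Conv3d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(mid_ch, n_classes, 1),  # per-voxel class logits
        )

    def forward(self, x):                     # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        slices = x.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        feats = self.enc2d(slices)            # 2D features per slice
        feats = feats.reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)
        return self.dec3d(feats)              # (B, n_classes, D, H, W)

logits = Hybrid2D3DSeg()(torch.randn(1, 1, 8, 64, 64))
print(logits.shape)  # torch.Size([1, 7, 8, 64, 64])
```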


2021 ◽  
Vol 10 (11) ◽  
pp. 761
Author(s):  
Tengfei Yu ◽  
He Huang ◽  
Nana Jiang ◽  
Tri Dev Acharya

High-definition maps (HDMs) are an important component of autonomous driving (AD) systems. They provide AD systems with accurate a priori information, including lane lines and road signs. Reasonably assessing the accuracy of an HDM is therefore an important task. The current methods for relative accuracy evaluation of general maps in the field of mapping are not fully applicable to HDMs. In this study, a method based on point set alignment and resampling is used to evaluate the relative accuracy of lane lines, and experiments are conducted on real HDM data. The results show that the proposed method yields a more detailed and relevant assessment of lane-line relative accuracy than the traditional method. This has implications for the quality control of HDM production.
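A minimal sketch of the resampling step (NumPy assumed; names and data are illustrative, and the preliminary point-set alignment, e.g. a rigid fit, is omitted): both polylines are resampled uniformly in arc length, after which per-point offsets give a detailed accuracy profile along the lane line rather than a single aggregate figure.

```python
# A minimal sketch of comparing a measured lane line with a reference one:
# both polylines are resampled at uniform arc length and per-point offsets
# are computed. All names and sample data are illustrative assumptions.
import numpy as np

def resample(polyline, n):
    """Resample a 2D polyline to n points spaced uniformly in arc length."""
    seg = np.diff(polyline, axis=0)
    s = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    t = np.linspace(0.0, s[-1], n)
    x = np.interp(t, s, polyline[:, 0])
    y = np.interp(t, s, polyline[:, 1])
    return np.column_stack([x, y])

def lane_offsets(measured, reference, n=200):
    """Per-point offsets between resampled measured and reference lines."""
    m, r = resample(measured, n), resample(reference, n)
    return np.linalg.norm(m - r, axis=1)

measured  = np.array([[0.0, 0.02], [5.0, 0.05], [10.0, 0.01], [15.0, 0.04]])
reference = np.array([[0.0, 0.00], [7.5, 0.00], [15.0, 0.00]])
d = lane_offsets(measured, reference)
print(f"mean offset {d.mean():.3f} m, max offset {d.max():.3f} m")
```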


2020 ◽  
Vol 11 (23) ◽  
pp. 106
Author(s):  
Damiano Aiello ◽  
Cecilia Bolognesi

Can we preserve cultural heritage and, consequently, the memory of the past? To answer this question, one should look at the digital revolution that the world has gone through in recent decades and analyse the complex, dialectical relationship between cultural heritage and new technologies. Thanks to these, increasingly accurate reconstructions of archaeological sites and historical monuments are possible. The resulting digital replicas are fundamental to experiencing and understanding cultural heritage in innovative ways: they have complex and dynamic relationships with the original objects. This research paper highlights the importance and the scientific validity of digital replicas aimed at understanding, enhancing and protecting cultural heritage. The study focuses on the virtual reconstruction of the constructive phases, from the mid-15th century to date, of one of the most emblematic Gothic-Renaissance buildings in the city of Milan (Italy): the convent of Santa Maria delle Grazie, famous worldwide for hosting Leonardo da Vinci's Last Supper painting. This site proved to be an ideal case study because of its troubled and little-known history, which led to numerous changes over the centuries. Thanks to a methodological approach based on the analysis of documentary sources and three-dimensional (3D) modelling, it was possible to outline the chronological succession of the convent's transformations; the way in which these overlapped the pre-existing structures was described, starting from the harmonious and organic interventions of the Renaissance and ending with the inhomogeneous and incompatible additions of the 18th and 19th centuries. Finally, the research was completed by mapping the 3D models according to the sources used and their different levels of accuracy. The 3D models have thus become a valid tool for checking and verifying the reconstruction hypotheses.

Highlights:

- The study focused on the virtual reconstruction of the convent of Santa Maria delle Grazie, one of the most emblematic Gothic-Renaissance buildings in the city of Milan.
- By combining data from documentary sources, architectural treatises, period photos and digital survey, the main building phases of the convent, from the 15th century to date, were digitally reconstructed.
- The 3D models are enriched with information about the accuracy of the digital reconstruction, creating 3D databases that can be easily consulted and updated.


2019 ◽  
Vol 15 (3) ◽  
pp. 6-25 ◽  
Author(s):  
Krzysztof T. Konecki

I would like to present the possibility of broadening not only the traditional methodological and technical skills of the researcher and analyst, but also the intellectual capacity associated with combining data, categorizing, linking categories, and interpreting the causes and consequences of the emergence of certain social phenomena. Some methodologies, methods, and research techniques are more conducive to creative conceptual and interpretive solutions than others. I therefore describe the phenomenon of serendipity in such methodologies as grounded theory, ethnography, phenomenological research, and contemplative inquiry. The problem of intuition in qualitative research is also discussed, together with some suggestions on how to be creative in qualitative research. From this review of creativity in qualitative research we can derive the following conclusions. Creativity in qualitative research depends on the strength of a priori conceptualization and the rigidity of the adopted methods of research and analysis. If the methodology is more flexible (as in grounded theory methodology), the researcher can reach phenomena that he/she had not anticipated and that are still scantly explored in his/her field of expertise. The phenomenological and contemplative approaches allow the use of the investigator's feelings and experience as they appear in the studied phenomena, which usually does not happen in objectifying, positivistic research. The investigator may therefore consciously choose methodologies and approaches that foster creativity. Researchers can also improve their skills in creative thinking and action through methodical exercises (journal writing, writing poetry as a summary of the collected data, using art to represent the phenomenon, meditation, observation of bodily feelings, humor, etc.).


Author(s):  
Matteo Cristani ◽  
Roberta Cuel

In the current literature of knowledge management and artificial intelligence, several different approaches have been proposed to the problem of developing domain ontologies from scratch. All these approaches deal fundamentally with three problems: (1) providing a collection of general terms describing classes and relations to be employed in the description of the domain itself; (2) organizing the terms into a taxonomy of the classes by the ISA relation; and (3) expressing in an explicit way the constraints that make the ISA pairs meaningful. Though a number of such approaches can be found, no systematic analysis of them exists that can be used to understand their underlying motivation, their context of applicability, and their structure. In this paper, we provide a framework for analyzing the existing methodologies against a set of general criteria. In particular, we obtain a classification based upon the direction of ontology construction: bottom-up methodologies start with some description of the domain and derive a classification, while top-down ones start with an abstract view of the domain itself, which is given a priori. The resulting classification is useful not only for theoretical purposes but also in the practice of deployment of ontologies in Information Systems, since it provides a framework for choosing the right methodology to be applied in the specific context, depending also on the needs of the application itself.
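The three problems enumerated above can be pictured with a toy example (the domain, class names, and disjointness constraint are illustrative assumptions, not drawn from any methodology the paper surveys): a vocabulary of class terms, a taxonomy built from ISA pairs, and an explicit constraint that makes those pairs meaningful.

```python
# A toy sketch of the three ingredients of a domain ontology named in the
# abstract: (1) class terms, (2) an ISA taxonomy, (3) explicit constraints.
# The domain and the disjointness constraint are illustrative assumptions.
classes = {"Agent", "Person", "Organization", "Customer"}

# (2) Taxonomy: child -> parent under the ISA relation.
isa = {"Person": "Agent", "Organization": "Agent", "Customer": "Person"}

# (3) Constraint: some classes are declared mutually disjoint.
disjoint = {("Person", "Organization")}

def ancestors(cls):
    """All classes reachable from cls by following ISA edges upward."""
    out = []
    while cls in isa:
        cls = isa[cls]
        out.append(cls)
    return out

def check_disjointness():
    """No class may specialize two classes declared disjoint."""
    for cls in classes:
        up = set(ancestors(cls)) | {cls}
        for a, b in disjoint:
            assert not ({a, b} <= up), f"{cls} violates disjoint({a},{b})"

check_disjointness()
print("Customer ISA*", ancestors("Customer"))  # ['Person', 'Agent']
```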


2019 ◽  
Vol 11 (13) ◽  
pp. 1550 ◽  
Author(s):  
Tobias Koch ◽  
Marco Körner ◽  
Friedrich Fraundorfer

Small-scaled unmanned aerial vehicles (UAVs) emerge as ideal image acquisition platforms due to their high maneuverability, even in complex and tightly built environments. The acquired images can be utilized to generate high-quality 3D models using current multi-view stereo approaches. However, the quality of the resulting 3D model highly depends on the preceding flight plan, which still requires human expert knowledge, especially in complex urban and hazardous environments. For safe flight plans, practical considerations often define prohibited and restricted airspaces that the vehicle must not enter. We propose a 3D UAV path planning framework designed for detailed and complete small-scaled 3D reconstructions, considering the semantic properties of the environment and allowing for user-specified restrictions on the airspace. The generated trajectories account for the desired model resolution and the demands of a successful photogrammetric reconstruction. We exploit semantics from an initial flight to extract the target object and to define restricted and prohibited airspaces which have to be avoided during the path planning process, ensuring a safe and short UAV path while still aiming to maximize the object reconstruction quality. The path planning problem is formulated as an orienteering problem and solved via discrete optimization exploiting submodularity and photogrammetrically relevant heuristics. An evaluation of our method on a customized synthetic scene and on outdoor experiments suggests the real-world capability of our methodology by providing feasible, short and safe flight plans for the generation of detailed 3D reconstruction models.
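The core trick behind such orienteering-style formulations can be sketched with a greedy routine for submodular maximization: repeatedly pick the candidate view with the best marginal coverage gain per unit travel cost until a budget is exhausted. The candidate views, coverage sets, costs, and budget below are illustrative assumptions, not the paper's full formulation (which also encodes resolution demands and path constraints).

```python
# A minimal sketch of greedy submodular maximization for viewpoint selection
# under a travel budget. Coverage gain is submodular: adding a view helps
# less the more is already covered, which is what makes greedy effective.
def greedy_view_selection(views, budget):
    """views: {name: (covered_surface_ids, travel_cost)}."""
    covered, plan, spent = set(), [], 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (cov, cost) in views.items():
            if name in plan or spent + cost > budget:
                continue
            gain = len(cov - covered)          # marginal coverage gain
            ratio = gain / cost
            if ratio > best_ratio:
                best, best_ratio = name, ratio
        if best is None:
            return plan, covered, spent
        cov, cost = views[best]
        plan.append(best)
        covered |= cov
        spent += cost

views = {
    "v1": ({1, 2, 3}, 2.0),
    "v2": ({3, 4},    1.0),
    "v3": ({5, 6, 7}, 3.0),
    "v4": ({1, 5},    1.5),
}
plan, covered, spent = greedy_view_selection(views, budget=5.0)
print(plan, covered, f"cost={spent}")  # ['v2', 'v4', 'v1'] ... cost=4.5
```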


2008 ◽  
Vol 2008 ◽  
pp. 1-10 ◽  
Author(s):  
G. Bartoli ◽  
G. Menegaz ◽  
M. Lisi ◽  
G. Di Stolfo ◽  
S. Dragoni ◽  
...  

We present an end-to-end system for the automatic measurement of flow-mediated dilation (FMD) and intima-media thickness (IMT) for the assessment of arterial function. The video sequences are acquired from a B-mode echographic scanner. A spline model (deformable template) is fitted to the data to detect the artery boundaries and track them along the video sequence. A priori knowledge about the image features and content is exploited. Preprocessing is performed to improve both the visual quality of video frames for visual inspection and the performance of the segmentation algorithm, without affecting the accuracy of the measurements. The system allows real-time processing as well as a high level of interactivity with the user. This is achieved through a graphical user interface (GUI) that enables the cardiologist to supervise the whole process and to reset the contour extraction at any point in time, if necessary. The system was validated, and the accuracy, reproducibility, and repeatability of the measurements were assessed with extensive in vivo experiments. Together with its user friendliness, low cost, and robustness, this makes the system suitable for both research and daily clinical use.
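The measurement idea can be pictured with a minimal sketch (SciPy assumed; the synthetic wall data, smoothing values, and diameter definition are illustrative assumptions, not the paper's deformable-template tracker): smooth spline curves are fitted to the detected near and far artery-wall points in each frame, the mean wall separation gives the diameter, and FMD is the percent diameter increase over baseline.

```python
# A minimal sketch: spline fits to two wall boundaries, mean separation as
# diameter, FMD as percent increase over baseline. Data are synthetic.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 40, 80)                     # positions along the vessel (mm)

def diameter(near_pts, far_pts):
    """Mean separation between spline fits of the two wall boundaries."""
    near = UnivariateSpline(x, near_pts, s=5.0)
    far = UnivariateSpline(x, far_pts, s=5.0)
    return float(np.mean(far(x) - near(x)))

def walls(d):
    """Synthetic noisy wall detections for a vessel of diameter d (mm)."""
    return (10 + rng.normal(0, 0.05, x.size),
            10 + d + rng.normal(0, 0.05, x.size))

baseline = diameter(*walls(4.0))               # resting diameter
peak = diameter(*walls(4.3))                   # post-occlusion diameter
fmd = 100.0 * (peak - baseline) / baseline
print(f"baseline {baseline:.2f} mm, peak {peak:.2f} mm, FMD {fmd:.1f}%")
```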


2021 ◽  
Author(s):  
Vishal Gupta ◽  
Nathan Kallus

Managing large-scale systems often involves simultaneously solving thousands of unrelated stochastic optimization problems, each with limited data. Intuition suggests that one can decouple these unrelated problems and solve them separately without loss of generality. We propose a novel data-pooling algorithm called Shrunken-SAA that disproves this intuition. In particular, we prove that combining data across problems can outperform decoupling, even when there is no a priori structure linking the problems and data are drawn independently. Our approach does not require strong distributional assumptions and applies to constrained, possibly nonconvex, nonsmooth optimization problems such as vehicle-routing, economic lot-sizing, or facility location. We compare and contrast our results to a similar phenomenon in statistics (Stein’s phenomenon), highlighting unique features that arise in the optimization setting that are not present in estimation. We further prove that, as the number of problems grows large, Shrunken-SAA learns whether pooling can improve upon decoupling and the optimal amount to pool, even if the average amount of data per problem is fixed and bounded. Importantly, we present a simple intuition based on stability that clarifies when and why data pooling offers a benefit, elucidating this perhaps surprising phenomenon. This intuition further suggests that data pooling offers the most benefit when there are many problems, each of which has a small amount of relevant data. Finally, we demonstrate the practical benefits of data pooling using real data from a chain of retail drug stores in the context of inventory management. This paper was accepted by Chung Piaw Teo, Special Issue on Data-Driven Prescriptive Analytics.
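The data-pooling idea can be sketched on a toy newsvendor problem (the setting, critical ratio, and the held-out choice of the blend weight are illustrative assumptions, not the paper's algorithm, which selects the pooling amount from the data itself): each problem's empirical demand distribution is blended with the distribution pooled across all problems, and the per-problem decision is a quantile of the blend. A blend weight of zero recovers the fully decoupled sample-average approximation.

```python
# A toy sketch of data pooling across many small newsvendor problems: blend
# each problem's empirical distribution with the pooled one, then take the
# critical-ratio quantile of the blend as the order quantity.
import numpy as np

rng = np.random.default_rng(2)
K, n, crit = 100, 8, 0.8                       # problems, samples each, quantile
true_means = rng.uniform(20, 40, K)
data = np.array([rng.poisson(m, n) for m in true_means])   # (K, n) demands
pooled = data.ravel()

def weighted_quantile(values, weights, q):
    """q-quantile of a discrete distribution given by values/weights."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / w.sum()
    return v[np.searchsorted(cum, q)]

def shrunken_order(k, alpha):
    """crit-quantile of an alpha-blend of problem k's data and pooled data."""
    values = np.concatenate([data[k], pooled])
    weights = np.concatenate([np.full(n, (1 - alpha) / n),
                              np.full(pooled.size, alpha / pooled.size)])
    return weighted_quantile(values, weights, crit)

def avg_cost(alpha, n_eval=200):
    """Average newsvendor cost over all problems for a given blend weight."""
    total = 0.0
    for k in range(K):
        q = shrunken_order(k, alpha)
        demand = rng.poisson(true_means[k], n_eval)   # held-out demand
        total += np.mean(crit * np.maximum(demand - q, 0)
                         + (1 - crit) * np.maximum(q - demand, 0))
    return total / K

for alpha in (0.0, 0.25, 0.5):                 # alpha=0.0 is decoupled SAA
    print(f"alpha={alpha:.2f}  avg cost {avg_cost(alpha):.3f}")
```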


TecnoLógicas ◽  
2017 ◽  
Vol 20 (39) ◽  
pp. 127-140
Author(s):  
Patrick Sandoz ◽  
July A. Galeano ◽  
Artur Zarzycki ◽  
Deivid Botina ◽  
Fabián Cortés-Mancera ◽  
...  

Vision is a convenient tool for position measurements. In this paper, we present several applications in which a reference pattern can be placed on the target to provide a priori knowledge of the image features and thus allow further optimization in software. Selecting pseudo-periodic patterns leads to high resolution in absolute phase measurements. This method is adapted to position encoding of live cell culture boxes. Our goal is to capture each biological image along with its absolute, highly accurate position relative to the culture box itself. Thus, it becomes straightforward to relocate an already-observed region of interest when a culture box is brought back to the microscope stage from the cell incubator where it was temporarily placed for cell culture. In order to evaluate the performance of this method, we tested it during a wound healing assay of human liver tumor-derived cells. In this case, the procedure enabled more accurate measurements of the wound healing rate than the usual method. It was also applied to the characterization of the in-plane vibration amplitude of a tapered probe of a shear force microscope. The vibration amplitude was measured through a quartz tuning fork with an attached pseudo-periodic pattern; nanometer vibration amplitude resolution is achieved by processing the pattern images. These images were recorded using a common 20× magnification lens.
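A minimal sketch of why a (pseudo-)periodic pattern yields high-resolution position readout: a lateral shift of a periodic signal appears as a phase shift of its fundamental Fourier component, which can be read out far below the pixel scale. The 1D signal, period, and noise level below are illustrative assumptions; the actual method also exploits the pattern's coding to make the measurement absolute.

```python
# A toy sketch of sub-pixel displacement readout from the phase of the
# fundamental Fourier component of a periodic intensity profile.
import numpy as np

rng = np.random.default_rng(3)
period_px = 16.0
x = np.arange(512)

def pattern(shift_px):
    """Noisy 1D periodic intensity profile displaced by shift_px."""
    return (np.cos(2 * np.pi * (x - shift_px) / period_px)
            + rng.normal(0, 0.05, x.size))

def phase_shift(sig_ref, sig):
    """Displacement (pixels) from the phase of the fundamental frequency."""
    k = int(round(x.size / period_px))         # index of the fundamental bin
    dphi = np.angle(np.fft.rfft(sig)[k]) - np.angle(np.fft.rfft(sig_ref)[k])
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return -dphi * period_px / (2 * np.pi)

ref = pattern(0.0)
print(f"estimated shift: {phase_shift(ref, pattern(0.37)):.3f} px (true 0.37)")
```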


2021 ◽  
Vol 291 ◽  
pp. 07008
Author(s):  
M.V. Sukharev

The article studies the issues associated with the emergence and widespread adoption of global digital trading platforms, systems new to the market economy, as well as their impact on economic inequality. The paper proposes a systematic analysis of the organization of these platforms, concluding that their main effect is a significant reduction in transaction costs (by one to two orders of magnitude) for searching for goods, making transactions, and paying for them. Statistics nevertheless show an increase in economic inequality, although a priori one could expect a reduction in inequality as a result of small and medium-sized businesses gaining access to global markets.

