Image-based surface reconstruction in geomorphometry – merits, limits and developments

2016 ◽  
Vol 4 (2) ◽  
pp. 359-389 ◽  
Author(s):  
Anette Eltner ◽  
Andreas Kaiser ◽  
Carlos Castillo ◽  
Gilles Rock ◽  
Fabian Neugirg ◽  
...  

Abstract. Photogrammetry and geosciences have been closely linked since the late 19th century through the acquisition of high-quality 3-D data sets of the environment, but this capability was long restricted to a limited circle of remote sensing specialists because of the considerable cost of metric systems for the acquisition and treatment of airborne imagery. Today, a wide range of commercial and open-source software tools enables geoscientists and other non-expert users to generate 3-D and 4-D models of complex geomorphological features. In addition, recent rapid developments in unmanned aerial vehicle (UAV) technology allow for the flexible generation of high-quality aerial surveys and ortho-photography at relatively low cost. The increasing computing capabilities of the last decade, together with the development of high-performance digital sensors and important software innovations from the computer vision and visual perception research fields, have extended the rigorous processing of stereoscopic image data to the generation of 3-D point clouds from series of non-calibrated images. Structure-from-motion (SfM) workflows are based upon algorithms for the efficient and automatic orientation of large image sets without further data acquisition information, for example robust feature detectors such as the scale-invariant feature transform for 2-D imagery. Nevertheless, well-established fieldwork strategies, proper camera settings, ground control points and ground truth for understanding the different sources of error still need to become established in common scientific practice. This review intends not only to summarise the current state of the art on using SfM workflows in geomorphometry but also to give an overview of terms and fields of application. Furthermore, this article aims to quantify the accuracies and scales achieved so far, using different strategies, in order to evaluate possible stagnation of current developments and to identify key future challenges. It is our belief that lessons learned from earlier articles, scientific reports and book chapters, in particular the identification of common errors and "bad practices", may help in guiding the future use of SfM photogrammetry in geosciences.
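Once features have been matched and the cameras oriented, the core geometric step of any SfM workflow is triangulating 3-D points from corresponding 2-D observations. As an illustrative sketch (not from the article itself), the standard linear (DLT) triangulation of one point from two oriented views fits in a few lines of NumPy; the camera matrices and observations below are synthetic:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2-D image observations of the same point in each view
    Returns the 3-D point in Euclidean coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X, stacked into a 4x4 system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras: identity pose, and a unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])      # ground-truth point (homogeneous)
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]    # project into each view
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true[:3]))         # → True
```

With noise-free observations the linear solution is exact; real SfM pipelines follow this with a non-linear bundle adjustment over all points and cameras.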

2015 ◽  
Vol 3 (4) ◽  
pp. 1445-1508 ◽  
Author(s):  
A. Eltner ◽  
A. Kaiser ◽  
C. Castillo ◽  
G. Rock ◽  
F. Neugirg ◽  
...  

Abstract. Photogrammetry and geosciences have been closely linked since the late 19th century. Today, a wide range of commercial and open-source software tools enables non-expert users to obtain high-quality 3-D datasets of the environment, a capability formerly reserved for remote sensing experts, geodesists or owners of cost-intensive metric airborne imaging systems. Complex three-dimensional geomorphological features can be easily reconstructed from images captured with consumer-grade cameras. Furthermore, rapid developments in UAV technology allow for high-quality aerial surveying and orthophotography generation at relatively low cost. The increasing computing capacities of the last decade, together with the development of high-performance digital sensors and important software innovations from other fields of research (e.g. computer vision and visual perception), have extended the rigorous processing of stereoscopic image data to the generation of 3-D point clouds from series of non-calibrated images. Structure-from-motion methods offer algorithms, e.g. robust feature detectors such as the scale-invariant feature transform for 2-D imagery, which allow for the efficient and automatic orientation of large image sets without further data acquisition information. Nevertheless, correct fieldwork strategies, proper camera settings, ground control points and ground truth for understanding the different sources of error still need to become established in common scientific practice. This review intends not only to summarize the present state of published research on structure-from-motion photogrammetry applications in geomorphometry, but also to give an overview of terms and fields of application, to quantify the accuracies and scales achieved so far using different strategies, to evaluate possible stagnation of current developments and to identify key future challenges. It is our belief that the identification of common errors, "bad practices" and other valuable information in already published articles, scientific reports and book chapters may help in guiding the future use of SfM photogrammetry in geosciences.


Author(s):  
Caroline Bivik Stadler ◽  
Martin Lindvall ◽  
Claes Lundström ◽  
Anna Bodén ◽  
Karin Lindman ◽  
...  

Abstract Artificial intelligence (AI) holds much promise for enabling highly desired improvements in imaging diagnostics. One of the most limiting bottlenecks for the development of useful clinical-grade AI models is the lack of training data: a large number of cases is needed, and high-quality ground-truth annotation is a necessity. The aim of the project was to establish and describe the construction of a database with substantial amounts of detail-annotated oncology imaging data from pathology and radiology. A specific objective was to be proactive, that is, to support as-yet-undefined subsequent AI training across a wide range of tasks, such as detection, quantification, segmentation, and classification, which puts particular focus on the quality and generality of the annotations. The main outcome of the project was the database as such, with a collection of labeled image data from breast, ovary, skin, colon, skeleton, and liver. In addition, the effort served as an exploration of best practices for further scalability of high-quality image collections, and a main contribution of the study was the generic lessons learned regarding how to successfully organize efforts to construct medical imaging databases for AI training, summarized as eight guiding principles covering team, process, and execution aspects.


Author(s):  
S. Blaser ◽  
J. Meyer ◽  
S. Nebiker

Abstract. With this contribution, we describe and publish two high-quality street-level datasets captured with a portable high-performance Mobile Mapping System (MMS). The datasets will be freely available for scientific use. Both datasets, one from a city centre and one from a forest, represent area-wide street-level reality captures which can be used, e.g., for establishing cloud-based frameworks for infrastructure management as well as for smart city and forestry applications. The quality of these datasets has been thoroughly evaluated and demonstrated; for example, georeferencing accuracies in the centimetre range have been achieved using these datasets in combination with image-based georeferencing. Both high-quality multi-sensor street-level datasets are suitable for evaluating and improving methods for multiple tasks related to high-precision 3D reality capture and the creation of digital twins. Potential applications range from localization and georeferencing, dense image matching and 3D reconstruction to combined methods such as simultaneous localization and mapping and structure-from-motion, as well as classification and scene interpretation. Our dataset is available online at: https://www.fhnw.ch/habg/bimage-datasets


Micromachines ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 73
Author(s):  
Marina Garcia-Cardosa ◽  
Francisco-Javier Granados-Ortiz ◽  
Joaquín Ortega-Casanova

In recent years, additive manufacturing has gained importance in a wide range of research applications such as medicine, biotechnology, and engineering, and has become one of the most innovative and high-performance manufacturing technologies of the moment. This review aims to show and discuss the characteristics of the different existing additive manufacturing technologies for the construction of micromixers, which are devices used to mix two or more fluids at the microscale. The present manuscript discusses all the choices to be made throughout the printing life cycle of a micromixer in order to achieve a high-quality microdevice. Resolution, precision, materials, and price, amongst other relevant characteristics, are discussed and reviewed in detail for each printing technology. Key information, suggestions, and future prospects are provided for the manufacturing of micromixers based on the results of this review.


2017 ◽  
Vol 20 (4) ◽  
pp. 1151-1159 ◽  
Author(s):  
Folker Meyer ◽  
Saurabh Bagchi ◽  
Somali Chaterji ◽  
Wolfgang Gerlach ◽  
Ananth Grama ◽  
...  

Abstract As technologies change, MG-RAST is adapting. Newly available software is being included to improve accuracy and performance. As a computational service constantly running large-volume scientific workflows, MG-RAST is the right location to perform benchmarking and implement algorithmic or platform improvements, in many cases involving trade-offs between specificity, sensitivity and run-time cost. The work in [Glass EM, Dribinsky Y, Yilmaz P, et al. ISME J 2014;8:1–3] is an example; we use existing well-studied data sets as gold standards representing different environments and different technologies to evaluate any changes to the pipeline. Currently, we use well-understood data sets in MG-RAST as a platform for benchmarking. The use of artificial data sets for pipeline performance optimization has not added value, as these data sets do not present the same challenges as real-world data sets. In addition, the MG-RAST team welcomes suggestions for improvements to the workflow. We are currently working on versions 4.02 and 4.1, both of which contain significant input from the community and our partners; they will enable double barcoding, support stronger inferences through longer-read technologies, and increase throughput while maintaining sensitivity by using Diamond and SortMeRNA. On the technical platform side, the MG-RAST team intends to support the Common Workflow Language as a standard to specify bioinformatics workflows, both to facilitate development and to enable efficient high-performance implementation of the community's data analysis tasks.
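Benchmarking a pipeline change against a gold standard of this kind reduces, at its simplest, to counting agreement between the pipeline's predicted annotations and the known truth. A minimal Python sketch of the sensitivity/specificity trade-off the abstract refers to (the identifiers and data here are hypothetical, not MG-RAST's actual API):

```python
def benchmark(predicted, gold, universe):
    """Sensitivity and specificity of predicted annotations against a
    gold-standard set, over a fixed universe of candidate annotations."""
    predicted, gold, universe = set(predicted), set(gold), set(universe)
    tp = len(predicted & gold)                 # correctly called
    fn = len(gold - predicted)                 # missed
    fp = len(predicted - gold)                 # spuriously called
    tn = len(universe - gold - predicted)      # correctly left out
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical example: 10 candidate annotations, 4 truly present.
gold = {"g1", "g2", "g3", "g4"}
predicted = {"g1", "g2", "g3", "g9"}
universe = {f"g{i}" for i in range(1, 11)}
sens, spec = benchmark(predicted, gold, universe)
print(round(sens, 2), round(spec, 2))  # → 0.75 0.83
```

A change that raises sensitivity at the cost of specificity (or of run time) shows up directly in these two numbers, which is why well-understood real-world data sets matter more here than artificial ones.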


2017 ◽  
Author(s):  
Susan D Shenkin ◽  
Cyril Pernet ◽  
Thomas E Nichols ◽  
Jean-Baptiste Poline ◽  
...  

Abstract Brain imaging is now ubiquitous in clinical practice and research. The case for bringing together large amounts of image data from well-characterised healthy subjects and those with a range of common brain diseases across the life course is now compelling. This report follows a meeting of international experts from multiple disciplines, all interested in brain image biobanking. The meeting included neuroimaging experts (clinical and non-clinical), computer scientists, epidemiologists, clinicians, ethicists, and lawyers involved in creating brain image banks. The meeting followed a structured format to discuss current and emerging brain image banks; applications such as atlases; conceptual and statistical problems (e.g. defining ‘normality’); legal, ethical and technological issues (e.g. consents, potential for data linkage, data security, harmonisation, data storage and enabling of research data sharing). We summarise the lessons learned from the experiences of a wide range of individual image banks, and provide practical recommendations to enhance creation, use and reuse of neuroimaging data. Our aim is to maximise the benefit of the image data, provided voluntarily by research participants and funded by many organisations, for human health. Our ultimate vision is of a federated network of brain image biobanks accessible for large studies of brain structure and function.


2008 ◽  
Vol 2008 (1) ◽  
pp. 407-412 ◽  
Author(s):  
Hans V. Jensen ◽  
Jørn H. S. Andersen ◽  
Per S. Daling ◽  
Elisabeth Nøst

ABSTRACT Having introduced regular aerial surveillance in 1981 and near-real-time radar satellite detection services in 1992, Norway has gained substantial experience in multi-sensor oil spill remote sensing. Since 2001 NOFO has been a driving force in the development and utilization of ship-based sensors for short- to medium-range oil spill detection, supplementing airborne and satellite remote sensing. During the NOFO Oil On Water Exercise in 2006, two satellites, four aircraft, one helicopter and two ships carrying a wide range of sensors provided a unique opportunity to assess and compare remote sensing field data synchronized with ground-truth sampling from three MOB sampling boats. The sampling boats were equipped for measuring oil slick thickness and for physical-chemical characterization of the surface oil properties. A new vessel-based dispersant application system was field-tested, executing dispersant treatment of two oil slicks while supported by live infrared video transmitted to the vessel from a helicopter. The success of this experiment was documented by extensive monitoring and characterization of the surface oil and the dispersed oil plume during and after the dispersant treatment. This guiding technique, using live forward-looking IR video transmitted from helicopter and remote sensing aircraft, has since been practiced during an accidental oil spill on the Norwegian continental shelf. To utilize multiple remote sensors operationally from a response vessel, it is necessary to compare signatures from different sensors in near real time. This paper describes core elements of the remote sensing and ground-truth monitoring during oil on water exercises in recent years, the lessons learned, and how NOFO will continue developing remote sensing operations related to oil spill combating in reduced visibility and light conditions.


2021 ◽  
Vol 11 (4) ◽  
pp. 1464
Author(s):  
Chang Wook Seo ◽  
Yongduek Seo

There are various challenging issues in automating line art colorization. In this paper, we propose a GAN approach incorporating semantic segmentation image data. Our GAN-based method, named Seg2pix, can automatically generate high-quality colorized images, aiming at computerizing one of the most tedious and repetitive jobs performed by coloring workers in the webtoon industry. The network structure of Seg2pix is mostly a modification of the architecture of Pix2pix, a convolution-based generative adversarial network for image-to-image translation. Through this method, we can generate high-quality colorized images of a particular character with only a small amount of training data. Seg2pix is designed to reproduce a segmented image, which becomes the suggestion data for line art colorization. The segmented image is automatically generated by a generative network from a line art image and a segmentation ground truth. In the next step, the generative network creates a colorized image from the line art and the segmented image produced in the former step. To summarize, only one line art image is required for testing the generative model, while an original colorized image and a segmented image are additionally required as the ground truth for training the model. The segmented image and the colorized image are generated end-to-end, with both stages sharing the same loss functions. By using this method, we produce better qualitative results for automatic colorization of a particular character’s line art. This improvement can also be measured quantitatively by comparing Learned Perceptual Image Patch Similarity (LPIPS) scores. We believe this may help artists exercise their creative expertise mainly in the area where computerization is not yet capable.
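Since Seg2pix inherits its objective from Pix2pix, the generator is trained on an adversarial term combined with a weighted L1 reconstruction term against the ground-truth image (λ = 100 in the original Pix2pix paper). A toy NumPy sketch of that combined objective, as an illustration only (the arrays and the single discriminator score below are synthetic, not values from this paper):

```python
import numpy as np

def generator_loss(disc_fake, fake_img, target_img, lam=100.0):
    """Pix2pix-style generator objective: a non-saturating adversarial
    term plus a weighted L1 reconstruction term.

    disc_fake  : discriminator scores in (0, 1) for the generated image
    fake_img   : generated (colorized) image array
    target_img : ground-truth colorized image array
    """
    eps = 1e-12  # avoid log(0)
    adv = -np.mean(np.log(disc_fake + eps))       # push scores toward "real"
    l1 = np.mean(np.abs(target_img - fake_img))   # pixel-wise reconstruction
    return adv + lam * l1

# Toy 2x2 "images" and a single discriminator score.
fake = np.array([[0.2, 0.4], [0.6, 0.8]])
target = np.array([[0.25, 0.4], [0.6, 0.7]])
print(round(generator_loss(np.array([0.5]), fake, target), 4))  # → 4.4431
```

The large λ keeps the output close to the ground-truth colorization while the adversarial term sharpens it; in Seg2pix both the segmentation stage and the colorization stage are trained under the same kind of objective.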


Alloy Digest ◽  
1982 ◽  
Vol 31 (11) ◽  

Abstract PRESTO is a high-carbon tool steel containing 1.40% chromium and is moderately deep hardening. It is made to tool-steel-quality standards and was developed to meet the exacting demands of bearing manufacturers for a clean steel of uniform microstructure. It is used in a wide range of bearing applications including high-performance aircraft bearings and other high-load applications where high-quality steel is required. This datasheet provides information on composition, physical properties, hardness, and elasticity. It also includes information on corrosion resistance as well as forming, heat treating, machining, and surface treatment. Filing Code: TS-407. Producer or source: Carpenter.

