Modeling ECM fiber formation: structure information extracted by analysis of 2D and 3D image sets

Author(s):
Jun Wu
Sherry L. Voytik-Harbin
David L. Filmer
Christoph M. Hoffman
Bo Yuan
...

2014, Vol 75 (S 02)
Author(s):
Gerlig Widmann
P. Schullian
R. Hoermann
E. Gassner
H. Riechelmann
...

2019, Vol 11 (19), pp. 2243
Author(s):
Weiquan Liu
Cheng Wang
Xuesheng Bian
Shuting Chen
Wei Li
...

Establishing the spatial relationship between 2D images captured by real cameras and 3D models of the environment (2D and 3D space) is one way to achieve virtual–real registration for Augmented Reality (AR) in outdoor environments. In this paper, we propose to match the 2D images captured by real cameras against images rendered from a 3D image-based point cloud, thereby indirectly establishing the spatial relationship between 2D and 3D space. We refer to these two kinds of images as cross-domain images because their imaging mechanisms and characteristics differ substantially. Unlike real camera images, however, the images rendered from the 3D image-based point cloud are inevitably contaminated by distortion, blurred resolution, and occlusions, which makes matching them with handcrafted descriptors or existing feature-learning neural networks very challenging. We therefore first propose a novel end-to-end network, AE-GAN-Net, consisting of two AutoEncoders (AEs) with Generative Adversarial Network (GAN) embedding, to learn invariant feature descriptors for cross-domain image matching. Second, a domain-consistent loss function, which balances image content against the consistency of feature descriptors for cross-domain image pairs, is introduced to optimize AE-GAN-Net. AE-GAN-Net effectively captures domain-specific information and embeds it into the learned feature descriptors, making them robust against image distortion and variations in viewpoint, spatial resolution, rotation, and scale. Experimental results show that AE-GAN-Net achieves state-of-the-art performance for image patch retrieval on a cross-domain image patch dataset built from real camera images and images rendered from the 3D image-based point cloud. Finally, by evaluating virtual–real registration for AR on a campus using the cross-domain image matching results, we demonstrate the feasibility of applying the proposed registration approach to AR in outdoor environments.
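
The abstract does not include an implementation, but the domain-consistent loss it describes can be conveyed with a short sketch. The Python/PyTorch snippet below is an assumption-laden illustration, not the authors' code: the encoder/decoder sizes, the names PatchEncoder and domain_consistent_loss, and the loss weights are invented for this example, and the adversarial (GAN-embedding) term is omitted. It only shows how a loss might balance per-domain image content (reconstruction) against descriptor consistency for a matching cross-domain patch pair.

# Illustrative sketch only; architecture sizes, names, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """One AutoEncoder branch: encodes a 3x64x64 patch to a descriptor and reconstructs it."""
    def __init__(self, desc_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, desc_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(desc_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        desc = F.normalize(self.encoder(x), dim=1)  # unit-length descriptor
        recon = self.decoder(desc)
        return desc, recon

def domain_consistent_loss(real_patch, rendered_patch, real_ae, rendered_ae,
                           lambda_content=1.0, lambda_desc=1.0):
    """Balance per-domain image content against cross-domain descriptor consistency."""
    d_real, r_real = real_ae(real_patch)
    d_rend, r_rend = rendered_ae(rendered_patch)
    # Content term: each branch must reconstruct its own domain's appearance.
    content = F.l1_loss(r_real, real_patch) + F.l1_loss(r_rend, rendered_patch)
    # Consistency term: a matching cross-domain pair should share a descriptor.
    consistency = F.mse_loss(d_real, d_rend)
    return lambda_content * content + lambda_desc * consistency

In this sketch each domain keeps its own AutoEncoder, so domain-specific appearance is absorbed by the reconstruction term while the consistency term pulls the two descriptors of a matching pair together.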


Author(s):
Hao Zheng
Yizhe Zhang
Lin Yang
Peixian Liang
Zhuo Zhao
...

3D image segmentation plays an important role in biomedical image analysis. Many 2D and 3D deep learning models have achieved state-of-the-art segmentation performance on 3D biomedical image datasets. Yet 2D and 3D models each have their own strengths and weaknesses, and unifying them may yield more accurate results. In this paper, we propose a new ensemble learning framework for 3D biomedical image segmentation that combines the merits of 2D and 3D models. First, we develop a fully convolutional network (FCN) based meta-learner that learns how to improve the results of the 2D and 3D models (base-learners). Then, to minimize over-fitting of our sophisticated meta-learner, we devise a new training method that uses the results of the base-learners as multiple versions of "ground truth". Furthermore, since this meta-learner training scheme does not depend on manual annotation, it can exploit abundant unlabeled 3D image data to further improve the model. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our approach is effective under fully-supervised, semi-supervised, and transductive settings, and attains superior performance over state-of-the-art image segmentation methods.
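
For a concrete picture of the training scheme, the sketch below is a hypothetical illustration of one idea from the abstract: treating the base-learners' predictions as multiple versions of "ground truth" so the meta-learner can also be trained on unlabeled volumes. The names (train_meta_learner_step, base_learners, meta_learner) and the random pseudo-label sampling are assumptions for illustration; the paper's actual procedure may differ.

# Minimal sketch under assumed module interfaces; not the authors' code.
import random
import torch
import torch.nn.functional as F

def train_meta_learner_step(meta_learner, base_learners, volume, optimizer):
    """One training step on a (possibly unlabeled) 3D volume of shape (N, 1, D, H, W)."""
    with torch.no_grad():
        # Probability maps from the frozen 2D and 3D base-learners, each (N, C, D, H, W).
        base_preds = [bl(volume) for bl in base_learners]

    # The meta-learner sees the volume together with all base predictions
    # and learns to fuse/refine them.
    meta_input = torch.cat([volume] + base_preds, dim=1)
    refined = meta_learner(meta_input)

    # Use one randomly chosen base prediction as this step's pseudo ground truth,
    # i.e., the base-learners' outputs act as multiple versions of "ground truth",
    # so no manual annotation is needed and over-fitting is reduced.
    target = random.choice(base_preds).argmax(dim=1)
    loss = F.cross_entropy(refined, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()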


2020, Vol 480, pp. 229101
Author(s):
Hongyi Xu
Francois Usseglio-Viretta
Steven Kench
Samuel J. Cooper
Donal P. Finegan
