Dense cellular segmentation for EM using 2D–3D neural network ensembles

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Matthew D. Guay ◽  
Zeyad A. S. Emam ◽  
Adam B. Anderson ◽  
Maria A. Aronova ◽  
Irina D. Pokrovskaya ◽  
...  

Abstract: Biologists who use electron microscopy (EM) images to build nanoscale 3D models of whole cells and their organelles have historically been limited to small numbers of cells and cellular features due to constraints in imaging and analysis. This has been a major factor limiting insight into the complex variability of cellular environments. Modern EM can produce gigavoxel image volumes containing large numbers of cells, but accurate manual segmentation of image features is slow and limits the creation of cell models. Segmentation algorithms based on convolutional neural networks can process large volumes quickly, but achieving EM task accuracy goals often challenges current techniques. Here, we define dense cellular segmentation as a multiclass semantic segmentation task for modeling cells and large numbers of their organelles, and give an example in human blood platelets. We present an algorithm using novel hybrid 2D–3D segmentation networks to produce dense cellular segmentations with accuracy levels that outperform baseline methods and approach those of human annotators. To our knowledge, this work represents the first published approach to automating the creation of cell models with this level of structural detail.
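
The ensembling step can be illustrated with a minimal sketch (NumPy only; function names, shapes, and the averaging strategy are illustrative assumptions, not the authors' implementation): each member network emits per-voxel class probabilities, the maps are averaged across the ensemble, and a per-voxel argmax yields the dense multiclass label volume.

```python
import numpy as np

def ensemble_segment(prob_maps):
    """Average class-probability volumes from multiple models and take
    the per-voxel argmax to produce a dense multiclass segmentation.

    prob_maps: list of arrays shaped (C, D, H, W), one per model.
    Returns an integer label volume shaped (D, H, W).
    """
    mean_probs = np.mean(prob_maps, axis=0)   # (C, D, H, W)
    return np.argmax(mean_probs, axis=0)      # (D, H, W)

# Toy example: two "models", 3 classes, a 2x2x2 volume.
rng = np.random.default_rng(0)
m1 = rng.random((3, 2, 2, 2))
m2 = rng.random((3, 2, 2, 2))
labels = ensemble_segment([m1, m2])
print(labels.shape)  # (2, 2, 2)
```

Averaging probabilities before the argmax is one common ensembling choice; majority voting over hard labels is another.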

2021 ◽  
Vol 11 (10) ◽  
pp. 4554
Author(s):  
João F. Teixeira ◽  
Mariana Dias ◽  
Eva Batista ◽  
Joana Costa ◽  
Luís F. Teixeira ◽  
...  

The scarcity of balanced and annotated datasets has been a recurring problem in medical image analysis. Several researchers have tried to fill this gap by synthesizing datasets with generative adversarial networks (GANs). Breast magnetic resonance imaging (MRI) provides complex, texture-rich medical images with the same annotation shortage, for which, to the best of our knowledge, no previous work has attempted data synthesis. Within this context, our work addresses the problem of synthesizing breast MRI images from corresponding annotations and evaluates the impact of this data augmentation strategy on a semantic segmentation task. We explored variations of image-to-image translation using conditional GANs, namely fitting the generator’s architecture with residual blocks and experimenting with cycle consistency approaches. We studied the impact of these changes on visual verisimilitude and on how a U-Net segmentation model is affected by the use of synthetic data. We achieved sufficiently realistic-looking breast MRI images and maintained a stable segmentation score even when completely replacing the dataset with the synthetic set. Our results were promising, especially for the Pix2PixHD and Residual CycleGAN architectures.
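
The cycle consistency idea the study experiments with can be sketched as a loss term (a toy NumPy illustration; `G` and `F` stand in for the two generators and are assumptions for illustration, not the paper's code): translating an image to the other domain and back should recover the original.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should recover x, and
    symmetrically y -> F(y) -> G(F(y)) should recover y."""
    forward = np.abs(F(G(x)) - x).mean()
    backward = np.abs(G(F(y)) - y).mean()
    return forward + backward

# Toy check: identity generators incur zero cycle loss.
x = np.ones((4, 4))
y = np.zeros((4, 4))
identity = lambda a: a
print(cycle_consistency_loss(x, y, identity, identity))  # 0.0
```

In training, this term is added to the adversarial losses to discourage the generators from discarding content that cannot be mapped back.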


Author(s):  
G. Kontogianni ◽  
A. Georgopoulos

Digital technologies have significantly affected many fields of computer graphics, such as games and especially Serious Games. These games are commonly used for educational purposes in fields such as health care, military applications, education and government. Digital Cultural Heritage in particular is a scientific area where Serious Games are applied, and many such applications have lately appeared in the related literature. Realistic 3D textured models produced using different photogrammetric methods can be a useful tool for the creation of Serious Game applications, making the final result more realistic and closer to reality. The basic goal of this paper is to show how 3D textured models produced by photogrammetric methods can be used to develop a more realistic environment for a Serious Game. The project aims at the creation of an educational game for the Ancient Agora of Athens. The 3D models used vary not only in their production methods (i.e. time-of-flight laser scanning, Structure from Motion, virtual historical reconstruction, etc.) but also in their era, as some depict the monuments in their current state and others as they looked in the past. The Unity 3D® game development environment was used for creating the application, into which all models were imported in the same file format. Two diachronic virtual tours of the Athenian Agora were produced: the first illustrates the Agora as it is today, and the second as it was in the 2nd century A.D. Finally, the future perspective for the evolution of this game is presented, which includes the addition of questions for the user to answer, and an evaluation is scheduled to be performed at the end of the project.


ACTA IMEKO ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 98
Author(s):  
Valeria Croce ◽  
Gabriella Caroti ◽  
Andrea Piemonte ◽  
Marco Giorgio Bevilacqua

The digitization of Cultural Heritage paves the way for new approaches to the surveying and restitution of historical sites. With a view to the management of integrated programs of documentation and conservation, research is now focusing on the creation of information systems that link the digital representation of a building to semantic knowledge. With reference to the emblematic case study of the Calci Charterhouse, also known as Pisa Charterhouse, this contribution illustrates an approach for the transition from 3D survey information, derived from laser scanner and photogrammetric techniques, to semantically enriched 3D models. The proposed approach is based on the recognition (segmentation and classification) of elements in the original raw point cloud, and on the manual mapping of NURBS elements onto it. For this shape recognition process, reference to architectural treatises and vocabularies of classical architecture is a key step. The created building components are finally imported into an H-BIM environment, where they are enriched with semantic information related to historical knowledge, documentary sources and restoration activities.


Spatium ◽  
2016 ◽  
pp. 30-36 ◽  
Author(s):  
Petar Pejic ◽  
Sonja Krasic

Digital three-dimensional models of existing architectural structures are created for the digitization of archive documents, for the presentation of buildings or urban entities, or for conducting various analyses and tests. Traditional methods for creating 3D models of existing buildings involve manual measurement of their dimensions, photogrammetry, or laser scanning. Such approaches require considerable time for data acquisition or the use of specific instruments and equipment. The goal of this paper is to present a procedure for creating 3D models of existing structures using globally available web resources and free software packages on standard PCs. This considerably shortens the time needed to produce a digital three-dimensional model of a structure and eliminates the need for physical presence at the location. In addition, the precision of the method was tested and compared with results acquired in previous research.


Author(s):  
Roberto Cipolla ◽  
Kwan-Yee K. Wong

This chapter discusses profiles, or outlines, which are dominant features of images. Profiles can be extracted easily and reliably from images and can provide information on the shape and motion of an object. Classical techniques for motion estimation and model reconstruction are highly dependent on point and line correspondences, and hence cannot be applied directly to profiles, which are viewpoint dependent. The limitations of classical techniques paved the way for the creation of sets of algorithms specific to profiles. This chapter focuses on state-of-the-art algorithms for model reconstruction and motion estimation from profiles. These algorithms can reconstruct any kind of object, including smooth and textureless surfaces, and render convincing 3D models, reinforcing their practicality.


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5765 ◽  
Author(s):  
Seiya Ito ◽  
Naoshi Kaneko ◽  
Kazuhiko Sumi

This paper proposes a novel 3D representation, namely a latent 3D volume, for joint depth estimation and semantic segmentation. Most previous studies encoded an input scene (typically given as a 2D image) into a set of feature vectors arranged over a 2D plane. However, considering that the real world is three-dimensional, this 2D arrangement drops one dimension and may limit the capacity of the feature representation. In contrast, we examine the idea of arranging the feature vectors in 3D space rather than on a 2D plane. We refer to this 3D volumetric arrangement as a latent 3D volume. We show that the latent 3D volume is beneficial to the tasks of depth estimation and semantic segmentation because these tasks require an understanding of the 3D structure of the scene. Our network first constructs an initial 3D volume using image features and then generates the latent 3D volume by passing the initial volume through several 3D convolutional layers. We perform depth regression and semantic segmentation by projecting the latent 3D volume onto a 2D plane. The evaluation results show that our method outperforms previous approaches on the NYU Depth v2 dataset.
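
The lift-and-project idea can be sketched minimally (NumPy; in the actual model the step between lifting and projection is a stack of learned 3D convolutions, and all names and shapes here are illustrative assumptions):

```python
import numpy as np

def lift_to_volume(feat2d, depth_bins):
    """Arrange 2D features (C, H, W) along a new depth axis to form an
    initial 3D volume (C, D, H, W), here by simple replication."""
    return np.repeat(feat2d[:, np.newaxis, :, :], depth_bins, axis=1)

def project_to_plane(volume):
    """Project a latent volume (C, D, H, W) back onto a 2D plane
    (C, H, W), here by averaging over depth; task heads for depth
    regression and segmentation would follow this projection."""
    return volume.mean(axis=1)

feat = np.random.rand(8, 16, 16)          # C=8 feature channels
vol = lift_to_volume(feat, depth_bins=4)  # initial 3D volume
plane = project_to_plane(vol)             # back to 2D for the heads
print(vol.shape, plane.shape)  # (8, 4, 16, 16) (8, 16, 16)
```

The point of the intermediate volume is that 3D convolutions applied to it can reason about depth explicitly before the result is collapsed back to the image plane.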


2019 ◽  
Vol 9 (13) ◽  
pp. 2686 ◽  
Author(s):  
Jianming Zhang ◽  
Chaoquan Lu ◽  
Jin Wang ◽  
Lei Wang ◽  
Xiao-Guang Yue

In civil engineering, the stability of concrete is of great significance to the safety of people’s lives and property, so it is necessary to detect concrete damage effectively. In this paper, we treat crack detection on concrete surfaces as a semantic segmentation task that distinguishes background from crack at the pixel level. Inspired by Fully Convolutional Networks (FCN), we propose a fully convolutional network based on dilated convolutions for concrete crack detection, consisting of an encoder and a decoder. Specifically, we first use a residual network to extract feature maps from the input image, design dilated convolutions with different dilation rates to extract feature maps with different receptive fields, and fuse the features extracted from the multiple branches. Then, we use stacked deconvolutions to up-sample the fused feature maps. Finally, we use the softmax function to classify the feature maps at the pixel level. To verify the validity of the model, we adopt the commonly used evaluation indicators for semantic segmentation: Pixel Accuracy (PA), Mean Pixel Accuracy (MPA), Mean Intersection over Union (MIoU), and Frequency Weighted Intersection over Union (FWIoU). The experimental results show that the proposed model converges faster and generalizes better on the test set thanks to the dilated convolutions with different dilation rates and the multi-branch fusion strategy. Our model achieves a PA of 96.84%, MPA of 92.55%, MIoU of 86.05% and FWIoU of 94.22% on the test set, which is superior to other models.
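
The multi-branch dilated-convolution idea can be sketched in plain NumPy (a single-channel toy; the paper's network uses learned multi-channel filters, so the kernel, the summation fusion, and all names here are illustrative assumptions): spacing the kernel taps `rate` pixels apart enlarges the receptive field without adding parameters, and branches with different rates are then fused.

```python
import numpy as np

def dilated_conv2d(img, kernel, rate):
    """'Same'-padded single-channel 2D cross-correlation with an odd
    k x k kernel whose taps are spaced `rate` pixels apart."""
    k = kernel.shape[0]
    pad = rate * (k // 2)
    padded = np.pad(img, pad)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(k):          # accumulate one shifted tap at a time
        for j in range(k):
            out += kernel[i, j] * padded[i * rate:i * rate + h,
                                         j * rate:j * rate + w]
    return out

# Fuse two branches with different dilation rates by summation.
img = np.random.rand(8, 8)
kern = np.full((3, 3), 1.0 / 9.0)   # toy averaging filter
fused = dilated_conv2d(img, kern, 1) + dilated_conv2d(img, kern, 2)
print(fused.shape)  # (8, 8)
```

With rate 1 this reduces to an ordinary 3x3 convolution; rate 2 covers a 5x5 neighborhood with the same nine taps, which is the receptive-field enlargement the abstract describes.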


Author(s):  
Mahmud Dwi Sulistiyo ◽  
Yasutomo Kawanishi ◽  
Daisuke Deguchi ◽  
Ichiro Ide ◽  
Takatsugu Hirayama ◽  
...  
