Insight into 3D micro-CT data: exploring segmentation algorithms through performance metrics

2017 ◽  
Vol 24 (5) ◽  
pp. 1065-1077 ◽  
Author(s):  
Talita Perciano ◽  
Daniela Ushizima ◽  
Harinarayan Krishnan ◽  
Dilworth Parkinson ◽  
Natalie Larson ◽  
...  

Three-dimensional (3D) micro-tomography (µ-CT) has proven to be an important imaging modality in industrial and scientific domains. Understanding the properties of material structure and behavior has produced many scientific advances. An important component of the 3D µ-CT pipeline is image partitioning (or image segmentation), a step that is used to separate various phases or components in an image. Image partitioning schemes require specific rules for different scientific fields, but a common strategy consists of devising metrics to quantify performance and accuracy. The present article proposes a set of protocols to systematically analyze and compare the results of unsupervised classification methods used for segmentation of synchrotron-based data. The proposed dataflow for Materials Segmentation and Metrics (MSM) provides 3D micro-tomography image segmentation algorithms, such as statistical region merging (SRM), the k-means algorithm and parallel Markov random field (PMRF), while offering different metrics to evaluate segmentation quality, confidence and conformity with standards. Both experimental and synthetic data are assessed, illustrating quantitative results through the MSM dashboard, which can return sample information such as media porosity and permeability. The main contributions of this work are: (i) to deliver tools to improve material design and quality control; (ii) to provide datasets for benchmarking and reproducibility; (iii) to yield good practices in the absence of standards or ground truth for ceramic composite analysis.
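As a minimal illustration of the kind of quantity such a dashboard can report, the sketch below (not the authors' MSM code; function and variable names are hypothetical) clusters voxel intensities with k-means, one of the algorithms named above, and returns porosity as the fraction of voxels assigned to the lowest-intensity phase.

```python
# Hypothetical sketch: segment a 3D micro-CT volume by k-means on voxel
# intensities and report porosity, i.e. the fraction of voxels in the
# pore (lowest-intensity) phase. Not the MSM implementation.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_porosity(volume, n_phases=2, random_state=0):
    """Cluster voxel intensities into n_phases and return the pore fraction."""
    intensities = volume.reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=n_phases, n_init=10,
                    random_state=random_state).fit_predict(intensities)
    # Identify the pore phase as the cluster with the lowest mean intensity.
    means = [intensities[labels == k].mean() for k in range(n_phases)]
    pore_label = int(np.argmin(means))
    return (labels == pore_label).mean()

# Example on a synthetic volume (replace with a loaded micro-CT stack).
volume = np.random.rand(64, 64, 64)
print(f"Estimated porosity: {kmeans_porosity(volume):.3f}")
```

For a real micro-CT stack the array would be loaded from disk, and additional phases (pores, matrix, fibres) can be requested by raising n_phases.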

Author(s):  
Samuel A. Mihelic ◽  
William A. Sikora ◽  
Ahmed M. Hassan ◽  
Michael R. Williamson ◽  
Theresa A. Jones ◽  
...  

Recent advances in two-photon microscopy (2PM) have allowed large-scale imaging and analysis of cortical blood vessel networks in living mice. However, extracting a network graph and vector representations for vessels remains a bottleneck in many applications. Vascular vectorization is algorithmically difficult because blood vessels have many shapes and sizes, the samples are often unevenly illuminated, and large image volumes are required to achieve good statistical power. State-of-the-art, three-dimensional, vascular vectorization approaches require a segmented/binary image, relying on manual or supervised-machine annotation. Therefore, voxel-by-voxel image segmentation is biased by the human annotator/trainer. Furthermore, segmented images oftentimes require remedial morphological filtering before skeletonization/vectorization. To address these limitations, we propose a vectorization method to extract vascular objects directly from unsegmented images. The Segmentation-Less, Automated, Vascular Vectorization (SLAVV) source code in MATLAB is openly available on GitHub. This novel method uses simple models of vascular anatomy, efficient linear filtering, and low-complexity vector extraction algorithms to remove the image segmentation requirement, replacing it with manual or automated vector classification. SLAVV is demonstrated on three in vivo 2PM image volumes of microvascular networks (capillaries, arterioles and venules) in the mouse cortex. Vectorization performance is shown to be robust to the choice of plasma- or endothelial-labeled contrast, and processing costs are shown to scale with input image volume. Fully automated SLAVV performance is evaluated on various simulated 2PM images based on the large, [1.4, 0.9, 0.6] mm input image, and performance metrics show greater robustness to image quality than an intensity-based thresholding approach.
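The sketch below is not the SLAVV implementation (which is in MATLAB); under assumed parameter names, it only illustrates the general idea of efficient linear filtering for vessel detection: enhance bright, roughly tubular structures with multi-scale Laplacian-of-Gaussian filters and take local maxima as candidate vessel vertices.

```python
# Illustrative sketch of multi-scale linear filtering for candidate vessel
# vertices; parameter names (radii_vox, threshold_rel) are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.feature import peak_local_max

def vessel_vertex_candidates(volume, radii_vox=(1, 2, 4), threshold_rel=0.2):
    """Return (N, 3) voxel coordinates of candidate vessel centerline points."""
    volume = volume.astype(np.float32)
    response = np.zeros_like(volume)
    for r in radii_vox:
        # Negate the LoG so bright tubes give positive, scale-normalized
        # responses; keep the strongest response across scales.
        response = np.maximum(response, -(r ** 2) * gaussian_laplace(volume, sigma=r))
    return peak_local_max(response, min_distance=2, threshold_rel=threshold_rel)

volume = np.random.rand(32, 64, 64)  # stand-in for a 2PM image stack
print(vessel_vertex_candidates(volume).shape)
```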


Author(s):  
Benjamin Gröger ◽  
Daniel Köhler ◽  
Julian Vorderbrüggen ◽  
Juliane Troschitz ◽  
Robert Kupfer ◽  
...  

Recent developments in the automotive and aircraft industries towards multi-material design pose challenges for modern joining technologies due to the different mechanical properties and material compositions of materials such as composites and metals. Therefore, mechanical joining technologies like clinching are the focus of current research activities. For multi-material joints of metals and thermoplastic composites, thermally assisted clinching processes with advanced tool concepts are well developed. The material-specific properties of fibre-reinforced thermoplastics have a significant influence on the joining process and the resulting material structure in the joining zone. For this reason, it is important to investigate these influences in detail and to understand the phenomena occurring during the joining process. Additionally, this provides the basis for validating numerical simulations of such joining processes. In this paper, the material structure in a joint resulting from a thermally assisted clinching process is investigated. The joining partners are an aluminium sheet and a thermoplastic composite (organo sheet). Using computed tomography enables a three-dimensional investigation that allows a detailed analysis of the phenomena at different joining stages and in the material structure of the finished joint. Consequently, this study provides a more detailed understanding of the material behavior of thermoplastic composites during thermally assisted clinching.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Hidetoshi Urakubo ◽  
Torsten Bullmann ◽  
Yoshiyuki Kubota ◽  
Shigeyuki Oba ◽  
Shin Ishii

Recently, there has been rapid expansion in the field of micro-connectomics, which targets the three-dimensional (3D) reconstruction of neuronal networks from stacks of two-dimensional (2D) electron microscopy (EM) images. The spatial scale of the 3D reconstruction is increasing rapidly owing to deep convolutional neural networks (CNNs) that enable automated image segmentation. Several research teams have developed their own software pipelines for CNN-based segmentation. However, the complexity of such pipelines makes their use difficult even for computer experts and impossible for non-experts. In this study, we developed a new software program, called UNI-EM, for 2D and 3D CNN-based segmentation. UNI-EM is a software collection for CNN-based EM image segmentation, including ground truth generation, training, inference, postprocessing, proofreading, and visualization. UNI-EM incorporates a set of 2D CNNs, i.e., U-Net, ResNet, HighwayNet, and DenseNet. We further wrapped flood-filling networks (FFNs) as a representative 3D CNN-based neuron segmentation algorithm. These 2D and 3D CNNs are known to demonstrate state-of-the-art segmentation performance. We then provide two example workflows: mitochondria segmentation using a 2D CNN and neuron segmentation using FFNs. By following these example workflows, users can benefit from CNN-based segmentation without possessing knowledge of Python programming or CNN frameworks.
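As a rough sketch of what such a 2D CNN looks like (assuming PyTorch; UNI-EM bundles its own implementations of U-Net, ResNet, HighwayNet and DenseNet), the toy U-Net-style model below maps a single-channel EM patch to per-pixel foreground logits, e.g. for a mitochondria mask. All class and variable names are illustrative.

```python
# A tiny U-Net-style encoder-decoder for binary EM segmentation (illustrative).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 skip channels + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel foreground logit

    def forward(self, x):
        e1 = self.enc1(x)                # full-resolution features
        e2 = self.enc2(self.pool(e1))    # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

model = TinyUNet()
logits = model(torch.randn(1, 1, 128, 128))  # e.g. a 128x128 EM patch
print(logits.shape)                          # torch.Size([1, 1, 128, 128])
```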


Author(s):  
Pushpajit A. Khaire ◽  
Nileshsingh V. Thakur

Image segmentation remains a challenging, unsolved problem even after four decades of research. Research on image segmentation is currently conducted at three levels: the development of image segmentation methods, the evaluation of segmentation algorithms and their performance, and the study of these evaluation methods. Hundreds of techniques have been proposed for segmentation of natural images, noisy images, medical images, etc. Currently, most researchers evaluate segmentation algorithms using ground-truth evaluation on Berkeley Segmentation Database (BSD) images. In this paper an overview of various segmentation algorithms is discussed. The discussion is mainly based on the soft computing approaches used for segmentation of noise-free and noisy images and on the parameters used for evaluating these algorithms. Some of the techniques used are the Markov Random Field (MRF) model, neural networks, clustering, particle swarm optimization, the fuzzy logic approach, and different combinations of these soft computing techniques.
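As one concrete example of ground-truth evaluation of the kind performed on BSD images, the hedged sketch below scores a predicted label image against a reference partition with the adjusted Rand index, which does not require matching label IDs. The names are illustrative and not taken from a specific evaluation suite.

```python
# Score a segmentation against a ground-truth partition with the adjusted
# Rand index over per-pixel labels (label IDs need not correspond).
import numpy as np
from sklearn.metrics import adjusted_rand_score

def partition_score(predicted_labels, ground_truth_labels):
    """Compare two label images of identical shape; 1.0 means identical partitions."""
    return adjusted_rand_score(ground_truth_labels.ravel(), predicted_labels.ravel())

gt = np.zeros((100, 100), dtype=int); gt[:, 50:] = 1      # two-region ground truth
pred = np.zeros((100, 100), dtype=int); pred[:, 48:] = 1  # slightly shifted boundary
print(f"Adjusted Rand index: {partition_score(pred, gt):.3f}")
```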


F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 1098 ◽  
Author(s):  
Gerardo Chacón ◽  
Johel E. Rodríguez ◽  
Valmore Bermúdez ◽  
Miguel Vera ◽  
Juan Diego Hernández ◽  
...  

Background: Multi-slice computerized tomography (MSCT) is a medical imaging modality that has been used to determine the size and location of stomach cancer. Additionally, MSCT is considered the best modality for the staging of gastric cancer. One way to assess type 2 stomach cancer is by detecting the pathological structure with an image segmentation approach. Tumor segmentation of MSCT gastric cancer images enables the diagnosis of the disease condition, for a given patient, without using an invasive method such as surgical intervention. Methods: This approach consists of three stages. The initial stage, image enhancement, consists of a method for correcting non-homogeneities present in the background of MSCT images. Then, a segmentation stage using a clustering method is applied to obtain the adenocarcinoma morphology. In the third stage, the pathological region is reconstructed and then visualized with a three-dimensional (3-D) computer graphics procedure based on the marching cubes algorithm. In order to validate the segmentations, the Dice score is used as a metric for comparing the segmentations obtained using the proposed method against ground-truth volumes traced by a clinician. Results: A total of 8 datasets of diagnosed patients, from the cancer data collection of the Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD) project, are considered in this research. The volume of the type 2 stomach tumor is estimated from the 3-D shape computationally segmented from each dataset. These 3-D shapes are computationally reconstructed and then used to assess the macroscopic morphopathological features of this cancer. Conclusions: The segmentations obtained are useful for qualitatively and quantitatively assessing type 2 stomach cancer. In addition, this type of segmentation enables the development of computational models for planning virtual surgical procedures related to type 2 cancer.
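A minimal sketch of the two quantitative steps mentioned above, assuming NumPy/scikit-image and synthetic stand-in volumes rather than the authors' MSCT data: the Dice score between a segmented mask and a clinician-traced ground truth, and a marching-cubes surface for 3-D reconstruction and volume estimation.

```python
# Dice score between two binary volumes, plus a marching-cubes surface mesh.
import numpy as np
from skimage import measure

def dice(seg, gt):
    """Dice similarity coefficient between two binary volumes."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

# Synthetic stand-ins for a segmented tumor volume and its ground truth.
seg = np.zeros((64, 64, 64), dtype=bool); seg[20:40, 20:40, 20:40] = True
gt = np.zeros_like(seg); gt[22:42, 20:40, 20:40] = True
print(f"Dice = {dice(seg, gt):.3f}")

# Marching cubes gives a triangle mesh of the tumor surface; the foreground
# voxel count times the voxel volume (spacing product) estimates tumor volume.
verts, faces, normals, values = measure.marching_cubes(seg.astype(np.float32), level=0.5)
print(f"Surface mesh: {len(verts)} vertices, {len(faces)} faces")
```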


Author(s):  
Haohang Huang ◽  
Jiayi Luo ◽  
Maziar Moaveni ◽  
Erol Tutumluer ◽  
John M. Hart ◽  
...  

Riprap rock and large-sized aggregates have been used extensively in geotechnical and hydraulic engineering, where they provide erosion control, sediment control, and scour protection. The sustainable and reliable use of riprap materials demands efficient and accurate evaluation of their large particle sizes, shapes, and gradation information at both quarry production lines and construction sites. Traditional methods for assessing riprap geometric properties involve subjective visual inspection and time-consuming hand measurements, so comprehensive in-situ characterization of riprap materials remains challenging for practitioners and engineers. This paper presents an innovative approach for characterizing the volumetric properties of riprap by establishing a field imaging system associated with newly developed color image segmentation and three-dimensional (3-D) reconstruction algorithms. The field imaging system described in this paper, together with its algorithms and field application examples, is designed to be portable, deployable, and affordable for efficient image acquisition. The robustness and accuracy of the image segmentation and 3-D reconstruction algorithms are validated against ground truth measurements collected in stone quarry sites and compared with state-of-the-practice inspection methods. The imaging-based results show good agreement with the ground truth and provide improved volumetric estimation compared with currently adopted inspection methods. Based on the findings of this study, the innovative imaging-based system is envisioned for full development to provide convenient, reliable, and sustainable solutions for onsite Quality Assurance/Quality Control tasks relating to riprap rock and large-sized aggregates.
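The snippet below is not the authors' field system; with illustrative names, it only sketches how per-particle size descriptors (equivalent diameters) could be read off a labeled riprap segmentation to feed a gradation estimate.

```python
# Per-particle equivalent diameters from a labeled segmentation (illustrative).
import numpy as np
from skimage import measure

def particle_sizes_mm(label_image, mm_per_pixel):
    """Return the equivalent diameter (in mm) of each segmented particle."""
    props = measure.regionprops(label_image)
    return np.array([p.equivalent_diameter * mm_per_pixel for p in props])

# Synthetic labeled image with two "particles" (replace with a real segmentation).
labels = np.zeros((200, 200), dtype=int)
labels[20:80, 20:80] = 1
labels[100:180, 100:160] = 2
print("Equivalent diameters (mm):", np.round(particle_sizes_mm(labels, mm_per_pixel=2.5), 1))
```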


2010 ◽  
Vol 22 (2) ◽  
pp. 511-538 ◽  
Author(s):  
Srinivas C. Turaga ◽  
Joseph F. Murray ◽  
Viren Jain ◽  
Fabian Roth ◽  
Moritz Helmstaedter ◽  
...  

Many image segmentation algorithms first generate an affinity graph and then partition it. We present a machine learning approach to computing an affinity graph using a convolutional network (CN) trained using ground truth provided by human experts. The CN affinity graph can be paired with any standard partitioning algorithm and improves segmentation accuracy significantly compared to standard hand-designed affinity functions. We apply our algorithm to the challenging 3D segmentation problem of reconstructing neuronal processes from volumetric electron microscopy (EM) and show that we are able to learn a good affinity graph directly from the raw EM images. Further, we show that our affinity graph improves the segmentation accuracy of both simple and sophisticated graph partitioning algorithms. In contrast to previous work, we do not rely on prior knowledge in the form of hand-designed image features or image preprocessing. Thus, we expect our algorithm to generalize effectively to arbitrary image types.
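A minimal sketch of the second half of such a pipeline (the partitioning step, not the convolutional network): given predicted nearest-neighbor affinities on a 2D grid, keep edges above a threshold and partition the resulting sparse graph with connected components. Function and array names are hypothetical.

```python
# Threshold nearest-neighbor affinities into a sparse graph and partition it
# with connected components (a simple stand-in for a partitioning algorithm).
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment_from_affinities(aff_x, aff_y, threshold=0.5):
    """aff_x[i, j]: affinity between (i, j) and (i, j+1); aff_y: (i, j)-(i+1, j)."""
    h, w = aff_y.shape[0] + 1, aff_x.shape[1] + 1
    idx = np.arange(h * w).reshape(h, w)
    rows, cols = [], []
    keep = aff_x > threshold                       # horizontal edges to keep
    rows.append(idx[:, :-1][keep]); cols.append(idx[:, 1:][keep])
    keep = aff_y > threshold                       # vertical edges to keep
    rows.append(idx[:-1, :][keep]); cols.append(idx[1:, :][keep])
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    graph = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(h * w, h * w))
    _, labels = connected_components(graph, directed=False)
    return labels.reshape(h, w)

aff_x = np.random.rand(8, 7)   # stand-in for learned affinities on an 8x8 image
aff_y = np.random.rand(7, 8)
print(segment_from_affinities(aff_x, aff_y))
```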


2018 ◽  
Vol 4 (8) ◽  
pp. 98 ◽  
Author(s):  
Simone Bianco ◽  
Gianluigi Ciocca ◽  
Davide Marelli

Structure from Motion (SfM) is a pipeline that allows three-dimensional reconstruction starting from a collection of images. A typical SfM pipeline comprises different processing steps, each of which tackles a different problem in the reconstruction. Each step can exploit different algorithms to solve the problem at hand, so many different SfM pipelines can be built, and choosing the SfM pipeline best suited for a given task is an important question. In this paper we report a comparison of different state-of-the-art SfM pipelines in terms of their ability to reconstruct different scenes. We also propose an evaluation procedure that stresses the SfM pipelines using real datasets acquired with high-end devices as well as realistic synthetic datasets. To this end, we created a plug-in module for the Blender software to support the creation of synthetic datasets and the evaluation of SfM pipelines. The use of synthetic data allows us to easily obtain arbitrarily large and diverse datasets with, in theory, infinitely precise ground truth. Our evaluation procedure considers both the reconstruction errors and the estimation errors of the camera poses used in the reconstruction.
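The sketch below illustrates, with hypothetical names, the kind of camera-pose error metrics such an evaluation can use: the geodesic rotation error and the Euclidean translation error between an estimated pose and the synthetic ground truth.

```python
# Rotation and translation error between an estimated and a ground-truth pose.
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle (degrees) between two 3x3 rotation matrices."""
    cos = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def translation_error(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth camera centers."""
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))

R_gt = np.eye(3)
theta = np.radians(2.0)                          # 2-degree error around the z-axis
R_est = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
print(rotation_error_deg(R_est, R_gt))           # ~2.0
print(translation_error([0.1, 0, 0], [0, 0, 0])) # 0.1
```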


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kh Tohidul Islam ◽  
Sudanthi Wijewickrema ◽  
Stephen O’Leary

Image registration is a fundamental task in image analysis in which the transform that moves the coordinate system of one image to another is calculated. Registration of multi-modal medical images has important implications for clinical diagnosis, treatment planning, and image-guided surgery, as it provides the means of bringing together complementary information obtained from different image modalities. However, since different image modalities have different properties due to their different acquisition methods, it remains a challenging task to find a fast and accurate match between multi-modal images. Furthermore, due to reasons such as ethical issues and the need for human expert intervention, it is difficult to collect a large database of labelled multi-modal medical images. In addition, manual input is required to determine the fixed and moving images as input to registration algorithms. In this paper, we address these issues and introduce a registration framework that (1) creates synthetic data to augment existing datasets, (2) generates ground truth data to be used in the training and testing of algorithms, (3) registers (using a combination of deep learning and conventional machine learning methods) multi-modal images in an accurate and fast manner, and (4) automatically classifies the image modality so that the process of registration can be fully automated. We validate the performance of the proposed framework on CT and MRI images of the head obtained from a publicly available registration database.
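As a small illustration (not the paper's deep-learning pipeline), the sketch below computes a histogram-based mutual information score, a similarity measure commonly used when matching multi-modal images such as CT and MRI. The function and variable names are illustrative.

```python
# Histogram-based mutual information between two images of identical shape.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between two images of identical shape."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

ct = np.random.rand(128, 128)
mri = 1.0 - ct + 0.05 * np.random.rand(128, 128)  # inverted contrast plus noise
print(f"MI(CT, MRI)  = {mutual_information(ct, mri):.3f}")
print(f"MI(CT, noise) = {mutual_information(ct, np.random.rand(128, 128)):.3f}")
```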



