WaveletSEG: Automatic wavelet-based 3D nuclei segmentation and analysis for multicellular embryo quantification

2020 ◽  
Author(s):  
Tzu-Ching Wu ◽  
Xu Wang ◽  
Linlin Li ◽  
Ye Bu ◽  
David M. Umulis

Abstract: Identification of individual cells in tissues, organs, and in various developing systems is a well-studied problem because it is an essential part of objectively analyzing quantitative images in numerous biological contexts. We developed a size-dependent wavelet-based segmentation method that provides robust segmentation without any preprocessing, filtering, or fine-tuning steps and is robust to variations in the signal-to-noise ratio (SNR). The wavelet-based method achieves robust segmentation results with respect to true positive rate, precision, and segmentation accuracy compared with other commonly used methods. We applied the segmentation program to zebrafish embryonic development in toto for nuclei segmentation, image registration, and nuclei shape analysis. These new approaches to segmentation provide a means to carry out quantitative patterning analysis with single-cell precision throughout three-dimensional tissues and embryos, and they have a high tolerance for non-uniform and noisy image data sets.
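The size-dependent wavelet idea can be sketched as an à-trous-style multiscale decomposition in which a band-pass plane matching the expected nucleus size is thresholded. This is an illustrative reconstruction, not the authors' WaveletSEG code; the scale schedule, the `k`-sigma threshold, and the function names below are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def wavelet_planes(volume, n_scales=3):
    """A-trous-style decomposition: each plane is the difference between
    successive Gaussian smoothings; coarser planes capture larger objects."""
    planes, smoothed = [], volume.astype(float)
    for s in range(n_scales):
        next_smooth = gaussian_filter(smoothed, sigma=2 ** s)
        planes.append(smoothed - next_smooth)
        smoothed = next_smooth
    return planes

def segment_nuclei(volume, scale=2, k=3.0):
    """Threshold the wavelet plane matching the expected nucleus size,
    then label connected components as candidate nuclei."""
    plane = wavelet_planes(volume, n_scales=scale + 1)[scale]
    mask = plane > k * plane.std()
    labels, n = label(mask)
    return labels, n
```

Because the band-pass plane is approximately zero-mean, a global k-sigma threshold needs no image-specific tuning, which is one plausible reading of the "no fine-tuning" claim.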


Biomolecules ◽  
2019 ◽  
Vol 9 (12) ◽  
pp. 809
Author(s):  
Miguel Carrasco ◽  
Patricio Toledo ◽  
Nicole D. Tischler

Segmentation is one of the most important stages in the 3D reconstruction of macromolecule structures in cryo-electron microscopy. Due to the variability of macromolecules and the low signal-to-noise ratio of the structures present, there is no generally satisfactory solution to this process. This work proposes a new unsupervised particle picking and segmentation algorithm based on the composition of two well-known image filters: anisotropic (Perona–Malik) diffusion and non-negative matrix factorization. This study focused on keyhole limpet hemocyanin (KLH) macromolecules, which offer both a top view and a side view. Our proposal was able to detect both types of views and separate them automatically. In our experiments, we used 30 images from the KLH dataset containing 680 positively classified regions. The true positive rate was 95.1% for top views and 77.8% for side views. The false negative rate was 14.3%. Although the false positive rate was high at 21.8%, it can be lowered with a supervised classification technique.
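Perona–Malik diffusion, the first of the two filters composed here, smooths noise while preserving edges by weighting the diffusion flux with an edge-stopping function of the local gradient. The sketch below uses the exponential stopping function from the original Perona–Malik paper; the parameter values are illustrative defaults, not those used by the authors:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Anisotropic (Perona-Malik) diffusion on a 2D image: diffuse
    strongly where gradients are small (noise), weakly across edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
    for _ in range(n_iter):
        # finite differences toward the four neighbours (periodic borders)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # explicit update: sum of edge-weighted neighbour fluxes
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Gradients much larger than `kappa` (true particle boundaries) are essentially untouched, while small noise fluctuations are averaged away, which is what makes the filter useful before factorization-based picking.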


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Xinhua Xiao ◽  
Andrew Ferro ◽  
Tao Ma ◽  
Chia Y. Han ◽  
Xuefu Zhou ◽  
...  

The automatic radioscopic inspection of industrial parts usually uses reference-based methods. These methods select, as a benchmark for comparison, image data from good parts to detect anomalies in the parts under inspection. However, parts can vary within the specification during the production process, which makes comparison of older reference image sets with current images of parts difficult and increases the probability of false rejections. To counter this variability, the reference image sets have to be updated. This paper proposes an adaptive reference image set selection procedure to be used in the assisted defect recognition (ADR) system for turbine blade inspection. The procedure first selects an initial reference image set using an approach called the ADR Model Optimizer and then uses the positive rate within a sliding time window to determine the need to update the reference image set. Whenever there is a need, the ADR Model Optimizer is retrained with new data consisting of the old reference image sets augmented with falsely rejected images to generate a new reference image set. The experimental results demonstrate that the proposed procedure can adaptively select a reference image set, leading to an inspection process with a high true positive rate and a low false positive rate.
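The sliding-window retrain trigger can be sketched in a few lines; the window size, the rate threshold, and the class name below are illustrative placeholders, not values or names from the paper:

```python
from collections import deque

class ReferenceSetMonitor:
    """Track the positive (rejection) rate over a sliding window of recent
    inspections and flag when the reference image set should be updated."""

    def __init__(self, window=100, max_positive_rate=0.10):
        self.window = deque(maxlen=window)
        self.max_positive_rate = max_positive_rate

    def record(self, rejected: bool) -> bool:
        """Record one inspection outcome; return True when the window is
        full and the positive rate exceeds the threshold (retrain due)."""
        self.window.append(1 if rejected else 0)
        if len(self.window) < self.window.maxlen:
            return False
        return sum(self.window) / len(self.window) > self.max_positive_rate
```

When the trigger fires, the old reference sets plus the falsely rejected images would be fed back to the optimizer, per the procedure described above.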


2019 ◽  
Vol 487 (4) ◽  
pp. 5346-5362 ◽  
Author(s):  
Suk Sien Tie ◽  
David H Weinberg ◽  
Paul Martini ◽  
Wei Zhu ◽  
Sébastien Peirani ◽  
...  

ABSTRACT: Using the Lyman α (Lyα) Mass Association Scheme, we make theoretical predictions for the three-dimensional three-point correlation function (3PCF) of the Lyα forest at redshift z = 2.3. We bootstrap results from the (100 h−1 Mpc)3 Horizon hydrodynamic simulation to a (1 h−1 Gpc)3 N-body simulation, considering both a uniform ultraviolet background (UVB) and a fluctuating UVB sourced by quasars with a comoving number density nq ≈ 10−5 h3 Mpc−3 placed either in massive haloes or randomly. On scales of 10–30 h−1 Mpc, the flux 3PCF displays hierarchical scaling with the square of the two-point correlation function (2PCF), but with an unusual value of Q ≡ ζ123/(ξ12ξ13 + ξ12ξ23 + ξ13ξ23) ≈ −4.5 that reflects the low bias of the Lyα forest and the anticorrelation between mass density and transmitted flux. For halo-based quasars and an ionizing photon mean free path of λ = 300 h−1 Mpc comoving, UVB fluctuations moderately depress the 2PCF and 3PCF, with cancelling effects on Q. For λ = 100 or 50 h−1 Mpc, UVB fluctuations substantially boost the 2PCF and 3PCF on large scales, shifting the hierarchical ratio to Q ≈ −3. We scale our simulation results to derive rough estimates of the detectability of the 3PCF in current and future observational data sets for the redshift range z = 2.1–2.6. At r = 10 and 20 h−1 Mpc, we predict a signal-to-noise ratio (SNR) of ∼9 and ∼7, respectively, for both Baryon Oscillation Spectroscopic Survey (BOSS) and extended BOSS (eBOSS), and ∼37 and ∼25 for Dark Energy Spectroscopic Instrument (DESI). At r = 40 h−1 Mpc the predicted SNR is lower by a factor of ∼3–5. Measuring the flux 3PCF would provide a novel test of the conventional paradigm of the Lyα forest and help separate the contributions of UVB fluctuations and density fluctuations to Lyα forest clustering, thereby solidifying its foundation as a tool of precision cosmology.
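The reduced three-point amplitude Q quoted in the abstract is a direct ratio of measured correlation functions; as a minimal sanity check of the definition (the function name and input values are ours, for illustration):

```python
def hierarchical_q(zeta_123, xi_12, xi_13, xi_23):
    """Reduced three-point amplitude Q = zeta_123 / (xi12*xi13 +
    xi12*xi23 + xi13*xi23). For the Lya forest it is negative because
    transmitted flux is anticorrelated with mass density."""
    denom = xi_12 * xi_13 + xi_12 * xi_23 + xi_13 * xi_23
    return zeta_123 / denom
```

With equal 2PCF legs of ξ = 0.1, a 3PCF of ζ = −0.135 gives the paper's Q ≈ −4.5.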


2021 ◽  
Author(s):  
Tim Scherr ◽  
Katharina Loeffler ◽  
Oliver Neumann ◽  
Ralf Mikut

The virtually error-free segmentation and tracking of densely packed cells and cell nuclei is still a challenging task. Especially in low-resolution, low-signal-to-noise-ratio microscopy images, erroneously merged and missing cells are common segmentation errors, making the subsequent cell tracking even more difficult. In 2020, we successfully participated as team KIT-Sch-GE (1) in the 5th edition of the ISBI Cell Tracking Challenge. With our deep learning-based distance map regression segmentation and our graph-based cell tracking, we achieved multiple top-3 rankings on the diverse data sets. In this manuscript, we show how our approach can be further improved by using another optimizer and by fine-tuning the training data augmentation parameters, learning rate schedules, and the training data representation. The fine-tuned segmentation, in combination with an improved tracking, enabled us to further improve our performance in the 6th edition of the Cell Tracking Challenge 2021 as team KIT-Sch-GE (2).
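The benefit of distance-map regression for erroneously merged cells can be illustrated with a plain Euclidean distance transform standing in for the learned regression output (this is a sketch of the principle, not the team's network or post-processing; the seed threshold is an assumption):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, label

def split_touching_cells(mask, seed_quantile=0.7):
    """The distance map peaks at cell centres, so thresholding it high
    yields one seed per cell even when the binary masks touch; the seeds
    could then be grown back to full cells (e.g. by watershed)."""
    dist = distance_transform_edt(mask)
    seeds = dist > seed_quantile * dist.max()
    seed_labels, n_seeds = label(seeds)
    return seed_labels, n_seeds
```

A plain foreground threshold merges two touching cells into one component, while the distance-map seeds keep them separate, which is exactly the error mode the abstract describes.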


Author(s):  
Naghmeh Moradpoor Sheykhkanloo

A Structured Query Language injection (SQLi) attack is a code injection technique in which hackers inject SQL commands into a database via a vulnerable web application. Injected SQL commands can modify the back-end SQL database and thus compromise the security of the web application. In previous publications, the author proposed a Neural Network (NN)-based model for the detection and classification of SQLi attacks. The proposed model was built from three elements: 1) a Uniform Resource Locator (URL) generator, 2) a URL classifier, and 3) an NN model. The proposed model was able to: 1) detect each generated URL as either benign or malicious, and 2) identify the type of SQLi attack for each malicious URL. The published results demonstrated the effectiveness of the proposal. In this paper, the author re-evaluates the performance of the proposal through two scenarios using controversial data sets. The results of the experiments are presented in order to demonstrate the effectiveness of the proposed model in terms of accuracy, true-positive rate, and false-positive rate.
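As a toy illustration of the classification task only (the paper's classifier is a neural network, not a rule set; the patterns and function below are our assumptions), a signature-based check over URLs might look like:

```python
import re

# Illustrative injection signatures -- not the paper's NN input encoding.
SQLI_PATTERNS = [
    r"(?i)\bunion\b.*\bselect\b",  # union-based injection
    r"(?i)\bor\b\s+1\s*=\s*1",     # tautology (always-true predicate)
    r"--|#|/\*",                   # comment-based query truncation
    r"['\"];",                     # quote followed by statement break
]

def looks_like_sqli(url: str) -> bool:
    """Flag a URL as suspicious if any injection signature matches;
    a crude stand-in for the NN classifier's benign/malicious decision."""
    return any(re.search(p, url) for p in SQLI_PATTERNS)
```

Each matched pattern family (union-based, tautology, comment-based, ...) also hints at the second task the model performs: identifying the *type* of SQLi attack.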


Author(s):  
Cansu Görürgöz ◽  
Kaan Orhan ◽  
Ibrahim Sevki Bayrakdar ◽  
Özer Çelik ◽  
Elif Bilgir ◽  
...  

Objectives: The present study aimed to evaluate the performance of a Faster Region-based Convolutional Neural Network (R-CNN) algorithm for tooth detection and numbering on periapical images. Methods: A data set of 1686 randomly selected periapical radiographs was collected retrospectively. A pre-trained model (GoogLeNet Inception v3 CNN) was employed for pre-processing, and transfer learning techniques were applied for data set training. The algorithm consisted of: (1) a jaw classification model, (2) region detection models, and (3) a final algorithm using all models. Finally, an analysis of the final model was integrated alongside the others. The sensitivity, precision, true-positive rate, and false-positive/negative rates were computed to analyze the performance of the algorithm using a confusion matrix. Results: An artificial intelligence algorithm (CranioCatch, Eskisehir, Turkey) was designed based on the R-CNN Inception architecture to automatically detect and number teeth on periapical images. Of 864 teeth in 156 periapical radiographs, 668 were correctly numbered in the test data set. The F1 score, precision, and sensitivity were 0.8720, 0.7812, and 0.9867, respectively. Conclusion: The study demonstrated the potential accuracy and efficiency of the CNN algorithm for detecting and numbering teeth. Deep learning-based methods can help clinicians reduce workloads, improve dental records, and reduce turnaround time for urgent cases. This architecture might also contribute to forensic science.
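The reported metrics are mutually consistent: the F1 score is the harmonic mean of precision and recall (sensitivity), and plugging in the paper's precision and sensitivity reproduces its F1 of 0.8720:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 = harmonic mean of precision and recall (sensitivity)."""
    return 2 * precision * recall / (precision + recall)
```

This is a useful quick check when reading detection papers, since the three numbers are often reported independently.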


1995 ◽  
Vol 15 (4) ◽  
pp. 552-565 ◽  
Author(s):  
Weizhao Zhao ◽  
Myron D. Ginsberg ◽  
David W. Smith

Traditional autoradiographic image analysis has been restricted to the two-dimensional assessment of local cerebral glucose utilization (LCMRglc) or blood flow in individual brains. It is advantageous, however, to generate an entire three-dimensional (3D) data set and to develop the ability to map replicate images derived from multiple studies into the same 3D space, so as to generate average and standard deviation images for the entire series. We have developed a novel method, termed “disparity analysis,” for the alignment and mapping of autoradiographic images. We present the theory of this method, which is based upon a linear affine model, to analyze point-to-point disparities in two images. The method is a direct one that estimates scaling, translation, and rotation parameters simultaneously. Disparity analysis is general and flexible and deals well with damaged or asymmetric sections. We applied this method to study LCMRglc in nine awake male Wistar rats by the [14C]2-deoxyglucose method. Brains were physically aligned in the anteroposterior axis and were sectioned subserially at 100-μm intervals. For each brain, coronal sections were aligned by disparity analysis. The nine brains were then registered in the z-axis with respect to a common coronal reference level (bregma + 0.7 mm). Eight of the nine brains were mapped into the remaining brain, which was designated the “template,” and aggregate 3D data sets were generated of the mean and standard deviation for the entire series. The averaged images retained the major anatomic features apparent in individual brains but with some defocusing. Internal anatomic features of the averaged brain were smooth, continuous, and readily identifiable on sections through the 3D stack. The fidelity of the internal architecture of the averaged brain was compared with that of individual brains by analysis of line scans at four representative levels. 
Line scan comparisons between corresponding sections and their template showed a high degree of correlation, as did similar comparisons performed on entire sections. Fourier analysis of line scan data showed retention of low-frequency information with the expected attenuation of high-frequency components produced by averaging. Region-of-interest (ROI) analysis of the averaged brain yielded LCMRglc values virtually identical to those derived from measurements and subsequent averaging of data from individual brains. In summary, 3D reconstruction of averaged autoradiographic image data by disparity analysis is a feasible approach, which vastly simplifies ROI analysis, facilitates the assessment of hemodynamic or metabolic patterns in three dimensions, permits the computation of threshold-defined volumes of interest on averaged 3D data sets, makes possible atlas-based ROI strategies, and importantly provides the capability of generating 3D image data sets derived from arithmetic manipulations on two or more primary data sets (e.g., percent difference or ratio images).
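Under a linear affine model, estimating scaling, translation, and rotation simultaneously from point-to-point disparities reduces to a single least-squares solve over matched points. The sketch below captures that idea; it is a generic affine fit, not the authors' exact disparity-analysis estimator, and the function name is ours:

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Estimate a 2D affine map dst ~ A @ src + t from matched point
    pairs by least squares, recovering scale, rotation/shear, and
    translation in one solve (the spirit of disparity analysis)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # design matrix rows [x, y, 1]; solve jointly for both output coords
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t
```

Because the fit is linear, it degrades gracefully on damaged or asymmetric sections: outlier points perturb the solution but do not break it, consistent with the flexibility claimed for the method.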


2019 ◽  
Vol 11 (8) ◽  
pp. 168781401987139
Author(s):  
Shyh-Kuang Ueng ◽  
Hsin-Cheng Huang ◽  
Chieh-Shih Chou ◽  
Hsuan-Kai Huang

Layered manufacturing techniques have been successfully employed to construct scanned objects from three-dimensional medical image data sets. The printed physical models are useful tools for anatomical exploration, surgical planning, teaching, and related medical applications. Before fabricating scanned objects, we first have to build watertight geometrical representations of the target objects from medical image data sets. Many algorithms have been developed for this task. However, some of these methods require extra effort to resolve ambiguity problems and to fix broken surfaces. Other methods cannot generate legitimate models for layered manufacturing. To alleviate these problems, this article presents a modeling procedure to efficiently create geometrical representations of objects from computed tomography and magnetic resonance imaging data sets. The proposed procedure first extracts the iso-surface of the target object from the input data set. Then it converts the iso-surface into a three-dimensional image and filters this three-dimensional image using morphological operators to remove dangling parts and noise. In the next step, a distance field is computed in the three-dimensional image space to approximate the surface of the target object. The proposed procedure then smooths the distance field to soften sharp corners and edges of the target object. Finally, a boundary representation is built from the distance field to model the target object. Compared with conventional modeling techniques, the proposed method possesses the following advantages: (1) it reduces the human effort involved in the geometrical modeling process. (2) It can construct both solid and hollow models of the target object, and the wall thickness of hollow models is adjustable. (3) The resultant boundary representation is guaranteed to form a watertight solid geometry, which is printable using three-dimensional printers. (4) The proposed procedure allows users to tune the precision of the geometrical model to compromise with the available computational resources.
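The middle steps of the pipeline (morphological cleanup, distance field, smoothing) can be sketched with standard SciPy morphology. This is an illustrative reconstruction, not the article's implementation; the default structuring element, the largest-component heuristic, and the smoothing width are assumptions:

```python
import numpy as np
from scipy.ndimage import (binary_opening, distance_transform_edt,
                           gaussian_filter, label)

def clean_and_distance_field(binary_vol, smooth_sigma=1.0):
    """Morphological opening removes dangling parts and noise; a signed
    distance field (positive inside, negative outside) then approximates
    the object surface, and smoothing it softens sharp corners before
    extracting the watertight surface at the zero level set."""
    cleaned = binary_opening(binary_vol)
    # keep only the largest connected component as the target object
    labels, n = label(cleaned)
    if n > 1:
        sizes = np.bincount(labels.ravel())
        sizes[0] = 0
        cleaned = labels == sizes.argmax()
    sdf = distance_transform_edt(cleaned) - distance_transform_edt(~cleaned)
    return gaussian_filter(sdf, smooth_sigma)
```

Extracting the zero level set of a single smooth signed distance field yields a closed surface by construction, which is one way to see why the resulting boundary representation is watertight and printable.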


2020 ◽  
Vol 100 (1) ◽  
pp. 38-43
Author(s):  
Tawfiq Khurayzi ◽  
Anandhan Dhanasingh ◽  
Fida Almuhawas ◽  
Abdurrahman Alsanosi

Objective: The objective of this study was to determine the shape of the cochlear basal turn through basic cochlear parameter measurements. The secondary aim was to overlay an image of the precurved electrode array on top of the three-dimensional (3D) image of the cochlea to determine which shape of the cochlear basal turn gives optimal electrode-to-modiolus proximity. Materials and Methods: Preoperative computed tomography (CT) image data sets of 117 ears were made available for the retrospective measurement of cochlear parameters. 3D Slicer was used for the visualization and measurement of cochlear parameters from both 3D and 2D (two-dimensional) images of the inner ear. Cochlear parameters including the basal turn diameter (A), the width of the basal turn (B), and the cochlear height (H) were measured from the appropriate planes. The B/A ratio was computed to investigate which ratios correspond to a round versus an elliptical shape of the cochlear basal turn. Results: The cochlear size, as measured by the A value, ranged between 7.4 mm and 10 mm. The B value and the cochlear height (H) showed a weak positive linear relation with the A value. A B/A ratio above or below 0.75 could indicate a more round- or elliptical-shaped cochlear basal turn, respectively. A single-sized/shaped commercially available precurved electrode array would not offer a tight electrode-to-modiolus fit in a cochlea that has an elliptically shaped basal turn, as identified by a B/A ratio of <0.75. Conclusion: Accurate measurement of cochlear parameters adds value to the overall understanding of cochlear geometry before a cochlear implantation procedure. The shape of the cochlear basal turn could have clinical implications when it comes to electrode-to-modiolus proximity.
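The B/A decision rule described in the results is a one-line classification; the 0.75 cutoff is the study's, while the function and label names are ours for illustration:

```python
def basal_turn_shape(a_mm: float, b_mm: float, cutoff: float = 0.75) -> str:
    """Classify the cochlear basal turn from the B/A ratio: values above
    the 0.75 cutoff indicate a rounder turn, values below it a more
    elliptical one (a_mm = basal turn diameter A, b_mm = width B)."""
    return "round" if b_mm / a_mm > cutoff else "elliptical"
```

For example, a cochlea with A = 9.0 mm and B = 7.2 mm (ratio 0.80) would be classed as round, and one with B = 6.3 mm (ratio 0.70) as elliptical.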

