Temporal synchrony accompanied with structure cue is more effective in the segmentation task

2021 ◽  
Vol 21 (9) ◽  
pp. 2137
Author(s):  
Yen-Ju Chen ◽  
Pi-Chun Huang


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1952
Author(s):  
May Phu Paing ◽  
Supan Tungjitkusolmun ◽  
Toan Huy Bui ◽  
Sarinporn Visitsattapongse ◽  
Chuchart Pintavirooj

Automated segmentation methods are critical for early detection, prompt action, and immediate treatment to reduce the disability and death risks of brain infarction. This paper aims to develop a fully automated method to segment infarct lesions from T1-weighted brain scans. As a key novelty, the proposed method combines variational mode decomposition and deep-learning-based segmentation to take advantage of both methods and provide better results. There are three main technical contributions in this paper. First, variational mode decomposition is applied as a pre-processing step to discriminate the infarct lesions from unwanted non-infarct tissues. Second, an overlapped-patch strategy is proposed to reduce the workload of the deep-learning-based segmentation task. Finally, a three-dimensional U-Net model is developed to perform patch-wise segmentation of infarct lesions. A total of 239 brain scans from a public dataset are utilized to develop and evaluate the proposed method. Empirical results reveal that the proposed automated segmentation provides promising performance, with an average dice similarity coefficient (DSC) of 0.6684, intersection over union (IoU) of 0.5022, and average symmetric surface distance (ASSD) of 0.3932.
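The DSC and IoU figures reported above follow standard overlap definitions that can be sketched directly; this is an illustrative NumPy version on toy binary masks, not the authors' evaluation code:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total else 1.0  # both empty -> perfect by convention

def iou(pred, target):
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy example: two overlapping lesion masks on a small grid
pred = np.zeros((4, 4), dtype=int)
target = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1      # 4 voxels predicted
target[1:3, 1:4] = 1    # 6 voxels ground truth, 4 of them overlapping
print(round(dice_coefficient(pred, target), 3))  # 0.8
print(round(iou(pred, target), 3))               # 0.667
```

ASSD additionally needs the surface voxels and a distance transform, so it is omitted from this sketch.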


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Matthew D. Guay ◽  
Zeyad A. S. Emam ◽  
Adam B. Anderson ◽  
Maria A. Aronova ◽  
Irina D. Pokrovskaya ◽  
...  

Abstract
Biologists who use electron microscopy (EM) images to build nanoscale 3D models of whole cells and their organelles have historically been limited to small numbers of cells and cellular features due to constraints in imaging and analysis. This has been a major factor limiting insight into the complex variability of cellular environments. Modern EM can produce gigavoxel image volumes containing large numbers of cells, but accurate manual segmentation of image features is slow and limits the creation of cell models. Segmentation algorithms based on convolutional neural networks can process large volumes quickly, but achieving EM task accuracy goals often challenges current techniques. Here, we define dense cellular segmentation as a multiclass semantic segmentation task for modeling cells and large numbers of their organelles, and give an example in human blood platelets. We present an algorithm using novel hybrid 2D–3D segmentation networks to produce dense cellular segmentations with accuracy levels that outperform baseline methods and approach those of human annotators. To our knowledge, this work represents the first published approach to automating the creation of cell models with this level of structural detail.


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Yang Yu ◽  
Hongqing Zhu

Abstract
Due to the complex morphology and characteristics of retinal vessels, it remains challenging for most existing algorithms to detect them accurately. This paper proposes a supervised retinal vessel extraction scheme using constraint-based nonnegative matrix factorization (NMF) and a three-dimensional (3D) modified attention U-Net architecture. The proposed method detects retinal vessels in three major steps. First, we apply Gaussian filtering and gamma correction to the green channel of retinal images to suppress background noise and adjust image contrast. Then, the study develops a new within-class and between-class constrained NMF algorithm to extract neighborhood feature information for every pixel and reduce the feature dimension. By using these constraints, the method can effectively gather similar features within a class and discriminate features between classes, improving the feature description ability for each pixel. Next, this study formulates the segmentation task as a classification problem and solves it with a 3D modified attention U-Net as a two-label classifier to reduce computational cost. The proposed network contains an upsampling stage that raises image resolution before encoding and reverts the image to its original size with a downsampling stage after three max-pooling layers. Besides, the attention gates (AGs) set in these layers contribute to more accurate segmentation by maintaining details while suppressing noise. Finally, experimental results on three publicly available datasets, DRIVE, STARE, and HRF, demonstrate better performance than most existing methods.
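The constrained NMF is the paper's contribution and its update rules are not given in the abstract. As a baseline sketch, plain NMF with multiplicative updates reduces per-pixel neighbourhood features the same way, minus the within-/between-class terms; the shapes and data below are hypothetical:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Plain NMF via multiplicative updates: V (n, m) ~= W (n, k) @ H (k, m).

    W and H stay nonnegative because updates only multiply by nonnegative ratios.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Hypothetical per-pixel features: each row is a flattened 3x3 green-channel
# neighbourhood (9-D); NMF compresses them to 4 nonnegative components per pixel.
rng = np.random.default_rng(1)
V = rng.random((196, 9))
W, H = nmf(V, k=4)
print(W.shape, H.shape)  # (196, 4) (4, 9)
```

The rows of W would then serve as the reduced per-pixel descriptors fed to the classifier.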


2021 ◽  
Vol 11 (10) ◽  
pp. 4554
Author(s):  
João F. Teixeira ◽  
Mariana Dias ◽  
Eva Batista ◽  
Joana Costa ◽  
Luís F. Teixeira ◽  
...  

The scarcity of balanced and annotated datasets has been a recurring problem in medical image analysis. Several researchers have tried to fill this gap by synthesizing datasets with generative adversarial networks (GANs). Breast magnetic resonance imaging (MRI) provides complex, texture-rich medical images with the same annotation shortage issues, for which, to the best of our knowledge, no previous work has tried synthesizing data. Within this context, our work addresses the problem of synthesizing breast MRI images from corresponding annotations and evaluates the impact of this data augmentation strategy on a semantic segmentation task. We explored variations of image-to-image translation using conditional GANs, namely fitting the generator's architecture with residual blocks and experimenting with cycle consistency approaches. We studied the impact of these changes on visual verisimilitude and on how a U-Net segmentation model is affected by the usage of synthetic data. We achieved sufficiently realistic-looking breast MRI images and maintained a stable segmentation score even when completely replacing the real dataset with the synthetic set. Our results were promising, especially concerning the Pix2PixHD and Residual CycleGAN architectures.
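The cycle consistency idea mentioned above penalizes a pair of generators when translating to the other domain and back fails to recover the input. A minimal sketch of the L1 cycle loss on toy 1-D "images", with stand-in invertible functions playing the role of the generators (not the paper's networks):

```python
import numpy as np

def cycle_consistency_loss(a, b, g_ab, g_ba):
    """L1 cycle loss: A -> B -> A and B -> A -> B should each recover the input."""
    loss_a = np.abs(g_ba(g_ab(a)) - a).mean()
    loss_b = np.abs(g_ab(g_ba(b)) - b).mean()
    return loss_a + loss_b

# Toy "generators": an exactly invertible pair, so the cycle loss vanishes
g_ab = lambda x: 2.0 * x + 1.0
g_ba = lambda y: (y - 1.0) / 2.0

a = np.linspace(0.0, 1.0, 5)   # toy domain-A sample
b = np.linspace(1.0, 3.0, 5)   # toy domain-B sample
print(cycle_consistency_loss(a, b, g_ab, g_ba))  # 0.0
```

In training, this term is added to the adversarial losses of both generators; imperfect generators yield a positive penalty that pushes the translation toward being annotation-preserving.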


Author(s):  
Rohit Mohan ◽  
Abhinav Valada

Abstract
Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics, which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture, which consists of a shared backbone that efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently, and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset, which contains panoptic annotations for the popular and challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas, and the Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state of the art on all four benchmarks while being the most efficient and fastest panoptic segmentation architecture to date.
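The benchmarks named above are typically scored with panoptic quality (PQ), which jointly rewards recognition and segmentation overlap. The abstract does not define it, so this is a sketch of the standard formula, not code from the paper:

```python
def panoptic_quality(matched_ious, n_fp, n_fn):
    """PQ = (sum of IoUs over matched segment pairs) / (TP + FP/2 + FN/2).

    matched_ious: IoU values of predicted/ground-truth segment pairs with
                  IoU > 0.5 (the standard unique-matching rule), i.e. the TPs.
    n_fp:         predicted segments matched to nothing.
    n_fn:         ground-truth segments that were missed.
    """
    tp = len(matched_ious)
    denom = tp + 0.5 * n_fp + 0.5 * n_fn
    return sum(matched_ious) / denom if denom else 0.0

# Toy example: 2 well-matched segments, 1 spurious prediction, 1 missed segment
pq = panoptic_quality([0.9, 0.7], n_fp=1, n_fn=1)
print(round(pq, 3))  # 1.6 / 3 = 0.533
```

PQ factors into segmentation quality (mean matched IoU) times recognition quality (an F1-like detection score), which is why both heads of a panoptic architecture matter for the final number.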


2021 ◽  
Vol 9 (3) ◽  
pp. 1-22
Author(s):  
Akram Abdel Qader

Image segmentation is the most important process in road sign detection and classification systems. In road sign systems, the spatial information of road signs is very important for safety. Road sign segmentation is a complex task because the variety of road sign colors and shapes makes it difficult to use a single fixed threshold. Most road sign segmentation studies perform well in ideal situations, but many problems remain to be solved when road signs appear under poor lighting and noisy conditions. This paper proposes a hybrid dynamic-threshold color segmentation technique for road sign images. In a pre-processing step, the authors use histogram analysis, noise reduction with a Gaussian filter, adaptive histogram equalization, and conversion from RGB to the YCbCr or HSV color spaces. Next, a segmentation threshold is selected dynamically and used to segment the pre-processed image. The method was tested on outdoor images under noisy conditions and was able to accurately segment road signs of different colors (red, blue, and yellow) and shapes.
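The abstract does not specify how the dynamic threshold is chosen; Otsu's histogram-based method is one common choice and serves here as an illustrative stand-in, applied to a toy bimodal channel (e.g. the Cr or saturation channel after pre-processing):

```python
import numpy as np

def otsu_threshold(channel):
    """Pick the threshold maximizing between-class variance on a 256-bin histogram."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    total = channel.size
    mu_total = (hist * np.arange(256)).sum() / total
    best_t, best_var = 0, -1.0
    cum_w = cum_mu = 0.0
    for t in range(256):
        cum_w += hist[t] / total          # weight of the "background" class
        cum_mu += t * hist[t] / total     # its unnormalized mean
        if cum_w < 1e-12 or cum_w > 1 - 1e-12:
            continue                      # one class empty: variance undefined
        between = (mu_total * cum_w - cum_mu) ** 2 / (cum_w * (1 - cum_w))
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Toy channel: dark background (value 30) with a bright sign region (value 200)
channel = np.full((10, 10), 30, dtype=np.uint8)
channel[3:7, 3:7] = 200
t = otsu_threshold(channel)
mask = channel > t                        # threshold lands between the two modes
print(t, mask.sum())
```

A per-image threshold like this adapts to lighting changes, which is the motivation for dynamic rather than fixed thresholds in the paper.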


2002 ◽  
Vol 282 (1) ◽  
pp. H372-H379 ◽  
Author(s):  
Bradley T. Wyman ◽  
William C. Hunter ◽  
Frits W. Prinzen ◽  
Owen P. Faris ◽  
Elliot R. McVeigh

Resynchronization is frequently used for the treatment of heart failure, but the mechanism for improvement is not entirely clear. In the present study, the temporal synchrony and spatiotemporal distribution of left ventricular (LV) contraction were investigated in eight dogs during right atrial (RA), right ventricular apex (RVa), and biventricular (BiV) pacing using tagged magnetic resonance imaging. Mechanical activation (MA; the onset of circumferential shortening) was calculated from the images throughout the left ventricle for each pacing protocol. MA width (the time for 20–90% of the left ventricle to contract) was significantly shorter during RA (43.6 ± 17.1 ms) than during BiV and RVa pacing (67.4 ± 15.2 and 77.6 ± 16.4 ms, respectively). The activation delay vector (the net delay in MA from one side of the left ventricle to the other) was significantly shorter during RA (18.9 ± 8.1 ms) and BiV (34.2 ± 18.3 ms) than during RVa (73.8 ± 16.3 ms) pacing. The rate of LV pressure increase was significantly lower during RVa than during RA pacing (1,070 ± 370 vs. 1,560 ± 300 mmHg/s), with intermediate values for BiV pacing (1,310 ± 220 mmHg/s). BiV pacing has a greater impact on correcting the spatial distribution of LV contraction than on improving the temporal synchronization of contraction. The spatiotemporal distribution of contraction may be an important determinant of ventricular function.
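The MA width defined above (time for 20–90% of the left ventricle to contract) reduces to a percentile spread over regional onset times. A minimal sketch with hypothetical onset times; the paper's actual estimation from tagged MRI strain is considerably more involved:

```python
import numpy as np

def ma_width(onset_times_ms, lo=20, hi=90):
    """Time for lo%-hi% of the LV to begin shortening: the spread between
    the lo-th and hi-th percentiles of regional mechanical-activation onsets."""
    return np.percentile(onset_times_ms, hi) - np.percentile(onset_times_ms, lo)

# Hypothetical MA onset times (ms) at sampled LV regions
onsets = np.array([10, 15, 20, 25, 30, 40, 55, 70, 80, 95], dtype=float)
print(ma_width(onsets))  # 62.5
```

Trimming the extreme 20% and 10% of regions makes the width robust to a few early or late outlier segments, which is presumably why the 20–90% window is used rather than the full range.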


1995 ◽  
Vol 06 (04) ◽  
pp. 373-399 ◽  
Author(s):  
ANDREAS S. WEIGEND ◽  
MORGAN MANGEAS ◽  
ASHOK N. SRIVASTAVA

In the analysis and prediction of real-world systems, two of the key problems are nonstationarity (often in the form of switching between regimes) and overfitting (particularly serious for noisy processes). This article addresses these problems using gated experts, consisting of a (nonlinear) gating network and several (also nonlinear) competing experts. Each expert learns to predict the conditional mean, and each expert adapts its width to match the noise level in its regime. The gating network learns to predict the probability of each expert, given the input. This article focuses on the case where the gating network bases its decision on information from the inputs. This can be contrasted to hidden Markov models, where the decision is based on the previous state(s) (i.e. on the output of the gating network at the previous time step), as well as to averaging over several predictors. In contrast, gated experts soft-partition the input space, each expert only learning to model its region. This article discusses the underlying statistical assumptions, derives the weight update rules, and compares the performance of gated experts to standard methods on three time series: (1) a computer-generated series, obtained by randomly switching between two nonlinear processes; (2) a time series from the Santa Fe Time Series Competition (the light intensity of a laser in a chaotic state); and (3) the daily electricity demand of France, a real-world multivariate problem with structure on several time scales. The main results are: (1) the gating network correctly discovers the different regimes of the process; (2) the widths associated with each expert are important for the segmentation task (and they can be used to characterize the sub-processes); and (3) there is less overfitting compared to single networks (homogeneous multilayer perceptrons), since the experts learn to match their variances to the (local) noise levels. This can be viewed as matching the local complexity of the model to the local complexity of the data.
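The forward pass of a gated-experts mixture is compact: the gating network maps the input to expert probabilities, and the prediction is the probability-weighted sum of the experts' conditional means. A minimal linear sketch (the article's gate and experts are nonlinear networks, and the adaptive expert widths are omitted here):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, shifted for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gated_experts_predict(x, gate_w, expert_ws):
    """Mixture prediction: sum_j g_j(x) * y_j(x).

    gate_w:    (d, k) linear gating network producing k expert logits
    expert_ws: list of k (d,) linear experts, each predicting a conditional mean
    """
    gates = softmax(x @ gate_w)                        # (n, k) expert probabilities
    preds = np.stack([x @ w for w in expert_ws], 1)    # (n, k) expert outputs
    return (gates * preds).sum(axis=1)                 # (n,)  mixture mean

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))                        # 5 inputs, 3 features
gate_w = rng.standard_normal((3, 2))                   # gate over k=2 experts
experts = [rng.standard_normal(3) for _ in range(2)]
y = gated_experts_predict(x, gate_w, experts)
print(y.shape)  # (5,)
```

Because the gate depends on the input rather than the previous state, each region of input space is handled by whichever expert the gate assigns high probability there, which is the soft partitioning described above.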

