distance map
Recently Published Documents

TOTAL DOCUMENTS: 139 (five years: 51)
H-INDEX: 12 (five years: 3)

Sensors, 2021, Vol. 22 (1), pp. 250
Author(s): Xiaoyang Huang, Zhi Lin, Yudi Jiao, Moon-Tong Chan, Shaohui Huang, et al.

With the rise of deep learning, using deep learning to segment lesions and assist in diagnosis has become an effective means of supporting clinical analysis. However, the partial volume effect of organ tissues leads to unclear, blurred ROI edges in medical images, making high-accuracy segmentation of lesions or organs challenging. In this paper, we assume that the distance map obtained by applying a distance transform to the ROI edge can be used as a weight map that makes the network pay more attention to learning the ROI edge region. To this end, we design a novel framework that flexibly embeds the distance map into a two-stage network to improve left atrium MRI segmentation performance. Furthermore, a series of distance map generation methods is proposed and studied to explore how best to express the weights that assist network learning. We conduct thorough experiments to verify the effectiveness of the proposed segmentation framework, and the results demonstrate that our hypothesis is feasible.
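
As an illustration of the idea, the sketch below (assuming a 2D binary left-atrium mask and illustrative w0/sigma values, not the paper's actual settings) derives a boundary distance map with scipy and turns it into a weight map that peaks on the ROI edge.

```python
import numpy as np
from scipy import ndimage

def edge_weight_map(mask, w0=5.0, sigma=5.0):
    """Turn a binary ROI mask into a weight map that emphasizes the ROI edge.

    mask : 2D bool/0-1 array (ROI = 1).
    Returns a float map equal to ~1 far from the edge and up to 1 + w0 on it.
    """
    mask = mask.astype(bool)
    # ROI edge = mask minus its erosion (a one-pixel-wide boundary).
    edge = mask & ~ndimage.binary_erosion(mask)
    # Distance of every pixel to the nearest edge pixel.
    dist = ndimage.distance_transform_edt(~edge)
    # Gaussian fall-off: large weight at the edge, ~1 elsewhere.
    return 1.0 + w0 * np.exp(-(dist ** 2) / (2.0 * sigma ** 2))

# Example use (gt_mask and pixelwise_ce are hypothetical arrays):
# loss = np.mean(edge_weight_map(gt_mask) * pixelwise_ce)
```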


Electronics, 2021, Vol. 10 (23), pp. 3040
Author(s): Cheonin Oh, Hyungwoo Kim, Hyeonjoong Cho

Pattern images can be segmented in template units for efficient fabric vision inspection; however, the segmentation criteria critically affect segmentation and defect detection performance. To obtain undistorted criteria for rotated images, the absolute rotation angle must first be estimated. Because conventional rotation estimation methods do not simultaneously satisfy requirements on rotation error and computation time, patterned fabric defects are still detected by manual visual inspection. To solve these problems, this study proposes the use of segmentation reference point candidates (SRPCs) generated from a Euclidean distance map (EDM). SRPCs are used not only to extract criterion points but also to estimate the rotation angle. The rotation angle is predicted from the orientation vectors of the SRPCs instead of all pixels, which reduces estimation time. SRPC-based image segmentation increases robustness against rotation and defects, and the separation distance used to distinguish SRPC areas is calculated automatically. The performance of the proposed method is comparable to state-of-the-art rotation estimation methods, with an inspection time suitable for actual operation on patterned fabric, and the similarity between segmented images is better than that of conventional methods. The proposed method extends vision inspection from plain fabric to checked or striped patterns.
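
A minimal sketch of the EDM step, assuming a binary pattern mask; the SRPC-style candidates are taken here as local maxima of the distance map separated by a minimum distance, which only approximates the paper's generation rule (srpc_candidates and min_separation are hypothetical names).

```python
import numpy as np
from scipy import ndimage

def srpc_candidates(pattern_mask, min_separation=15):
    """Candidate reference points: local maxima of the Euclidean distance
    map of a binary pattern mask, kept only if at least min_separation apart."""
    edm = ndimage.distance_transform_edt(pattern_mask)
    # Local maxima of the distance map within a min_separation window.
    local_max = (edm == ndimage.maximum_filter(edm, size=min_separation)) & (edm > 0)
    ys, xs = np.nonzero(local_max)
    return np.column_stack([xs, ys]), edm

# The rotation angle could then be estimated from the dominant orientation of
# vectors between neighbouring candidates instead of from all pixels.
```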


2021, Vol. 13 (23), pp. 4881
Author(s): Yuxi Sun, Chengrui Zhang

Autonomous exploration and remote sensing with robots have gained increasing attention in recent years; the goal is to maximize the information collected about the external world without human intervention. However, incomplete frontier detection, an inability to eliminate inefficient frontiers, and incomplete evaluation limit further improvements in exploration efficiency. This article provides a systematic solution for highly efficient ground mobile robot exploration. Firstly, an integrated frontier detection and maintenance method is proposed, which incrementally discovers potential frontiers and maintains the safe and informative ones by updating the distance map locally. Secondly, we propose a novel multiple-path planning method that generates multiple paths from the robot position to the unexplored frontiers; we then use the proposed utility function to select the optimal path and improve its smoothness with an iterative optimization strategy. Finally, model predictive control (MPC) is applied to track the smoothed path. Simulation experiments in typical environments demonstrate that, compared with the benchmark methods, the proposed method reduces the path length by 27.07% and the exploration time by 27.09% on average. Real-world experiments also show that the proposed method achieves complete mapping with fewer repetitive paths.
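
A minimal sketch of distance-map-aided frontier filtering on a 2D occupancy grid, assuming cell values free = 0, occupied = 1, unknown = -1; the paper updates the distance map incrementally and locally, whereas this sketch recomputes it in full for clarity, and the safety margin is illustrative.

```python
import numpy as np
from scipy import ndimage

FREE, OCC, UNKNOWN = 0, 1, -1

def safe_frontiers(grid, resolution=0.05, safety_margin=0.3):
    """Free cells adjacent to unknown space, kept only if the obstacle
    distance map says they are farther than safety_margin from obstacles."""
    free = grid == FREE
    unknown = grid == UNKNOWN
    # A frontier cell is a free cell with at least one unknown neighbour.
    unknown_nearby = ndimage.binary_dilation(unknown)
    frontier = free & unknown_nearby
    # Distance (in metres) from every cell to the nearest occupied cell.
    dist_to_obstacle = ndimage.distance_transform_edt(grid != OCC) * resolution
    return frontier & (dist_to_obstacle > safety_margin)
```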


Sensors, 2021, Vol. 21 (16), pp. 5457
Author(s): Sayed Haggag, Fahmi Khalifa, Hisham Abdeltawab, Ahmed Elnakib, Mohammed Ghazal, et al.

Uveitis is one of the leading causes of severe vision loss and can lead to blindness worldwide. Clinical records show that early and accurate detection of vitreous inflammation can potentially reduce the blindness rate. In this paper, a novel framework is proposed for automatic quantification of the vitreous on optical coherence tomography (OCT), with particular application to the grading of vitreous inflammation. The proposed pipeline consists of two stages: vitreous region segmentation followed by a neural network classifier. In the first stage, the vitreous region is automatically segmented using a U-net convolutional neural network (U-CNN). As input to the U-CNN, we utilize three novel image descriptors to account for the visual similarity of the vitreous region to other tissues. Specifically, we developed an adaptive appearance-based descriptor that utilizes prior shape information derived from a labeled dataset of manually segmented images. This descriptor is adaptively updated during segmentation and is fused with the original greyscale image and a distance map descriptor to construct the input image for the U-net segmentation stage. In the second stage, a fully connected neural network (FCNN) is proposed as a classifier to assess the severity of vitreous inflammation. To achieve this, a discriminatory feature of the segmented vitreous region is extracted: the signal intensities of the vitreous are represented by a cumulative distribution function (CDF). The constructed CDFs are then used to train and test the FCNN classifier for grading (grades 0 to 3). The performance of the proposed pipeline is evaluated on a dataset of 200 OCT images. Our segmentation approach outperformed related methods, as evidenced by a Dice coefficient of 0.988 ± 0.01 and a Hausdorff distance of 0.0003 ± 0.001 mm. The FCNN classification achieved an average accuracy of 86%, which supports the benefits of the proposed pipeline as an aid for early and objective diagnosis of uveal inflammation.
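
A minimal sketch of the CDF descriptor, assuming the segmented vitreous intensities are available as an array; sampling the CDF at a fixed number of bins (the bin count here is illustrative) gives every scan a feature vector of the same length for the FCNN.

```python
import numpy as np

def vitreous_cdf_feature(vitreous_intensities, n_bins=64, max_value=255):
    """Cumulative distribution of vitreous signal intensities sampled at
    fixed bins, usable as a fixed-length feature vector for a classifier."""
    values = np.asarray(vitreous_intensities, dtype=float).ravel()
    hist, _ = np.histogram(values, bins=n_bins, range=(0, max_value))
    cdf = np.cumsum(hist) / max(values.size, 1)
    return cdf  # shape (n_bins,), non-decreasing, ends at 1.0
```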


Healthcare, 2021, Vol. 9 (8), pp. 938
Author(s): Takaaki Sugino, Toshihiro Kawase, Shinya Onogi, Taichi Kin, Nobuhito Saito, et al.

Brain structure segmentation on magnetic resonance (MR) images is important for various clinical applications and has been automated using fully convolutional networks; however, it suffers from the class imbalance problem. To address this, we investigated how loss weighting strategies perform in brain structure segmentation tasks with different degrees of class imbalance on MR images. In this study, we adopted segmentation of the cerebrum, cerebellum, brainstem, and blood vessels from MR cisternography and angiography images as the target tasks. We used a U-net architecture with cross-entropy and Dice loss functions as a baseline and evaluated the effect of the following loss weighting strategies: inverse frequency weighting, median inverse frequency weighting, focal weighting, distance map-based weighting, and distance penalty term-based weighting. In the experiments, the Dice loss with focal weighting performed best, with a high average Dice score of 92.8% in the binary-class segmentation tasks, while the cross-entropy loss with distance map-based weighting achieved Dice scores of up to 93.1% in the multi-class segmentation tasks. The results suggest that distance map-based weighting and focal weighting can boost the performance of the cross-entropy and Dice loss functions, respectively, in class-imbalanced segmentation tasks.
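
Minimal sketches of three of the compared weightings, assuming an integer label map and per-pixel probabilities of the correct class; the exact formulations in the paper may differ, and the distance map-based weighting is analogous to the boundary weight map sketched for the left-atrium paper above.

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class weights: inverse of the class frequency in the label map."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    return 1.0 / np.maximum(freq, 1e-8)

def median_inverse_frequency_weights(labels, n_classes):
    """Inverse frequency normalised by the median frequency (bounded weights)."""
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    return np.median(freq) / np.maximum(freq, 1e-8)

def focal_weights(p_true, gamma=2.0):
    """Focal weighting: down-weights pixels that are already well classified.
    p_true is the predicted probability of the correct class per pixel."""
    return (1.0 - p_true) ** gamma
```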


Science, 2021, pp. eabj8754
Author(s): Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, et al.

DeepMind presented remarkably accurate predictions at the recent CASP14 protein structure prediction assessment. We explored network architectures incorporating related ideas and obtained the best performance with a three-track network in which information at the 1D sequence level, the 2D distance map level, and the 3D coordinate level is successively transformed and integrated. The three-track network produces structure predictions with accuracies approaching those of DeepMind in CASP14, enables the rapid solution of challenging X-ray crystallography and cryo-EM structure modeling problems, and provides insights into the functions of proteins of currently unknown structure. The network also enables rapid generation of accurate protein-protein complex models from sequence information alone, short-circuiting traditional approaches that require modeling of individual subunits followed by docking. We make the method available to the scientific community to speed biological research.


2021
Author(s): Tim Scherr, Katharina Loeffler, Oliver Neumann, Ralf Mikut

The virtually error-free segmentation and tracking of densely packed cells and cell nuclei is still a challenging task. Especially in low-resolution and low signal-to-noise-ratio microscopy images, erroneously merged and missing cells are common segmentation errors that make subsequent cell tracking even more difficult. In 2020, we successfully participated as team KIT-Sch-GE (1) in the 5th edition of the ISBI Cell Tracking Challenge. With our deep learning-based distance map regression segmentation and our graph-based cell tracking, we achieved multiple top-3 rankings on the diverse data sets. In this manuscript, we show how our approach can be further improved by using another optimizer and by fine-tuning the training data augmentation parameters, learning rate schedules, and training data representation. The fine-tuned segmentation, combined with improved tracking, enabled us to further improve our performance in the 6th edition of the Cell Tracking Challenge 2021 as team KIT-Sch-GE (2).
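
A minimal sketch of one common way to build a cell distance regression target from an instance label image; the team's actual target definition (including a neighbour distance map) and post-processing may differ.

```python
import numpy as np
from scipy import ndimage

def cell_distance_target(instance_labels):
    """Regression target for distance-map-based cell segmentation: inside each
    cell, the distance to the cell border, normalised per cell so that every
    cell peaks at 1 regardless of its size."""
    target = np.zeros(instance_labels.shape, dtype=float)
    for cell_id in np.unique(instance_labels):
        if cell_id == 0:          # 0 = background
            continue
        cell = instance_labels == cell_id
        dist = ndimage.distance_transform_edt(cell)
        target[cell] = dist[cell] / dist.max()
    return target

# At inference, thresholding the predicted map yields seeds for a
# watershed-style splitting of touching cells.
```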


2021
Author(s): Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, et al.

DeepMind presented remarkably accurate protein structure predictions at the CASP14 conference. We explored network architectures incorporating related ideas and obtained the best performance with a 3-track network in which information at the 1D sequence level, the 2D distance map level, and the 3D coordinate level is successively transformed and integrated. The 3-track network produces structure predictions with accuracies approaching those of DeepMind in CASP14, enables rapid solution of challenging X-ray crystallography and cryo-EM structure modeling problems, and provides insights into the functions of proteins of currently unknown structure. The network also enables rapid generation of accurate models of protein-protein complexes from sequence information alone, short-circuiting traditional approaches that require modeling of individual subunits followed by docking. We make the method available to the scientific community to speed biological research.


2021, Vol. 11
Author(s): He Huang, Guang Yang, Wenbo Zhang, Xiaomei Xu, Weiji Yang, et al.

Glioma is the most common primary central nervous system tumor, accounting for about half of all primary intracranial tumors. As a non-invasive examination method, MRI plays an extremely important guiding role in the clinical management of tumors. However, manually segmenting brain tumors from MRI requires considerable time and effort from doctors, which delays follow-up diagnosis and treatment planning. With the development of deep learning, medical image segmentation is gradually being automated. However, brain tumors are easily confused with stroke lesions, and the severe imbalance between classes makes brain tumor segmentation one of the most difficult tasks in MRI segmentation. To solve these problems, we propose a deep multi-task learning framework that integrates a multi-depth fusion module to accurately segment brain tumors. In this framework, we add a distance transform decoder to the V-Net, which makes the segmentation contour generated by the mask decoder more accurate and reduces rough boundaries. To combine the different tasks of the two decoders, we form a weighted sum of their loss functions, in which the distance map prediction regularizes the mask prediction. At the same time, the multi-depth fusion module in the encoder enhances the network's ability to extract features. The accuracy of the model is evaluated online using multispectral MRI records from the BraTS 2018, BraTS 2019, and BraTS 2020 datasets. The method obtains high-quality segmentation results, with an average Dice as high as 78%. The experimental results show that this model has great potential for segmenting brain tumors automatically and accurately.
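
A minimal sketch of the weighted multi-task loss, assuming a soft Dice term for the mask decoder and an L2 term for the distance transform decoder; the loss choices and the weight lam are illustrative, not the paper's exact formulation.

```python
import numpy as np

def multitask_loss(pred_mask, gt_mask, pred_dist, gt_dist, lam=0.5, eps=1e-6):
    """Weighted sum of a soft-Dice mask loss and an L2 distance-map loss,
    so the distance-map task regularises the mask prediction."""
    inter = np.sum(pred_mask * gt_mask)
    dice = (2.0 * inter + eps) / (np.sum(pred_mask) + np.sum(gt_mask) + eps)
    mask_loss = 1.0 - dice
    dist_loss = np.mean((pred_dist - gt_dist) ** 2)
    return mask_loss + lam * dist_loss
```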


2021, Vol. 7 (6), pp. 93
Author(s): Cefa Karabağ, Martin L. Jones, Constantino Carlos Reyes-Aldasoro

In this work, an unsupervised volumetric semantic instance segmentation of the plasma membrane of HeLa cells, as observed with serial block-face scanning electron microscopy, is described. The resin background of the images was segmented at different slices of a 3D stack of 518 slices with 8192 × 8192 pixels each. The background was used to create a distance map, which helped identify and rank the cells by their size at each slice. The centroids of the cells detected at different slices were linked to identify them as a single cell spanning a number of slices. A subset of these cells, i.e., the largest ones and those not close to the edges, was selected for further processing. The selected cells were then automatically cropped to smaller regions of interest of 2000 × 2000 × 300 voxels that were treated as cell instances. For each of these volumes, the nucleus was segmented, and the cell was separated from any neighbouring cells through a series of traditional image processing steps that followed the plasma membrane. The segmentation process was repeated for all the regions of interest previously selected. For one cell for which ground truth was available, the algorithm provided excellent results in Accuracy (AC) and the Jaccard similarity Index (JI): nucleus JI = 0.9665, AC = 0.9975; cell including nucleus JI = 0.8711, AC = 0.9655; cell excluding nucleus JI = 0.8094, AC = 0.9629. A limitation of the plasma membrane segmentation is its reliance on the presence of background, which may not be available in samples with tightly packed cells. When tested under these conditions, segmentation of the nuclear envelope was still possible. All the code and data were released openly through GitHub, Zenodo and EMPIAR.
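
A minimal sketch of the distance map ranking step on a single slice, assuming a binary resin-background mask; the peak of the distance map inside each connected region serves as a size proxy, and min_peak is an illustrative threshold.

```python
import numpy as np
from scipy import ndimage

def rank_cells_by_distance(background_mask, min_peak=20):
    """Use the distance map of the non-background region to locate and rank
    candidate cells on one slice: larger cells give larger distance peaks."""
    cells_region = ~background_mask                      # everything that is not resin
    dist = ndimage.distance_transform_edt(cells_region)  # distance to background
    # One connected region per candidate cell.
    labels, n = ndimage.label(cells_region)
    peaks = ndimage.maximum(dist, labels=labels, index=range(1, n + 1))
    centroids = ndimage.center_of_mass(cells_region, labels, range(1, n + 1))
    order = np.argsort(peaks)[::-1]                      # largest cells first
    return [(centroids[i], peaks[i]) for i in order if peaks[i] >= min_peak]
```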

