robust segmentation
Recently Published Documents


TOTAL DOCUMENTS: 243 (five years: 65)

H-INDEX: 22 (five years: 4)

IEEE Access ◽  
2022 ◽  
pp. 1-1
Author(s):  
Grzegorz Chlebus ◽  
Andrea Schenk ◽  
Horst K. Hahn ◽  
Bram Van Ginneken ◽  
Hans Meine

Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7966
Author(s):  
Dixiao Wei ◽  
Qiongshui Wu ◽  
Xianpei Wang ◽  
Meng Tian ◽  
Bowen Li

Radiography is an essential basis for the diagnosis of fractures. For pediatric elbow joint diagnosis, the doctor needs to diagnose abnormalities based on the location and shape of each bone, which is a great challenge for AI algorithms interpreting radiographs. Bone instance segmentation is an effective upstream task for automatic radiograph interpretation. Pediatric elbow bone instance segmentation is the process by which each bone is extracted separately from a radiograph. However, the arbitrary orientations and the overlapping of bones pose challenges for bone instance segmentation. In this paper, we design a detection-segmentation pipeline to tackle these problems by using rotational bounding boxes to detect bones and proposing a robust segmentation method. The proposed pipeline contains three main parts: (i) we use a Faster R-CNN-style architecture to detect and locate bones; (ii) we adopt the Oriented Bounding Box (OBB) to improve localization accuracy; (iii) we design the Global-Local Fusion Segmentation Network to combine the global and local contexts of the overlapped bones. To verify the effectiveness of our proposal, we conduct experiments on our self-constructed dataset of 1274 well-annotated pediatric elbow radiographs. The qualitative and quantitative results indicate that the network significantly improves the performance of bone extraction. Our methodology shows good potential for applying deep learning to bone instance segmentation in radiographs.
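The OBB step can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration (not the authors' code) of how an oriented bounding box detection might be turned into an axis-aligned crop before per-bone segmentation; the function name and box parametrization are assumptions.

```python
import cv2
import numpy as np

def crop_oriented_box(image: np.ndarray, cx: float, cy: float,
                      w: float, h: float, angle_deg: float) -> np.ndarray:
    """Rotate the image so the OBB becomes axis-aligned, then crop it."""
    # Rotate the whole image about the box center so the box is upright.
    M = cv2.getRotationMatrix2D((cx, cy), angle_deg, 1.0)
    rotated = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
    # Axis-aligned crop of the now-upright box (clamped to image bounds).
    x0 = max(int(cx - w / 2), 0)
    y0 = max(int(cy - h / 2), 0)
    return rotated[y0:y0 + int(h), x0:x0 + int(w)]
```

Each crop would then be segmented locally and the resulting mask pasted back into the full radiograph, which is the kind of global-local combination the abstract describes.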


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
C. Bouvier ◽  
N. Souedet ◽  
J. Levy ◽  
C. Jan ◽  
Z. You ◽  
...  

In preclinical research, histology images are produced using powerful optical microscopes that digitize entire sections at cell scale. Quantification of stained tissue relies on machine-learning-driven segmentation. However, such methods require additional information, or features, which increases the quantity of data to process. As a result, the number of features becomes a bottleneck for processing large series of massive histological images rapidly and robustly. Existing feature selection methods can reduce the amount of required information, but the selected subsets lack reproducibility. We propose a novel methodology operating on high performance computing (HPC) infrastructures and aiming at finding small and stable sets of features for fast and robust segmentation of high-resolution histological images. This selection has two steps: (1) selection at the scale of feature families (an intermediate pool of features, between feature spaces and individual features) and (2) feature selection performed on the pre-selected feature families. We show that the selected sets of features are stable for two different neuron stainings; to test different configurations, one of these datasets is mono-subject and the other is multi-subject. Furthermore, the feature selection results in a significant reduction of computation time and memory cost. This methodology will allow exhaustive histological studies at a high-resolution scale on HPC infrastructures for both preclinical and clinical research.
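As a rough illustration of the two-step idea, the sketch below scores whole feature families before selecting individual features within the retained families; mutual information is used here as a stand-in criterion, and the function and parameter names are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def two_step_selection(X, y, families, n_families=3, n_features=10):
    """families: dict mapping a family name to an array of column indices."""
    # Step 1: score each family by the mean relevance of its features.
    scores = mutual_info_classif(X, y)
    family_scores = {name: scores[cols].mean() for name, cols in families.items()}
    kept = sorted(family_scores, key=family_scores.get, reverse=True)[:n_families]
    # Step 2: select individual features within the pre-selected families.
    pool = np.concatenate([families[name] for name in kept])
    best = pool[np.argsort(scores[pool])[::-1][:n_features]]
    return np.sort(best)
```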


2021 ◽  
Vol 8 (06) ◽  
Author(s):  
Yubo Fan ◽  
Dongqing Zhang ◽  
Rueben Banalagay ◽  
Jianing Wang ◽  
Jack H. Noble ◽  
...  

2021 ◽  
Author(s):  
Mauro Silberberg ◽  
Hernán Edgardo Grecco

Quantitative analysis of high-throughput microscopy images requires robust automated algorithms. Background estimation is usually the first step and has an impact on all subsequent analysis, in particular on foreground detection and the calculation of ratiometric quantities. Most methods recover only a single background value, such as the median. Those that aim to retrieve a background distribution by dividing the intensity histogram yield a biased estimation in non-trivial cases. In this work, we present the first method to recover an unbiased estimation of the background distribution directly from an image, without any additional input. Through a robust statistical test, our method leverages the lack of local spatial correlation in background pixels to select a subset of pixels that accurately represents the background distribution. The method is both fast and simple to implement, as it uses only standard mathematical operations and an averaging filter. Additionally, its only parameter, the size of the averaging filter, does not require fine-tuning. The obtained background distribution can be used to test individual pixels for foreground membership, or to estimate confidence intervals in derived quantities. We expect that the concepts described in this work can help to develop a novel family of robust segmentation methods.
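A rough sketch of the core idea, assuming a simple residual test against a local average (the published method's exact statistical test is not reproduced here, and the function name and threshold are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def background_sample(img: np.ndarray, size: int = 3, z_thresh: float = 3.0):
    """Select pixels whose deviation from the local mean looks like noise."""
    local_mean = uniform_filter(img.astype(float), size=size)
    resid = img - local_mean
    # Robust noise scale from the median absolute deviation of residuals.
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    # Background pixels: no local structure, i.e. residuals consistent
    # with spatially uncorrelated noise.
    mask = np.abs(resid) < z_thresh * sigma
    return img[mask]  # a sample approximating the background distribution
```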


2021 ◽  
Vol 26 (5) ◽  
pp. 736-748
Author(s):  
Ling Zhang ◽  
Jianchao Liu ◽  
Fangxing Shang ◽  
Gang Li ◽  
Juming Zhao ◽  
...  

Author(s):  
Qiuyu Song ◽  
Chengmao Wu ◽  
Xiaoping Tian ◽  
Yue Song ◽  
Xiaokang Guo

The fuzzy C-means clustering algorithm (FCM) can be used directly to segment images, but it takes no account of the neighborhood information of the current pixel and offers no robust noise suppression during segmentation. Fuzzy Local Information C-means clustering (FLICM) is a widely used robust segmentation algorithm, which combines spatial information with the membership degrees of adjacent pixels. To further improve the robustness of the FLICM algorithm, non-local information is embedded into it, yielding a fuzzy C-means clustering algorithm with local and non-local information (FLICMLNLI). When calculating the distance from a pixel to a cluster center, FLICMLNLI considers two distances: from the current pixel to the cluster center and from its neighborhood pixels to the cluster center. However, the algorithm gives the same weight to these two different distances, which incorrectly magnifies the importance of neighborhood information in the distance calculation, resulting in unsatisfactory segmentation and loss of image details. To solve this problem, we propose an improved self-learning weighted fuzzy algorithm, which obtains different weights for the distance calculation through continuous iterative self-learning; the distance metric with the self-learned weights is then embedded in the objective function of the fuzzy clustering algorithm to improve the segmentation performance and robustness of the algorithm. Extensive experiments on different types of images show that the algorithm not only suppresses noise but also retains image details, performs better on complex noisy images, and provides better segmentation results than the latest existing fuzzy clustering algorithms.
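The weighting idea can be sketched in a few lines. Below is a schematic FCM membership update in which a convex weight balances the local and non-local distances; the scalar alpha stands in for the self-learned weights and is an assumption, not the authors' update rule.

```python
import numpy as np

def fcm_memberships(d_local, d_nonlocal, alpha, m=2.0):
    """d_local, d_nonlocal: (n_pixels, n_clusters) squared distances.
    alpha: weight in [0, 1], learned iteratively rather than fixed at 0.5."""
    d = alpha * d_local + (1.0 - alpha) * d_nonlocal + 1e-12
    # Standard FCM update: u_ik proportional to d_ik^(-1 / (m - 1)).
    inv = d ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```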


Author(s):  
Weikuan Jia ◽  
Zhonghua Zhang ◽  
Wenjiang Shao ◽  
Ze Ji ◽  
Sujuan Hou

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Young Jae Kim ◽  
Seung Ro Lee ◽  
Ja-Young Choi ◽  
Kwang Gi Kim

Loss of knee cartilage can cause intense pain at the knee epiphysis, and it is one of the most common diseases worldwide. To diagnose this condition, the distance between the femur and tibia is calculated from X-ray images. Accurate segmentation of the femur and tibia is required to assist in this calculation. Several studies have investigated automatic knee segmentation to assist in the calculation process, but the results have been of limited value owing to the complexity of the knee. To address this problem, this study exploits deep learning for robust segmentation that is not affected by the imaging environment. In addition, the Taguchi method is applied to optimize the deep learning results. The deep learning architecture, the optimizer, and the learning rate are chosen as the factors of the Taguchi table to check their impact and interactions. When the Dilated-Resnet architecture is used with the Adam optimizer and a learning rate of 0.001, Dice coefficients of 0.964 and 0.942 are obtained for the femur and tibia, respectively. The implemented procedure and the results of this investigation may help in determining the correct margins of the femur and tibia and can form the basis for developing an automatic diagnosis algorithm for orthopedic diseases.
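For reference, the Dice coefficient used to score the segmentations is straightforward to compute; the factor grid below mirrors the three Taguchi factors named in the abstract, with the alternative levels (U-Net, SGD, 0.01) being assumptions for illustration.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-12)

# Factor levels; a Taguchi orthogonal array would test a balanced subset
# of these combinations rather than the full grid.
factors = {
    "architecture": ["U-Net", "Dilated-Resnet"],
    "optimizer": ["SGD", "Adam"],
    "learning_rate": [0.01, 0.001],
}
```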


Author(s):  
Michail Mamalakis ◽  
Pankaj Garg ◽  
Tom Nelson ◽  
Justin Lee ◽  
Jim M. Wild ◽  
...  
